
Table of Contents

Overview
ALTER
APPLICATION ROLE
ASSEMBLY
ASYMMETRIC KEY
AUTHORIZATION
AVAILABILITY GROUP
BROKER PRIORITY
CERTIFICATE
COLUMN ENCRYPTION KEY
CREDENTIAL
CRYPTOGRAPHIC PROVIDER
DATABASE
DATABASE (Azure SQL Database)
DATABASE (Azure SQL Data Warehouse)
DATABASE (Parallel Data Warehouse)
DATABASE AUDIT SPECIFICATION
DATABASE compatibility level
DATABASE database mirroring
DATABASE ENCRYPTION KEY
DATABASE file and filegroup options
DATABASE HADR
DATABASE SCOPED CREDENTIAL
DATABASE SCOPED CONFIGURATION
DATABASE SET Options
ENDPOINT
EVENT SESSION
EXTERNAL DATA SOURCE
EXTERNAL LIBRARY
EXTERNAL RESOURCE POOL
FULLTEXT CATALOG
FULLTEXT INDEX
FULLTEXT STOPLIST
FUNCTION
INDEX
INDEX (Selective XML Indexes)
LOGIN
MASTER KEY
MESSAGE TYPE
PARTITION FUNCTION
PARTITION SCHEME
PROCEDURE
QUEUE
REMOTE SERVICE BINDING
RESOURCE GOVERNOR
RESOURCE POOL
ROLE
ROUTE
SCHEMA
SEARCH PROPERTY LIST
SECURITY POLICY
SEQUENCE
SERVER AUDIT
SERVER AUDIT SPECIFICATION
SERVER CONFIGURATION
SERVER ROLE
SERVICE
SERVICE MASTER KEY
SYMMETRIC KEY
TABLE
TABLE column_constraint
TABLE column_definition
TABLE computed_column_definition
TABLE index_option
TABLE table_constraint
TRIGGER
USER
VIEW
WORKLOAD GROUP
XML SCHEMA COLLECTION
Backup and restore
BACKUP
BACKUP CERTIFICATE
BACKUP DATABASE (Parallel Data Warehouse)
BACKUP MASTER KEY
BACKUP SERVICE MASTER KEY
RESTORE
RESTORE statements
RESTORE DATABASE (Parallel Data Warehouse)
RESTORE arguments
RESTORE FILELISTONLY
RESTORE HEADERONLY
RESTORE LABELONLY
RESTORE MASTER KEY
RESTORE REWINDONLY
RESTORE VERIFYONLY
BULK INSERT
CREATE
AGGREGATE
APPLICATION ROLE
ASSEMBLY
ASYMMETRIC KEY
AVAILABILITY GROUP
BROKER PRIORITY
CERTIFICATE
COLUMNSTORE INDEX
COLUMN ENCRYPTION KEY
COLUMN MASTER KEY
CONTRACT
CREDENTIAL
CRYPTOGRAPHIC PROVIDER
DATABASE
DATABASE (Azure SQL Database)
DATABASE (Azure SQL Data Warehouse)
DATABASE (Parallel Data Warehouse)
DATABASE AUDIT SPECIFICATION
DATABASE ENCRYPTION KEY
DATABASE SCOPED CREDENTIAL
DEFAULT
ENDPOINT
EVENT NOTIFICATION
EVENT SESSION
EXTERNAL DATA SOURCE
EXTERNAL LIBRARY
EXTERNAL FILE FORMAT
EXTERNAL RESOURCE POOL
EXTERNAL TABLE
EXTERNAL TABLE AS SELECT
FULLTEXT CATALOG
FULLTEXT INDEX
FULLTEXT STOPLIST
FUNCTION
FUNCTION (SQL Data Warehouse)
INDEX
LOGIN
MASTER KEY
MESSAGE TYPE
PARTITION FUNCTION
PARTITION SCHEME
PROCEDURE
QUEUE
REMOTE SERVICE BINDING
REMOTE TABLE AS SELECT (Parallel Data Warehouse)
RESOURCE POOL
ROLE
ROUTE
RULE
SCHEMA
SEARCH PROPERTY LIST
SECURITY POLICY
SELECTIVE XML INDEX
SEQUENCE
SERVER AUDIT
SERVER AUDIT SPECIFICATION
SERVER ROLE
SERVICE
SPATIAL INDEX
STATISTICS
SYMMETRIC KEY
SYNONYM
TABLE
TABLE (Azure SQL Data Warehouse)
TABLE (SQL Graph)
TABLE AS SELECT (Azure SQL Data Warehouse)
TABLE IDENTITY (Property)
TRIGGER
TYPE
USER
VIEW
WORKLOAD GROUP
XML INDEX
XML INDEX (Selective XML Indexes)
XML SCHEMA COLLECTION
Collations
COLLATE clause
SQL Server Collation Name
Windows Collation Name
Collation Precedence
DELETE
DISABLE TRIGGER
DROP
AGGREGATE
APPLICATION ROLE
ASSEMBLY
ASYMMETRIC KEY
AVAILABILITY GROUP
BROKER PRIORITY
CERTIFICATE
COLUMN ENCRYPTION KEY
COLUMN MASTER KEY
CONTRACT
CREDENTIAL
CRYPTOGRAPHIC PROVIDER
DATABASE
DATABASE AUDIT SPECIFICATION
DATABASE ENCRYPTION KEY
DATABASE SCOPED CREDENTIAL
DEFAULT
ENDPOINT
EXTERNAL DATA SOURCE
EXTERNAL FILE FORMAT
EXTERNAL LIBRARY
EXTERNAL RESOURCE POOL
EXTERNAL TABLE
EVENT NOTIFICATION
EVENT SESSION
FULLTEXT CATALOG
FULLTEXT INDEX
FULLTEXT STOPLIST
FUNCTION
INDEX
INDEX (Selective XML Indexes)
LOGIN
MASTER KEY
MESSAGE TYPE
PARTITION FUNCTION
PARTITION SCHEME
PROCEDURE
QUEUE
REMOTE SERVICE BINDING
RESOURCE POOL
ROLE
ROUTE
RULE
SCHEMA
SEARCH PROPERTY LIST
SECURITY POLICY
SEQUENCE
SERVER AUDIT
SERVER AUDIT SPECIFICATION
SERVER ROLE
SERVICE
SIGNATURE
STATISTICS
SYMMETRIC KEY
SYNONYM
TABLE
TRIGGER
TYPE
USER
VIEW
WORKLOAD GROUP
XML SCHEMA COLLECTION
ENABLE TRIGGER
INSERT
INSERT (SQL Graph)
MERGE
RENAME
Permissions
ADD SIGNATURE
CLOSE MASTER KEY
CLOSE SYMMETRIC KEY
DENY
DENY Assembly Permissions
DENY Asymmetric Key Permissions
DENY Availability Group Permissions
DENY Certificate Permissions
DENY Database Permissions
DENY Database Principal Permissions
DENY Database Scoped Credential
DENY Endpoint Permissions
DENY Full-Text Permissions
DENY Object Permissions
DENY Schema Permissions
DENY Search Property List Permissions
DENY Server Permissions
DENY Server Principal Permissions
DENY Service Broker Permissions
DENY Symmetric Key Permissions
DENY System Object Permissions
DENY Type Permissions
DENY XML Schema Collection Permissions
EXECUTE AS
EXECUTE AS Clause
GRANT
GRANT Assembly Permissions
GRANT Asymmetric Key Permissions
GRANT Availability Group Permissions
GRANT Certificate Permissions
GRANT Database Permissions
GRANT Database Principal Permissions
GRANT Database Scoped Credential
GRANT Endpoint Permissions
GRANT Full-Text Permissions
GRANT Object Permissions
GRANT Schema Permissions
GRANT Search Property List Permissions
GRANT Server Permissions
GRANT Server Principal Permissions
GRANT Service Broker Permissions
GRANT Symmetric Key Permissions
GRANT System Object Permissions
GRANT Type Permissions
GRANT XML Schema Collection Permissions
OPEN MASTER KEY
OPEN SYMMETRIC KEY
Permissions: GRANT, DENY, REVOKE (Azure SQL Data Warehouse, Parallel Data Warehouse)
REVERT
REVOKE
REVOKE Assembly Permissions
REVOKE Asymmetric Key Permissions
REVOKE Availability Group Permissions
REVOKE Certificate Permissions
REVOKE Database Permissions
REVOKE Database Principal Permissions
REVOKE Database Scoped Credential
REVOKE Endpoint Permissions
REVOKE Full-Text Permissions
REVOKE Object Permissions
REVOKE Schema Permissions
REVOKE Search Property List Permissions
REVOKE Server Permissions
REVOKE Server Principal Permissions
REVOKE Service Broker Permissions
REVOKE Symmetric Key Permissions
REVOKE System Object Permissions
REVOKE Type Permissions
REVOKE XML Schema Collection Permissions
SETUSER
Service Broker
BEGIN CONVERSATION TIMER
BEGIN DIALOG CONVERSATION
END CONVERSATION
GET CONVERSATION GROUP
GET_TRANSMISSION_STATUS
MOVE CONVERSATION
RECEIVE
SEND
SET
Overview
ANSI_DEFAULTS
ANSI_NULL_DFLT_OFF
ANSI_NULL_DFLT_ON
ANSI_NULLS
ANSI_PADDING
ANSI_WARNINGS
ARITHABORT
ARITHIGNORE
CONCAT_NULL_YIELDS_NULL
CONTEXT_INFO
CURSOR_CLOSE_ON_COMMIT
DATEFIRST
DATEFORMAT
DEADLOCK_PRIORITY
FIPS_FLAGGER
FMTONLY
FORCEPLAN
IDENTITY_INSERT
IMPLICIT_TRANSACTIONS
LANGUAGE
LOCK_TIMEOUT
NOCOUNT
NOEXEC
NUMERIC_ROUNDABORT
OFFSETS
PARSEONLY
QUERY_GOVERNOR_COST_LIMIT
QUOTED_IDENTIFIER
REMOTE_PROC_TRANSACTIONS
ROWCOUNT
SHOWPLAN_ALL
SHOWPLAN_TEXT
SHOWPLAN_XML
STATISTICS IO
STATISTICS PROFILE
STATISTICS TIME
STATISTICS XML
TEXTSIZE
TRANSACTION ISOLATION LEVEL
XACT_ABORT
TRUNCATE TABLE
UPDATE STATISTICS
Transact-SQL statements
5/30/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This reference topic summarizes the categories of statements for use with Transact-SQL (T-SQL). You can find all
of the statements listed in the table of contents.

Backup and restore


The backup and restore statements provide ways to create backups and restore from backups. For more
information, see the Backup and restore overview.

Data Definition Language


Data Definition Language (DDL) statements define data structures. Use these statements to create, alter, or drop
data structures in a database.
ALTER
Collations
CREATE
DROP
DISABLE TRIGGER
ENABLE TRIGGER
RENAME
UPDATE STATISTICS

Data Manipulation Language


Data Manipulation Language (DML) statements affect the information stored in the database. Use these statements
to insert, update, and delete rows in the database; a minimal illustration follows the list below.
BULK INSERT
DELETE
INSERT
UPDATE
MERGE
TRUNCATE TABLE
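
For illustration, a minimal DML sequence against a hypothetical dbo.Orders table:

INSERT INTO dbo.Orders (OrderID, Amount) VALUES (1, 100.00);  -- add a row
UPDATE dbo.Orders SET Amount = 150.00 WHERE OrderID = 1;      -- modify it
DELETE FROM dbo.Orders WHERE OrderID = 1;                     -- remove it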

Permissions statements
Permissions statements determine which users and logins can access data and perform operations. For more
information about authentication and access, see the Security center.

Service Broker statements


Service Broker is a feature that provides native support for messaging and queuing applications. For more
information, see Service Broker.
Session settings
SET statements determine how the current session handles run-time settings. For an overview, see SET statements.
ALTER APPLICATION ROLE (Transact-SQL)
5/3/2018 • 2 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the name, password, or default schema of an application role.
Transact-SQL Syntax Conventions

Syntax
ALTER APPLICATION ROLE application_role_name
WITH <set_item> [ ,...n ]

<set_item> ::=
NAME = new_application_role_name
| PASSWORD = 'password'
| DEFAULT_SCHEMA = schema_name

Arguments
application_role_name
Is the name of the application role to be modified.
NAME =new_application_role_name
Specifies the new name of the application role. This name must not already be used to refer to any principal in the
database.
PASSWORD ='password'
Specifies the password for the application role. password must meet the Windows password policy requirements
of the computer that is running the instance of SQL Server. You should always use strong passwords.
DEFAULT_SCHEMA =schema_name
Specifies the first schema that will be searched by the server when it resolves the names of objects. schema_name
can be a schema that does not exist in the database.

Remarks
If the new application role name already exists in the database, the statement will fail. When the name, password,
or default schema of an application role is changed, the ID associated with the role is not changed.

IMPORTANT
Password expiration policy is not applied to application role passwords. For this reason, take extra care in selecting strong
passwords. Applications that invoke application roles must store their passwords.

Application roles are visible in the sys.database_principals catalog view.
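For instance, a quick way to list them (in sys.database_principals, type 'A' denotes an application role):

SELECT name, default_schema_name, create_date
FROM sys.database_principals
WHERE type = 'A';  -- application roles only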


Caution

In SQL Server 2005, the behavior of schemas changed from the behavior in earlier versions of SQL Server. Code
that assumes that schemas are equivalent to database users may not return correct results. Old catalog views,
including sysobjects, should not be used in a database in which any of the following DDL statements has ever been
used: CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER,
CREATE ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In a database in which any of these statements has ever been used, you must use the new
catalog views. The new catalog views take into account the separation of principals and schemas that was introduced
in SQL Server 2005. For more information about catalog views, see Catalog Views (Transact-SQL).

Permissions
Requires ALTER ANY APPLICATION ROLE permission on the database. To change the default schema, the user
also needs ALTER permission on the application role. An application role can alter its own default schema, but not
its name or password.

Examples
A. Changing the name of application role
The following example changes the name of the application role weekly_receipts to receipts_ledger .

USE AdventureWorks2012;
CREATE APPLICATION ROLE weekly_receipts
WITH PASSWORD = '987Gbv8$76sPYY5m23' ,
DEFAULT_SCHEMA = Sales;
GO
ALTER APPLICATION ROLE weekly_receipts
WITH NAME = receipts_ledger;
GO

B. Changing the password of application role


The following example changes the password of the application role receipts_ledger .

ALTER APPLICATION ROLE receipts_ledger
WITH PASSWORD = '897yUUbv867y$200nk2i';
GO

C. Changing the name, password, and default schema


The following example changes the name, password, and default schema of the application role receipts_ledger
all at the same time.

ALTER APPLICATION ROLE receipts_ledger
WITH NAME = weekly_ledger,
PASSWORD = '897yUUbv77bsrEE00nk2i',
DEFAULT_SCHEMA = Production;
GO

See Also
Application Roles
CREATE APPLICATION ROLE (Transact-SQL)
DROP APPLICATION ROLE (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER ASSEMBLY (Transact-SQL)
5/4/2018 • 8 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters an assembly by modifying the SQL Server catalog properties of an assembly. ALTER ASSEMBLY refreshes
it to the latest copy of the Microsoft .NET Framework modules that hold its implementation and adds or removes
files associated with it. Assemblies are created by using CREATE ASSEMBLY.

WARNING
CLR uses Code Access Security (CAS) in the .NET Framework, which is no longer supported as a security boundary. A CLR
assembly created with PERMISSION_SET = SAFE may be able to access external system resources, call unmanaged code, and
acquire sysadmin privileges. Beginning with SQL Server 2017 (14.x), an sp_configure option called clr strict security
is introduced to enhance the security of CLR assemblies. clr strict security is enabled by default, and treats SAFE and
EXTERNAL_ACCESS assemblies as if they were marked UNSAFE . The clr strict security option can be disabled for
backward compatibility, but this is not recommended. Microsoft recommends that all assemblies be signed by a certificate or
asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY permission in the master database. For
more information, see CLR strict security.

Transact-SQL Syntax Conventions

Syntax
ALTER ASSEMBLY assembly_name
[ FROM <client_assembly_specifier> | <assembly_bits> ]
[ WITH <assembly_option> [ ,...n ] ]
[ DROP FILE { file_name [ ,...n ] | ALL } ]
[ ADD FILE FROM
{
client_file_specifier [ AS file_name ]
| file_bits AS file_name
} [,...n ]
] [ ; ]
<client_assembly_specifier> :: =
'\\computer_name\share-name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'

<assembly_bits> :: =
{ varbinary_literal | varbinary_expression }

<assembly_option> :: =
PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE }
| VISIBILITY = { ON | OFF }
| UNCHECKED DATA

Arguments
assembly_name
Is the name of the assembly you want to modify. assembly_name must already exist in the database.
FROM <client_assembly_specifier> | <assembly_bits>
Updates an assembly to the latest copy of the .NET Framework modules that hold its implementation. This option
can only be used if there are no associated files with the specified assembly.
<client_assembly_specifier> specifies the network or local location where the assembly being refreshed is located.
The network location includes the computer name, the share name and a path within that share.
manifest_file_name specifies the name of the file that contains the manifest of the assembly.
<assembly_bits> is the binary value for the assembly.
Separate ALTER ASSEMBLY statements must be issued for any dependent assemblies that also require updating.
PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE }

IMPORTANT
The PERMISSION_SET option is affected by the clr strict security option, described in the opening warning. When
clr strict security is enabled, all assemblies are treated as UNSAFE .
Specifies the .NET Framework code access permission set property of the assembly. For more information about this
property, see CREATE ASSEMBLY (Transact-SQL).

NOTE
The EXTERNAL_ACCESS and UNSAFE options are not available in a contained database.

VISIBILITY = { ON | OFF }
Indicates whether the assembly is visible for creating common language runtime (CLR ) functions, stored
procedures, triggers, user-defined types, and user-defined aggregate functions against it. If set to OFF, the
assembly is intended to be called only by other assemblies. If there are existing CLR database objects already
created against the assembly, the visibility of the assembly cannot be changed. Any assemblies referenced by
assembly_name are uploaded as not visible by default.
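For example, a minimal sketch that hides the ComplexNumber assembly from direct use (assuming no CLR database objects have been created against it):

ALTER ASSEMBLY ComplexNumber WITH VISIBILITY = OFF;  -- callable only from other assemblies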
UNCHECKED DATA
By default, ALTER ASSEMBLY fails if it must verify the consistency of individual table rows. This option allows
postponing the checks until a later time by using DBCC CHECKTABLE. If specified, SQL Server executes the
ALTER ASSEMBLY statement even if there are tables in the database that contain the following:
Persisted computed columns that either directly or indirectly reference methods in the assembly, through
Transact-SQL functions or methods.
CHECK constraints that directly or indirectly reference methods in the assembly.
Columns of a CLR user-defined type that depend on the assembly, and the type implements a
UserDefined (non-Native) serialization format.
Columns of a CLR user-defined type that reference views created by using WITH SCHEMABINDING.
If any CHECK constraints are present, they are disabled and marked untrusted. Any tables containing
columns depending on the assembly are marked as containing unchecked data until those tables are
explicitly checked.
Only members of the db_owner and db_ddladmin fixed database roles can specify this option.
Requires the ALTER ANY SCHEMA permission to specify this option.
For more information, see Implementing Assemblies.
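As a sketch, the UNCHECKED DATA option postpones the row-level checks, which can then be run later with DBCC CHECKTABLE (the path and table name here are hypothetical):

ALTER ASSEMBLY ComplexNumber
FROM 'C:\Assemblies\ComplexNumber.dll'  -- hypothetical path
WITH UNCHECKED DATA;
GO
-- Later, validate each table that was marked as containing unchecked data:
DBCC CHECKTABLE ('Sales.OrderTotals');  -- hypothetical table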
[ DROP FILE { file_name[ ,...n] | ALL } ]
Removes the file name associated with the assembly, or all files associated with the assembly, from the
database. If used with ADD FILE that follows, DROP FILE executes first. This lets you to replace a file with
the same file name.

NOTE
This option is not available in a contained database.
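
For example, files added to the MyClass assembly (see example B later in this topic) could be removed individually or all at once:

ALTER ASSEMBLY MyClass DROP FILE 'Class1.cs';  -- remove one file
ALTER ASSEMBLY MyClass DROP FILE ALL;          -- or remove every associated file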

[ ADD FILE FROM { client_file_specifier [ AS file_name ] | file_bits AS file_name } ]


Uploads a file to be associated with the assembly, such as source code, debug files, or other related information,
into the server and makes it visible in the sys.assembly_files catalog view. client_file_specifier specifies the location
from which to upload the file. file_bits can be used instead to specify the list of binary values that make up the file.
file_name specifies the name under which the file should be stored in the instance of SQL Server. file_name must
be specified if file_bits is specified, and is optional if client_file_specifier is specified. If file_name is not specified, the
file_name part of client_file_specifier is used as file_name.

NOTE
This option is not available in a contained database.

Remarks
ALTER ASSEMBLY does not disrupt currently running sessions that are running code in the assembly being
modified. Current sessions complete execution by using the unaltered bits of the assembly.
If the FROM clause is specified, ALTER ASSEMBLY updates the assembly with respect to the latest copies of the
modules provided. Because there might be CLR functions, stored procedures, triggers, data types, and user-
defined aggregate functions in the instance of SQL Server that are already defined against the assembly, the
ALTER ASSEMBLY statement rebinds them to the latest implementation of the assembly. To accomplish this
rebinding, the methods that map to CLR functions, stored procedures, and triggers must still exist in the modified
assembly with the same signatures. The classes that implement CLR user-defined types and user-defined
aggregate functions must still satisfy the requirements for being a user-defined type or aggregate.
Cau t i on

If WITH UNCHECKED DATA is not specified, SQL Server tries to prevent ALTER ASSEMBLY from executing if
the new assembly version affects existing data in tables, indexes, or other persistent sites. However, SQL Server
does not guarantee that computed columns, indexes, indexed views or expressions will be consistent with the
underlying routines and types when the CLR assembly is updated. Use caution when you execute ALTER
ASSEMBLY to make sure that there is not a mismatch between the result of an expression and a value based on
that expression stored in the assembly.
ALTER ASSEMBLY changes the assembly version. The culture and public key token of the assembly remain the
same.
The ALTER ASSEMBLY statement cannot be used to change the following:
The signatures of CLR functions, aggregate functions, stored procedures, and triggers in an instance of SQL
Server that reference the assembly. ALTER ASSEMBLY fails when SQL Server cannot rebind .NET
Framework database objects in SQL Server with the new version of the assembly.
The signatures of methods in the assembly that are called from other assemblies.
The list of assemblies that depend on the assembly, as referenced in the DependentList property of the
assembly.
The indexability of a method, unless there are no indexes or persisted computed columns depending on that
method, either directly or indirectly.
The FillRow method name attribute for CLR table-valued functions.
The Accumulate and Terminate method signature for user-defined aggregates.
System assemblies.
Assembly ownership. Use ALTER AUTHORIZATION (Transact-SQL ) instead.
Additionally, for assemblies that implement user-defined types, ALTER ASSEMBLY can be used for making
only the following changes:
Modifying public methods of the user-defined type class, as long as signatures or attributes are not
changed.
Adding new public methods.
Modifying private methods in any way.
Fields contained within a native-serialized user-defined type, including data members or base classes,
cannot be changed by using ALTER ASSEMBLY. All other changes are unsupported.
If ADD FILE FROM is not specified, ALTER ASSEMBLY drops any files associated with the assembly.
If ALTER ASSEMBLY is executed without the UNCHECKED DATA clause, checks are performed to verify that
the new assembly version does not affect existing data in tables. Depending on the amount of data that
needs to be checked, this may affect performance.

Permissions
Requires ALTER permission on the assembly. Additional requirements are as follows:
To alter an assembly whose existing permission set is EXTERNAL_ACCESS requires EXTERNAL ACCESS
ASSEMBLY permission on the server.
To alter an assembly whose existing permission set is UNSAFE requires UNSAFE ASSEMBLY permission
on the server.
To change the permission set of an assembly to EXTERNAL_ACCESS requires EXTERNAL ACCESS
ASSEMBLY permission on the server.
To change the permission set of an assembly to UNSAFE requires UNSAFE ASSEMBLY permission on
the server.
Specifying WITH UNCHECKED DATA requires ALTER ANY SCHEMA permission.
Permissions with CLR strict security
The following permissions are required to alter a CLR assembly when CLR strict security is enabled:
The user must have the ALTER ASSEMBLY permission.
And one of the following conditions must also be true:
The assembly is signed with a certificate or asymmetric key that has a corresponding login with the
UNSAFE ASSEMBLY permission on the server. Signing the assembly is recommended.
The database has the TRUSTWORTHY property set to ON , and the database is owned by a login that has the
UNSAFE ASSEMBLY permission on the server. This option is not recommended.

For more information about assembly permission sets, see Designing Assemblies.

Examples
A. Refreshing an assembly
The following example updates assembly ComplexNumber to the latest copy of the .NET Framework modules that
hold its implementation.

NOTE
Assembly ComplexNumber can be created by running the UserDefinedDataType sample scripts. For more information, see
User Defined Type.

ALTER ASSEMBLY ComplexNumber
FROM 'C:\Program Files\Microsoft SQL Server\130\Tools\Samples\1033\Engine\Programmability\CLR\UserDefinedDataType\CS\ComplexNumber\obj\Debug\ComplexNumber.dll'

B. Adding a file to associate with an assembly


The following example uploads the source code file Class1.cs to be associated with assembly MyClass . This
example assumes assembly MyClass is already created in the database.

ALTER ASSEMBLY MyClass
ADD FILE FROM 'C:\MyClassProject\Class1.cs';

C. Changing the permissions of an assembly


The following example changes the permission set of assembly ComplexNumber from SAFE to EXTERNAL_ACCESS.

ALTER ASSEMBLY ComplexNumber WITH PERMISSION_SET = EXTERNAL_ACCESS;

See Also
CREATE ASSEMBLY (Transact-SQL)
DROP ASSEMBLY (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER ASYMMETRIC KEY (Transact-SQL)
5/3/2018 • 2 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of an asymmetric key.
Transact-SQL Syntax Conventions

Syntax
ALTER ASYMMETRIC KEY Asym_Key_Name <alter_option>

<alter_option> ::=
<password_change_option>
| REMOVE PRIVATE KEY

<password_change_option> ::=
WITH PRIVATE KEY ( <password_option> [ , <password_option> ] )

<password_option> ::=
ENCRYPTION BY PASSWORD = 'strongPassword'
| DECRYPTION BY PASSWORD = 'oldPassword'

Arguments
Asym_Key_Name
Is the name by which the asymmetric key is known in the database.
REMOVE PRIVATE KEY
Removes the private key from the asymmetric key. The public key is not removed.
WITH PRIVATE KEY
Changes the protection of the private key.
ENCRYPTION BY PASSWORD = 'strongPassword'
Specifies a new password for protecting the private key. password must meet the Windows password policy
requirements of the computer that is running the instance of SQL Server. If this option is omitted, the private key
will be encrypted by the database master key.
DECRYPTION BY PASSWORD = 'oldPassword'
Specifies the old password, with which the private key is currently protected. Not required if the private key is
encrypted with the database master key.

Remarks
If there is no database master key, the ENCRYPTION BY PASSWORD option is required, and the operation will fail
if no password is supplied. For information about how to create a database master key, see CREATE MASTER KEY
(Transact-SQL).
You can use ALTER ASYMMETRIC KEY to change the protection of the private key by specifying PRIVATE KEY
options as shown in the following table.
CHANGE PROTECTION FROM           ENCRYPTION BY PASSWORD    DECRYPTION BY PASSWORD

Old password to new password     Required                  Required
Password to master key           Omit                      Required
Master key to password           Required                  Omit

The database master key must be opened before it can be used to protect a private key. For more information, see
OPEN MASTER KEY (Transact-SQL).
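For instance, per the table above, moving protection from the master key back to a password requires only the ENCRYPTION BY PASSWORD clause (a sketch, assuming the database master key is open):

ALTER ASYMMETRIC KEY PacificSales09
WITH PRIVATE KEY (ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>');  -- DECRYPTION BY PASSWORD is omitted
GO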
To change the ownership of an asymmetric key, use ALTER AUTHORIZATION.

Permissions
Requires CONTROL permission on the asymmetric key if the private key is being removed.

Examples
A. Changing the password of the private key
The following example changes the password used to protect the private key of asymmetric key PacificSales09 .
The new password will be <enterStrongPasswordHere> .

ALTER ASYMMETRIC KEY PacificSales09
WITH PRIVATE KEY (
DECRYPTION BY PASSWORD = '<oldPassword>',
ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>');
GO

B. Removing the private key from an asymmetric key


The following example removes the private key from PacificSales19 , leaving only the public key.

ALTER ASYMMETRIC KEY PacificSales19 REMOVE PRIVATE KEY;
GO

C. Removing password protection from a private key


The following example removes the password protection from a private key and protects it with the database
master key.

-- OPEN MASTER KEY requires the DECRYPTION BY PASSWORD clause (placeholder password shown):
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<masterKeyPassword>';
ALTER ASYMMETRIC KEY PacificSales09 WITH PRIVATE KEY (
DECRYPTION BY PASSWORD = '<enterStrongPasswordHere>' );
GO

See Also
CREATE ASYMMETRIC KEY (Transact-SQL)
DROP ASYMMETRIC KEY (Transact-SQL)
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
CREATE MASTER KEY (Transact-SQL)
OPEN MASTER KEY (Transact-SQL)
Extensible Key Management (EKM)
ALTER AUTHORIZATION (Transact-SQL)
5/3/2018 • 11 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the ownership of a securable.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server
ALTER AUTHORIZATION
ON [ <class_type>:: ] entity_name
TO { principal_name | SCHEMA OWNER }
[;]

<class_type> ::=
{
OBJECT | ASSEMBLY | ASYMMETRIC KEY | AVAILABILITY GROUP | CERTIFICATE
| CONTRACT | TYPE | DATABASE | ENDPOINT | FULLTEXT CATALOG
| FULLTEXT STOPLIST | MESSAGE TYPE | REMOTE SERVICE BINDING
| ROLE | ROUTE | SCHEMA | SEARCH PROPERTY LIST | SERVER ROLE
| SERVICE | SYMMETRIC KEY | XML SCHEMA COLLECTION
}

-- Syntax for SQL Database

ALTER AUTHORIZATION
ON [ <class_type>:: ] entity_name
TO { principal_name | SCHEMA OWNER }
[;]

<class_type> ::=
{
OBJECT | ASSEMBLY | ASYMMETRIC KEY | CERTIFICATE
| TYPE | DATABASE | FULLTEXT CATALOG
| FULLTEXT STOPLIST
| ROLE | SCHEMA | SEARCH PROPERTY LIST
| SYMMETRIC KEY | XML SCHEMA COLLECTION
}
-- Syntax for Azure SQL Data Warehouse

ALTER AUTHORIZATION ON
[ <class_type> :: ] <entity_name>
TO { principal_name | SCHEMA OWNER }
[;]

<class_type> ::= {
SCHEMA
| OBJECT
}

<entity_name> ::=
{
schema_name
| [ schema_name. ] object_name
}

-- Syntax for Parallel Data Warehouse

ALTER AUTHORIZATION ON
[ <class_type> :: ] <entity_name>
TO { principal_name | SCHEMA OWNER }
[;]

<class_type> ::= {
DATABASE
| SCHEMA
| OBJECT
}

<entity_name> ::=
{
database_name
| schema_name
| [ schema_name. ] object_name
}

Arguments
<class_type>
Is the securable class of the entity for which the owner is being changed. OBJECT is the default.

OBJECT                    APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse.
ASSEMBLY                  APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
ASYMMETRIC KEY            APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
AVAILABILITY GROUP        APPLIES TO: SQL Server 2012 through SQL Server 2017.
CERTIFICATE               APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
CONTRACT                  APPLIES TO: SQL Server 2008 through SQL Server 2017.
DATABASE                  APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database. For more information, see the ALTER AUTHORIZATION for databases section below.
ENDPOINT                  APPLIES TO: SQL Server 2008 through SQL Server 2017.
FULLTEXT CATALOG          APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
FULLTEXT STOPLIST         APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
MESSAGE TYPE              APPLIES TO: SQL Server 2008 through SQL Server 2017.
REMOTE SERVICE BINDING    APPLIES TO: SQL Server 2008 through SQL Server 2017.
ROLE                      APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
ROUTE                     APPLIES TO: SQL Server 2008 through SQL Server 2017.
SCHEMA                    APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse.
SEARCH PROPERTY LIST      APPLIES TO: SQL Server 2012 (11.x) through SQL Server 2017, Azure SQL Database.
SERVER ROLE               APPLIES TO: SQL Server 2008 through SQL Server 2017.
SERVICE                   APPLIES TO: SQL Server 2008 through SQL Server 2017.
SYMMETRIC KEY             APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
TYPE                      APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
XML SCHEMA COLLECTION     APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.

entity_name
Is the name of the entity.
principal_name | SCHEMA OWNER
Name of the security principal that will own the entity. Database objects must be owned by a database principal:
a database user or role. Server objects (such as databases) must be owned by a server principal (a login). Specify
SCHEMA OWNER as the principal_name to indicate that the object should be owned by the principal that
owns the schema of the object.

Remarks
ALTER AUTHORIZATION can be used to change the ownership of any entity that has an owner. Ownership of
database-contained entities can be transferred to any database-level principal. Ownership of server-level entities
can be transferred only to server-level principals.

IMPORTANT
Beginning with SQL Server 2005, a user can own an OBJECT or TYPE that is contained by a schema owned by another
database user. This is a change of behavior from earlier versions of SQL Server. For more information, see
OBJECTPROPERTY (Transact-SQL) and TYPEPROPERTY (Transact-SQL).

Ownership of the following schema-contained entities of type "object" can be transferred: tables, views,
functions, procedures, queues, and synonyms.
Ownership of the following entities cannot be transferred: linked servers, statistics, constraints, rules, defaults,
triggers, Service Broker queues, credentials, partition functions, partition schemes, database master keys, service
master key, and event notifications.
Ownership of members of the following securable classes cannot be transferred: server, login, user, application
role, and column.
The SCHEMA OWNER option is only valid when you are transferring ownership of a schema-contained entity.
SCHEMA OWNER will transfer ownership of the entity to the owner of the schema in which it resides. Only
entities of class OBJECT, TYPE, or XML SCHEMA COLLECTION are schema-contained.
If the target entity is not a database and the entity is being transferred to a new owner, all permissions on the
target will be dropped.
Caution

In SQL Server 2005, the behavior of schemas changed from the behavior in earlier versions of SQL Server.
Code that assumes that schemas are equivalent to database users may not return correct results. Old catalog
views, including sysobjects, should not be used in a database in which any of the following DDL statements has
ever been used: CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP
USER, CREATE ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE,
ALTER AUTHORIZATION. In a database in which any of these statements has ever been used, you must use the
new catalog views. The new catalog views take into account the separation of principals and schemas that was
introduced in SQL Server 2005. For more information about catalog views, see Catalog Views (Transact-SQL).
Also, note the following:

IMPORTANT
The only reliable way to find the owner of an object is to query the sys.objects catalog view. The only reliable way to find
the owner of a type is to use the TYPEPROPERTY function.

Special Cases and Conditions


The following table lists special cases, exceptions, and conditions that apply to altering authorization.

CLASS                                 CONDITION

OBJECT                                Cannot change ownership of triggers, constraints, rules, defaults, statistics, system objects, queues, indexed views, or tables with indexed views.

SCHEMA                                When ownership is transferred, permissions on schema-contained objects that do not have explicit owners will be dropped. Cannot change the owner of sys, dbo, or information_schema.

TYPE                                  Cannot change ownership of a TYPE that belongs to sys or information_schema.

CONTRACT, MESSAGE TYPE, or SERVICE    Cannot change ownership of system entities.

SYMMETRIC KEY                         Cannot change ownership of global temporary keys.

CERTIFICATE or ASYMMETRIC KEY         Cannot transfer ownership of these entities to a role or group.

ENDPOINT                              The principal must be a login.

ALTER AUTHORIZATION for databases


APPLIES TO: SQL Server 2017, Azure SQL Database.
For SQL Server:
Requirements for the new owner:
The new owner principal must be one of the following:
A SQL Server authentication login.
A Windows authentication login representing a Windows user (not a group).
A Windows user that authenticates through a Windows authentication login representing a Windows group.
Requirements for the person executing the ALTER AUTHORIZATION statement:
If you are not a member of the sysadmin fixed server role, you must have at least TAKE OWNERSHIP
permission on the database, and must have IMPERSONATE permission on the new owner login.
For Azure SQL Database:
Requirements for the new owner:
The new owner principal must be one of the following:
A SQL Server authentication login.
A federated user (not a group) present in Azure AD.
A managed user (not a group) or an application present in Azure AD.

NOTE
If the new owner is an Azure Active Directory user, it cannot exist as a user in the database where the new owner will
become the new DBO. Such an Azure AD user must first be removed from the database before executing the ALTER
AUTHORIZATION statement changing the database ownership to the new user. For more information about configuring
Azure Active Directory users with SQL Database, see Connecting to SQL Database or SQL Data Warehouse By Using
Azure Active Directory Authentication.

Requirements for the person executing the ALTER AUTHORIZATION statement:


You must connect to the target database to change the owner of that database.
The following types of accounts can change the owner of a database.
The service-level principal login. (The SQL Azure administrator provisioned when the logical server was
created.)
The Azure Active Directory administrator for the Azure SQL Server.
The current owner of the database.
The following table summarizes the requirements:

EXECUTOR TARGET RESULT

SQL Server Authentication login SQL Server Authentication login Success

SQL Server Authentication login Azure AD user Fail

Azure AD user SQL Server Authentication login Success

Azure AD user Azure AD user Success

To verify an Azure AD owner of the database execute the following Transact-SQL command in a user database
(in this example testdb ).

SELECT CAST(owner_sid AS uniqueidentifier) AS Owner_SID
FROM sys.databases
WHERE name = 'testdb';

The output will be an identifier (such as 6D8B81F6-7C79-444C-8858-4AF896C03C67) which corresponds to the
Azure AD ObjectID assigned to richel@cqclinic.onmicrosoft.com.
When a SQL Server authentication login user is the database owner, execute the following statement in the
master database to verify the database owner:

SELECT d.name, d.owner_sid, sl.name
FROM sys.databases AS d
JOIN sys.sql_logins AS sl
    ON d.owner_sid = sl.sid;

Best practice
Instead of using Azure AD users as individual owners of the database, use an Azure AD group as a member of
the db_owner fixed database role. The following steps show how to configure a disabled login as the database
owner, and make an Azure Active Directory group ( mydbogroup ) a member of the db_owner role.
1. Login to SQL Server as Azure AD admin, and change the owner of the database to a disabled SQL Server
authentication login. For example, from the user database execute:
ALTER AUTHORIZATION ON database::testdb TO DisabledLogin;
2. Create an Azure AD group that should own the database and add it as a user to the user database. For
example:
CREATE USER [mydbogroup] FROM EXTERNAL PROVIDER;
3. In the user database add the user representing the Azure AD group, to the db_owner fixed database role. For
example:
ALTER ROLE db_owner ADD MEMBER mydbogroup;

Now the mydbogroup members can centrally manage the database as members of the db_owner role.
When members of this group are removed from the Azure AD group, they automatically lose the dbo
permissions for this database.
Similarly, if new members are added to the mydbogroup Azure AD group, they automatically gain dbo access
for this database.
To check if a specific user has the effective dbo permission, have the user execute the following statement:

SELECT IS_MEMBER ('db_owner');

A return value of 1 indicates the user is a member of the role.

Permissions
Requires TAKE OWNERSHIP permission on the entity. If the new owner is not the user that is executing this
statement, also requires either (1) IMPERSONATE permission on the new owner if it is a user or login; or (2) if
the new owner is a role, membership in the role, or ALTER permission on the role; or (3) if the new owner is an
application role, ALTER permission on the application role.

Examples
A. Transfer ownership of a table
The following example transfers ownership of table Sprockets to user MichikoOsada . The table is located inside
schema Parts .

ALTER AUTHORIZATION ON OBJECT::Parts.Sprockets TO MichikoOsada;
GO

The query could also look like the following:

ALTER AUTHORIZATION ON Parts.Sprockets TO MichikoOsada;
GO

If the object's schema is not included as part of the statement, the Database Engine will look for the object in the
user's default schema. For example:

ALTER AUTHORIZATION ON Sprockets TO MichikoOsada;
ALTER AUTHORIZATION ON OBJECT::Sprockets TO MichikoOsada;

B. Transfer ownership of a view to the schema owner


The following example transfers ownership of the view ProductionView06 to the owner of the schema that contains
it. The view is located inside schema Production .

ALTER AUTHORIZATION ON OBJECT::Production.ProductionView06 TO SCHEMA OWNER;
GO

C. Transfer ownership of a schema to a user


The following example transfers ownership of the schema SeattleProduction11 to user SandraAlayo .

ALTER AUTHORIZATION ON SCHEMA::SeattleProduction11 TO SandraAlayo;
GO
D. Transfer ownership of an endpoint to a SQL Server login
The following example transfers ownership of endpoint CantabSalesServer1 to JaePak . Because the endpoint is
a server-level securable, the endpoint can only be transferred to a server-level principal.
Applies to: SQL Server 2008 through SQL Server 2017.

ALTER AUTHORIZATION ON ENDPOINT::CantabSalesServer1 TO JaePak;
GO

E. Changing the owner of a table


Each of the following examples changes the owner of the Sprockets table in the Parts database to the
database user MichikoOsada .

ALTER AUTHORIZATION ON Sprockets TO MichikoOsada;
ALTER AUTHORIZATION ON dbo.Sprockets TO MichikoOsada;
ALTER AUTHORIZATION ON OBJECT::Sprockets TO MichikoOsada;
ALTER AUTHORIZATION ON OBJECT::dbo.Sprockets TO MichikoOsada;

F. Changing the owner of a database


APPLIES TO: SQL Server 2008 through SQL Server 2017, Parallel Data Warehouse, SQL Database.
The following example changes the owner of the Parts database to the login MichikoOsada .

ALTER AUTHORIZATION ON DATABASE::Parts TO MichikoOsada;

G. Changing the owner of a SQL Database to an Azure AD User


In the following example, an Azure Active Directory administrator for SQL Server in an organization with an
active directory named cqclinic.onmicrosoft.com can change the current ownership of a database targetDB
and make the AAD user richel@cqclinic.onmicrosoft.com the new database owner using the following
command:

ALTER AUTHORIZATION ON database::targetDB TO [richel@cqclinic.onmicrosoft.com];

Note that for Azure AD users the brackets around the user name must be used.

See Also
OBJECTPROPERTY (Transact-SQL)
TYPEPROPERTY (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER AVAILABILITY GROUP (Transact-SQL)
5/30/2018 • 29 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters an existing Always On availability group in SQL Server. Most ALTER AVAILABILITY GROUP arguments
are supported only on the current primary replica. However, the JOIN, FAILOVER, and
FORCE_FAILOVER_ALLOW_DATA_LOSS arguments are supported only on secondary replicas.
Transact-SQL Syntax Conventions

Syntax
ALTER AVAILABILITY GROUP group_name
{
SET ( <set_option_spec> )
| ADD DATABASE database_name
| REMOVE DATABASE database_name
| ADD REPLICA ON <add_replica_spec>
| MODIFY REPLICA ON <modify_replica_spec>
| REMOVE REPLICA ON <server_instance>
| JOIN
| JOIN AVAILABILITY GROUP ON <add_availability_group_spec> [ ,...2 ]
| MODIFY AVAILABILITY GROUP ON <modify_availability_group_spec> [ ,...2 ]
| GRANT CREATE ANY DATABASE
| DENY CREATE ANY DATABASE
| FAILOVER
| FORCE_FAILOVER_ALLOW_DATA_LOSS
| ADD LISTENER 'dns_name' ( <add_listener_option> )
| MODIFY LISTENER 'dns_name' ( <modify_listener_option> )
| RESTART LISTENER 'dns_name'
| REMOVE LISTENER 'dns_name'
| OFFLINE
}
[ ; ]

<set_option_spec> ::=
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY | SECONDARY | NONE }
| FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
| HEALTH_CHECK_TIMEOUT = milliseconds
| DB_FAILOVER = { ON | OFF }
| REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = { integer }

<server_instance> ::=
{ 'system_name[\instance_name]' | 'FCI_network_name[\instance_name]' }

<add_replica_spec>::=
<server_instance> WITH
(
ENDPOINT_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT | CONFIGURATION_ONLY },
FAILOVER_MODE = { AUTOMATIC | MANUAL }
[ , <add_replica_option> [ ,...n ] ]
)

<add_replica_option>::=
SEEDING_MODE = { AUTOMATIC | MANUAL }
| BACKUP_PRIORITY = n
| SECONDARY_ROLE ( {
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
| READ_ONLY_ROUTING_URL = 'TCP://system-address:port'
} )
| PRIMARY_ROLE ( {
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
| READ_ONLY_ROUTING_LIST = { ( '<server_instance>' [ ,...n ] ) | NONE }
} )
| SESSION_TIMEOUT = seconds

<modify_replica_spec>::=
<server_instance> WITH
(
ENDPOINT_URL = 'TCP://system-address:port'
| AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT }
| FAILOVER_MODE = { AUTOMATIC | MANUAL }
| SEEDING_MODE = { AUTOMATIC | MANUAL }
| BACKUP_PRIORITY = n
| SECONDARY_ROLE ( {
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
| READ_ONLY_ROUTING_URL = 'TCP://system-address:port'
} )
| PRIMARY_ROLE ( {
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
| READ_ONLY_ROUTING_LIST = { ( '<server_instance>' [ ,...n ] ) | NONE }
} )
| SESSION_TIMEOUT = seconds
)

<add_availability_group_spec>::=
<ag_name> WITH
(
LISTENER_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT },
FAILOVER_MODE = MANUAL,
SEEDING_MODE = { AUTOMATIC | MANUAL }
)

<modify_availability_group_spec>::=
<ag_name> WITH
(
LISTENER_URL = 'TCP://system-address:port'
| AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT }
| SEEDING_MODE = { AUTOMATIC | MANUAL }
)

<add_listener_option> ::=
{
WITH DHCP [ ON ( <network_subnet_option> ) ]
| WITH IP ( { ( <ip_address_option> ) } [ , ...n ] ) [ , PORT = listener_port ]
}

<network_subnet_option> ::=
'four_part_ipv4_address', 'four_part_ipv4_mask'

<ip_address_option> ::=
{
'four_part_ipv4_address', 'four_part_ipv4_mask'
| 'ipv6_address'
}

<modify_listener_option>::=
{
ADD IP ( <ip_address_option> )
| PORT = listener_port
}
Arguments
group_name
Specifies the name of the new availability group. group_name must be a valid SQL Server identifier, and it must
be unique across all availability groups in the WSFC cluster.
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY | SECONDARY | NONE }
Specifies a preference about how a backup job should evaluate the primary replica when choosing where to
perform backups. You can script a given backup job to take the automated backup preference into account. It is
important to understand that the preference is not enforced by SQL Server, so it has no impact on ad hoc
backups.
Supported only on the primary replica.
The values are as follows:
PRIMARY
Specifies that the backups should always occur on the primary replica. This option is useful if you need backup
features, such as creating differential backups, that are not supported when backup is run on a secondary replica.

IMPORTANT
If you plan to use log shipping to prepare any secondary databases for an availability group, set the automated backup
preference to Primary until all the secondary databases have been prepared and joined to the availability group.

SECONDARY_ONLY
Specifies that backups should never be performed on the primary replica. If the primary replica is the only replica
online, the backup should not occur.
SECONDARY
Specifies that backups should occur on a secondary replica except when the primary replica is the only replica
online. In that case, the backup should occur on the primary replica. This is the default behavior.
NONE
Specifies that you prefer that backup jobs ignore the role of the availability replicas when choosing the replica to
perform backups. Note backup jobs might evaluate other factors such as backup priority of each availability
replica in combination with its operational state and connected state.

IMPORTANT
There is no enforcement of the AUTOMATED_BACKUP_PREFERENCE setting. The interpretation of this preference depends
on the logic, if any, that you script into backup jobs for the databases in a given availability group. The automated backup
preference setting has no impact on ad hoc backups. For more information, see Configure Backup on Availability Replicas
(SQL Server).

NOTE
To view the automated backup preference of an existing availability group, select the automated_backup_preference or
automated_backup_preference_desc column of the sys.availability_groups catalog view. Additionally,
sys.fn_hadr_backup_is_preferred_replica (Transact-SQL) can be used to determine the preferred backup replica. This function
will always return 1 for at least one of the replicas, even when AUTOMATED_BACKUP_PREFERENCE = NONE .
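
For example, a backup job can honor the preference by checking sys.fn_hadr_backup_is_preferred_replica before running (the database name and backup path are hypothetical):

IF sys.fn_hadr_backup_is_preferred_replica(N'AccountsDb1') = 1
BEGIN
    BACKUP DATABASE AccountsDb1
    TO DISK = N'\\backupshare\AccountsDb1.bak';  -- hypothetical share
END;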

FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
Specifies what failure conditions will trigger an automatic failover for this availability group.
FAILURE_CONDITION_LEVEL is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT).
Furthermore, failure conditions can trigger an automatic failover only if both the primary and secondary replicas
are configured for automatic failover mode (FAILOVER_MODE = AUTOMATIC) and the secondary replica is
currently synchronized with the primary replica.
Supported only on the primary replica.
The failure-condition levels (1–5) range from the least restrictive, level 1, to the most restrictive, level 5. A given
condition level encompasses all of the less restrictive levels. Thus, the strictest condition level, 5, includes the four
less restrictive condition levels (1-4), level 4 includes levels 1-3, and so forth. The following table describes the
failure-condition that corresponds to each level.

LEVEL    FAILURE CONDITION

1        Specifies that an automatic failover should be initiated when any of the following occurs:
         The SQL Server service is down.
         The lease of the availability group for connecting to the WSFC cluster expires because no ACK is received from the server instance. For more information, see How It Works: SQL Server Always On Lease Timeout.

2        Specifies that an automatic failover should be initiated when any of the following occurs:
         The instance of SQL Server does not connect to the cluster, and the user-specified HEALTH_CHECK_TIMEOUT threshold of the availability group is exceeded.
         The availability replica is in failed state.

3        Specifies that an automatic failover should be initiated on critical SQL Server internal errors, such as orphaned spinlocks, serious write-access violations, or too much dumping. This is the default behavior.

4        Specifies that an automatic failover should be initiated on moderate SQL Server internal errors, such as a persistent out-of-memory condition in the SQL Server internal resource pool.

5        Specifies that an automatic failover should be initiated on any qualified failure conditions, including:
         Exhaustion of SQL Engine worker-threads.
         Detection of an unsolvable deadlock.

NOTE
Lack of response by an instance of SQL Server to client requests is not relevant to availability groups.

The FAILURE_CONDITION_LEVEL and HEALTH_CHECK_TIMEOUT values define a flexible failover policy for a
given group. This flexible failover policy provides you with granular control over what conditions must cause an
automatic failover. For more information, see Flexible Failover Policy for Automatic Failover of an Availability
Group (SQL Server).
HEALTH_CHECK_TIMEOUT = milliseconds
Specifies the wait time (in milliseconds) for the sp_server_diagnostics system stored procedure to return server-
health information before the WSFC cluster assumes that the server instance is slow or hung.
HEALTH_CHECK_TIMEOUT is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode with automatic failover (AVAILABILITY_MODE =
SYNCHRONOUS_COMMIT). Furthermore, a health-check timeout can trigger an automatic failover only if both
the primary and secondary replicas are configured for automatic failover mode (FAILOVER_MODE =
AUTOMATIC) and the secondary replica is currently synchronized with the primary replica.
The default HEALTH_CHECK_TIMEOUT value is 30000 milliseconds (30 seconds). The minimum value is 15000
milliseconds (15 seconds), and the maximum value is 4294967295 milliseconds.
Supported only on the primary replica.

IMPORTANT
sp_server_diagnostics does not perform health checks at the database level.
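
As a sketch (the availability group name AccountsAG is hypothetical), the failover-policy options are set one at a time:

ALTER AVAILABILITY GROUP AccountsAG SET (FAILURE_CONDITION_LEVEL = 3);   -- the default level
ALTER AVAILABILITY GROUP AccountsAG SET (HEALTH_CHECK_TIMEOUT = 60000);  -- 60 seconds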

DB_FAILOVER = { ON | OFF }
Specifies the response to take when a database on the primary replica is offline. When set to ON, any status other
than ONLINE for a database in the availability group triggers an automatic failover. When this option is set to
OFF, only the health of the instance is used to trigger automatic failover.
For more information regarding this setting, see Database Level Health Detection Option
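For example, to enable database-level health detection for a hypothetical group AccountsAG:

ALTER AVAILABILITY GROUP AccountsAG SET (DB_FAILOVER = ON);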
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT
Introduced in SQL Server 2017. Used to set a minimum number of synchronous secondary replicas required to
commit before the primary commits a transaction. Guarantees that SQL Server transactions will wait until the
transaction logs are updated on the minimum number of secondary replicas. The default is 0, which gives the
same behavior as SQL Server 2016. The minimum value is 0. The maximum value is the number of replicas
minus 1.
This option relates to replicas in synchronous commit mode. When replicas are in synchronous commit
mode, writes on the primary replica wait until writes on the secondary synchronous replicas are committed to the
replica database transaction log. If a SQL Server that hosts a secondary synchronous replica stops responding,
the SQL Server that hosts the primary replica will mark that secondary replica as NOT SYNCHRONIZED and
proceed. When the unresponsive database comes back online, it will be in a "not synced" state and the replica will
be marked as unhealthy until the primary can make it synchronous again. This setting guarantees that the
primary replica will not proceed until the minimum number of replicas have committed each transaction. If the
minimum number of replicas is not available, commits on the primary will fail. For cluster type EXTERNAL, the
setting is changed when the availability group is added to a cluster resource. See High availability and data
protection for availability group configurations.
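
For example, to require at least one synchronous secondary to commit each transaction (group name hypothetical):

ALTER AVAILABILITY GROUP AccountsAG SET (REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 1);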
ADD DATABASE database_name
Specifies a list of one or more user databases that you want to add to the availability group. These databases must
reside on the instance of SQL Server that hosts the current primary replica. You can specify multiple databases for
an availability group, but each database can belong to only one availability group. For information about the type
of databases that an availability group can support, see Prerequisites, Restrictions, and Recommendations for
Always On Availability Groups (SQL Server). To find out which local databases already belong to an availability
group, see the replica_id column in the sys.databases catalog view.
Supported only on the primary replica.
NOTE
After you have created the availability group, you will need to connect to each server instance that hosts a secondary replica
and then prepare each secondary database and join it to the availability group. For more information, see Start Data
Movement on an Always On Secondary Database (SQL Server).
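
For example, executed on the primary replica (names hypothetical):

ALTER AVAILABILITY GROUP AccountsAG ADD DATABASE AccountsDb1;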

REMOVE DATABASE database_name


Removes the specified primary database and the corresponding secondary databases from the availability group.
Supported only on the primary replica.
For information about the recommended follow up after removing an availability database from an availability
group, see Remove a Primary Database from an Availability Group (SQL Server).
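For example (names hypothetical):

ALTER AVAILABILITY GROUP AccountsAG REMOVE DATABASE AccountsDb1;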
ADD REPLICA ON
Specifies from one to eight SQL Server instances to host secondary replicas in an availability group. Each replica
is specified by its server instance address followed by a WITH (…) clause.
Supported only on the primary replica.
You need to join every new secondary replica to the availability group. For more information, see the description
of the JOIN option, later in this section.
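
As a sketch (server and endpoint names are hypothetical), a secondary replica is added with its endpoint, availability mode, and failover mode:

ALTER AVAILABILITY GROUP AccountsAG
ADD REPLICA ON N'SQLNODE3' WITH (
    ENDPOINT_URL = N'TCP://sqlnode3.contoso.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL
    );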
<server_instance>
Specifies the address of the instance of SQL Server that is the host for a replica. The address format depends on
whether the instance is the default instance or a named instance and whether it is a standalone instance or a
failover cluster instance (FCI). The syntax is as follows:
{ 'system_name[\instance_name]' | 'FCI_network_name[\instance_name]' }
The components of this address are as follows:
system_name
Is the NetBIOS name of the computer system on which the target instance of SQL Server resides. This computer
must be a WSFC node.
FCI_network_name
Is the network name that is used to access a SQL Server failover cluster. Use this if the server instance
participates as a SQL Server failover partner. Executing SELECT @@SERVERNAME on an FCI server instance
returns its entire 'FCI_network_name[\instance_name]' string (which is the full replica name).
instance_name
Is the name of an instance of a SQL Server that is hosted by system_name or FCI_network_name and that has
Always On enabled. For a default server instance, instance_name is optional. The instance name is case
insensitive. On a stand-alone server instance, this value is the same as the value returned by executing
SELECT @@SERVERNAME.
\
Is a separator used only when specifying instance_name, in order to separate it from system_name or
FCI_network_name.
For information about the prerequisites for WSFC nodes and server instances, see Prerequisites, Restrictions, and
Recommendations for Always On Availability Groups (SQL Server).
ENDPOINT_URL = 'TCP://system-address:port'
Specifies the URL path for the database mirroring endpoint on the instance of SQL Server that will host the
availability replica that you are adding or modifying.
ENDPOINT_URL is required in the ADD REPLICA ON clause and optional in the MODIFY REPLICA ON clause.
For more information, see Specify the Endpoint URL When Adding or Modifying an Availability Replica (SQL
Server).
'TCP://system-address:port'
Specifies a URL for specifying an endpoint URL or read-only routing URL. The URL parameters are as follows:
system-address
Is a string, such as a system name, a fully qualified domain name, or an IP address, that unambiguously identifies
the destination computer system.
port
Is a port number that is associated with the mirroring endpoint of the server instance (for the ENDPOINT_URL
option) or the port number used by the Database Engine of the server instance (for the
READ_ONLY_ROUTING_URL option).
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT |
CONFIGURATION_ONLY }
Specifies whether the primary replica has to wait for the secondary replica to acknowledge the hardening
(writing) of the log records to disk before the primary replica can commit the transaction on a given primary
database. The transactions on different databases on the same primary replica can commit independently.
SYNCHRONOUS_COMMIT
Specifies that the primary replica will wait to commit transactions until they have been hardened on this
secondary replica (synchronous-commit mode). You can specify SYNCHRONOUS_COMMIT for up to three
replicas, including the primary replica.
ASYNCHRONOUS_COMMIT
Specifies that the primary replica commits transactions without waiting for this secondary replica to harden the
log (asynchronous-commit availability mode). You can specify ASYNCHRONOUS_COMMIT for up to five
availability replicas, including the primary replica.
CONFIGURATION_ONLY
Specifies that the primary replica synchronously commits availability group configuration metadata to the
master database on this replica. The replica will not contain user data. This option:
Can be hosted on any edition of SQL Server, including Express Edition.
Requires the database mirroring endpoint of the CONFIGURATION_ONLY replica to be type WITNESS.
Cannot be altered.
Is not valid when CLUSTER_TYPE = WSFC.
For more information, see Configuration only replica.
AVAILABILITY_MODE is required in the ADD REPLICA ON clause and optional in the MODIFY REPLICA
ON clause. For more information, see Availability Modes (Always On Availability Groups).
FAILOVER_MODE = { AUTOMATIC | MANUAL }
Specifies the failover mode of the availability replica that you are defining.
AUTOMATIC
Enables automatic failover. AUTOMATIC is supported only if you also specify AVAILABILITY_MODE =
SYNCHRONOUS_COMMIT. You can specify AUTOMATIC for two availability replicas, including the
primary replica.

NOTE
SQL Server Failover Cluster Instances (FCIs) do not support automatic failover by availability groups, so any availability
replica that is hosted by an FCI can only be configured for manual failover.
MANUAL
Enables manual failover or forced manual failover (forced failover) by the database administrator.
FAILOVER_MODE is required in the ADD REPLICA ON clause and optional in the MODIFY REPLICA ON
clause. Two types of manual failover exist, manual failover without data loss and forced failover (with possible
data loss), which are supported under different conditions. For more information, see Failover and Failover Modes
(Always On Availability Groups).
SEEDING_MODE = { AUTOMATIC | MANUAL }
Specifies how the secondary replica will be initially seeded.
AUTOMATIC
Enables direct seeding. This method will seed the secondary replica over the network. This method does not
require you to back up and restore a copy of the primary database on the replica.

NOTE
For direct seeding, you must allow database creation on each secondary replica by calling ALTER AVAILABILITY GROUP
with the GRANT CREATE ANY DATABASE option.

MANUAL
Specifies manual seeding (default). This method requires you to create a backup of the database on the primary
replica and manually restore that backup on the secondary replica.
BACKUP_PRIORITY =n
Specifies your priority for performing backups on this replica relative to the other replicas in the same availability
group. The value is an integer in the range of 0..100. These values have the following meanings:
1..100 indicates that the availability replica could be chosen for performing backups. 1 indicates the lowest
priority, and 100 indicates the highest priority. If BACKUP_PRIORITY = 1, the availability replica would be
chosen for performing backups only if no higher priority availability replicas are currently available.
0 indicates that this availability replica will never be chosen for performing backups. This is useful, for
example, for a remote availability replica to which you never want backups to fail over.
For more information, see Active Secondaries: Backup on Secondary Replicas (Always On Availability
Groups).
SECONDARY_ROLE ( … )
Specifies role-specific settings that will take effect if this availability replica currently owns the secondary
role (that is, whenever it is a secondary replica). Within the parentheses, specify either or both secondary-
role options. If you specify both, use a comma-separated list.
The secondary role options are as follows:
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
Specifies whether the databases of a given availability replica that is performing the secondary role (that is,
is acting as a secondary replica) can accept connections from clients, one of:
NO
No user connections are allowed to secondary databases of this replica. They are not available for read
access. This is the default behavior.
READ_ONLY
Only connections where the Application Intent connection property is set to ReadOnly are allowed to the
databases in the secondary replica. For more information about this property, see Using Connection String
Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the secondary replica for read-only access.
For more information, see Active Secondaries: Readable Secondary Replicas (Always On Availability
Groups).
READ_ONLY_ROUTING_URL = 'TCP://system-address:port'
Specifies the URL to be used for routing read-intent connection requests to this availability replica. This is
the URL on which the SQL Server Database Engine listens. Typically, the default instance of the SQL
Server Database Engine listens on TCP port 1433.
For a named instance, you can obtain the port number by querying the port and type_desc columns of
the sys.dm_tcp_listener_states dynamic management view. The server instance uses the Transact-SQL
listener (type_desc='TSQL').
For more information about calculating the read-only routing URL for an availability replica, see
Calculating read_only_routing_url for Always On.

NOTE
For a named instance of SQL Server, the Transact-SQL listener should be configured to use a specific port. For more
information, see Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager).

PRIMARY_ROLE ( … )
Specifies role-specific settings that will take effect if this availability replica currently owns the primary role (that
is, whenever it is the primary replica). Within the parentheses, specify either or both primary-role options. If you
specify both, use a comma-separated list.
The primary role options are as follows:
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
Specifies the type of connection that the databases of a given availability replica that is performing the primary
role (that is, is acting as a primary replica) can accept from clients, one of:
READ_WRITE
Connections where the Application Intent connection property is set to ReadOnly are disallowed. When the
Application Intent property is set to ReadWrite or the Application Intent connection property is not set, the
connection is allowed. For more information about Application Intent connection property, see Using Connection
String Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the primary replica. This is the default behavior.
READ_ONLY_ROUTING_LIST = { ('<server_instance>' [ ,...n ] ) | NONE }
Specifies a comma-separated list of server instances that host availability replicas for this availability group that
meet the following requirements when running under the secondary role:
Be configured to allow all connections or read-only connections (see the ALLOW_CONNECTIONS
argument of the SECONDARY_ROLE option, above).
Have their read-only routing URL defined (see the READ_ONLY_ROUTING_URL argument of the
SECONDARY_ROLE option, above).
The READ_ONLY_ROUTING_LIST values are as follows:
<server_instance>
Specifies the address of the instance of SQL Server that is the host for an availability replica that is a
readable secondary replica when running under the secondary role.
Use a comma-separated list to specify all of the server instances that might host a readable secondary
replica. Read-only routing will follow the order in which server instances are specified in the list. If you
include a replica's host server instance on the replica's read-only routing list, placing this server instance at
the end of the list is typically a good practice, so that read-intent connections go to a secondary replica, if
one is available.
Beginning with SQL Server 2016 (13.x), you can load-balance read-intent requests across readable
secondary replicas. You specify this by placing the replicas in a nested set of parentheses within the read-
only routing list. For more information and examples, see Configure load-balancing across read-only
replicas.
NONE
Specifies that when this availability replica is the primary replica, read-only routing will not be supported.
This is the default behavior. When used with MODIFY REPLICA ON, this value disables an existing list, if
any.
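As an illustrative sketch (the server instance names COMPUTER01 and COMPUTER02 and the domain
contoso.com are hypothetical), the following statements configure a read-only routing URL for a replica and then
define the routing list that is used when that replica holds the primary role:

ALTER AVAILABILITY GROUP AccountsAG
MODIFY REPLICA ON N'COMPUTER01' WITH
(SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://COMPUTER01.contoso.com:1433'));
GO
ALTER AVAILABILITY GROUP AccountsAG
MODIFY REPLICA ON N'COMPUTER01' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('COMPUTER02', 'COMPUTER01')));
GO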
SESSION_TIMEOUT =seconds
Specifies the session-timeout period in seconds. If you do not specify this option, by default, the time
period is 10 seconds. The minimum value is 5 seconds.

IMPORTANT
We recommend that you keep the time-out period at 10 seconds or greater.

For more information about the session-timeout period, see Overview of Always On Availability Groups (SQL
Server).
MODIFY REPLICA ON
Modifies any of the replicas of the availability group. The list of replicas to be modified contains the server
instance address and a WITH (…) clause for each replica.
Supported only on the primary replica.
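For example, the following statement (the instance address N'COMPUTER03' is hypothetical) changes the
session-timeout period of one replica:

ALTER AVAILABILITY GROUP AccountsAG
MODIFY REPLICA ON N'COMPUTER03' WITH (SESSION_TIMEOUT = 15);
GO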
REMOVE REPLICA ON
Removes the specified secondary replica from the availability group. The current primary replica cannot be
removed from an availability group. On being removed, the replica stops receiving data. Its secondary databases
are removed from the availability group and enter the RESTORING state.
Supported only on the primary replica.
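For example (the instance address N'COMPUTER02' is hypothetical):

ALTER AVAILABILITY GROUP AccountsAG REMOVE REPLICA ON N'COMPUTER02';
GO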

NOTE
If you remove a replica while it is unavailable or failed, when it comes back online it will discover that it no longer
belongs to the availability group.

JOIN
Causes the local server instance to host a secondary replica in the specified availability group.
Supported only on a secondary replica that has not yet been joined to the availability group.
For more information, see Join a Secondary Replica to an Availability Group (SQL Server).
FAILOVER
Initiates a manual failover of the availability group without data loss to the secondary replica to which you are
connected. The replica that will host the primary replica is the failover target. The failover target will take over the
primary role and recover its copy of each database and bring them online as the new primary databases. The
former primary replica concurrently transitions to the secondary role, and its databases become secondary
databases and are immediately suspended. Potentially, these roles can be switched back and forth by a series of
failovers.
Supported only on a synchronous-commit secondary replica that is currently synchronized with the primary
replica. Note that for a secondary replica to be synchronized the primary replica must also be running in
synchronous-commit mode.

NOTE
A failover command returns as soon as the failover target has accepted the command. However, database recovery occurs
asynchronously after the availability group has finished failing over.

For information about the limitations, prerequisites and recommendations for a performing a planned manual
failover, see Perform a Planned Manual Failover of an Availability Group (SQL Server).
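For example, the following statement, run on a synchronized secondary replica, performs a planned manual
failover of the AccountsAG availability group:

ALTER AVAILABILITY GROUP AccountsAG FAILOVER;
GO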
FORCE_FAILOVER_ALLOW_DATA_LOSS
CAUTION

Forcing failover, which might involve some data loss, is strictly a disaster recovery method. Therefore, we strongly
recommend that you force failover only if the primary replica is no longer running, you are willing to risk losing
data, and you must restore service to the availability group immediately.
Supported only on a replica whose role is in the SECONDARY or RESOLVING state. The replica on which you
enter a failover command is known as the failover target.
Forces failover of the availability group, with possible data loss, to the failover target. The failover target will take
over the primary role and recover its copy of each database and bring them online as the new primary databases.
On any remaining secondary replicas, every secondary database is suspended until manually resumed. When the
former primary replica becomes available, it will switch to the secondary role, and its databases will become
suspended secondary databases.

NOTE
A failover command returns as soon as the failover target has accepted the command. However, database recovery occurs
asynchronously after the availability group has finished failing over.

For information about the limitations, prerequisites and recommendations for forcing failover and the effect of a
forced failover on the former primary databases in the availability group, see Perform a Forced Manual Failover of
an Availability Group (SQL Server).
ADD LISTENER 'dns_name' ( <add_listener_option> )
Defines a new availability group listener for this availability group. Supported only on the primary replica.

IMPORTANT
Before you create your first listener, we strongly recommend that you read Create or Configure an Availability Group
Listener (SQL Server).
After you create a listener for a given availability group, we strongly recommend that you do the following:
Ask your network administrator to reserve the listener's IP address for its exclusive use.
Give the listener's DNS host name to application developers to use in connection strings when requesting client
connections to this availability group.
dns_name
Specifies the DNS host name of the availability group listener. The DNS name of the listener must be unique in
the domain and in NetBIOS.
dns_name is a string value. This name can contain only alphanumeric characters, hyphens (-), and underscores
(_), in any order. DNS host names are case insensitive. The maximum length is 63 characters.
We recommend that you specify a meaningful string. For example, for an availability group named AG1, a
meaningful DNS host name would be ag1-listener.

IMPORTANT
NetBIOS recognizes only the first 15 characters in the dns_name. If you have two WSFC clusters that are controlled by the
same Active Directory and you try to create availability group listeners in both clusters using names with more than 15
characters and an identical 15-character prefix, you will get an error reporting that the Virtual Network Name resource could
not be brought online. For information about prefix naming rules for DNS names, see Assigning Domain Names.

JOIN AVAILABILITY GROUP ON


Joins to a distributed availability group. When you create a distributed availability group, the availability group on
the cluster where it is created is the primary availability group. When you execute JOIN, the local server instance's
availability group is the secondary availability group.
<ag_name>
Specifies the name of the availability group that makes up one half of the distributed availability group.
LISTENER_URL = 'TCP://system-address:port'
Specifies the URL path for the listener associated with the availability group.
The LISTENER_URL clause is required.
'TCP://system-address:port'
Specifies a URL for the listener associated with the availability group. The URL parameters are as follows:
system-address
Is a string, such as a system name, a fully qualified domain name, or an IP address, that unambiguously identifies
the listener.
port
Is a port number that is associated with the mirroring endpoint of the availability group. Note that this is not the
port for client connectivity that is configured on the listener.
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT }
Specifies whether the primary replica has to wait for the secondary availability group to acknowledge the
hardening (writing) of the log records to disk before the primary replica can commit the transaction on a given
primary database.
SYNCHRONOUS_COMMIT
Specifies that the primary replica will wait to commit transactions until they have been hardened on the
secondary availability group. You can specify SYNCHRONOUS_COMMIT for up to two availability groups,
including the primary availability group.
ASYNCHRONOUS_COMMIT
Specifies that the primary replica commits transactions without waiting for this secondary availability group to
harden the log. You can specify ASYNCHRONOUS_COMMIT for up to two availability groups, including the
primary availability group.
The AVAILABILITY_MODE clause is required.
FAILOVER_MODE = { MANUAL }
Specifies the failover mode of the distributed availability group.
MANUAL
Enables planned manual failover or forced manual failover (typically called forced failover) by the database
administrator.
Automatic failover to the secondary availability group is not supported.
SEEDING_MODE= { AUTOMATIC | MANUAL }
Specifies how the secondary availability group will be initially seeded.
AUTOMATIC
Enables automatic seeding. This method will seed the secondary availability group over the network. This method
does not require you to back up and restore a copy of the primary database on the replicas of the secondary
availability group.
MANUAL
Specifies manual seeding. This method requires you to create a backup of the database on the primary replica and
manually restore that backup on the replica(s) of the secondary availability group.
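The following sketch shows the general pattern for joining a distributed availability group; the names
distributedag, ag1, ag2, and the listener URLs are hypothetical placeholders:

ALTER AVAILABILITY GROUP [distributedag]
JOIN
AVAILABILITY GROUP ON
'ag1' WITH
(
LISTENER_URL = 'TCP://ag1-listener.contoso.com:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
),
'ag2' WITH
(
LISTENER_URL = 'TCP://ag2-listener.contoso.com:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
);
GO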
MODIFY AVAILABILITY GROUP ON
Modifies any of the availability group settings of a distributed availability group. The list of availability groups to
be modified contains the availability group name and a WITH (…) clause for each availability group.

IMPORTANT
This command must be repeated on both the primary availability group and secondary availability group instances.

GRANT CREATE ANY DATABASE


Permits the availability group to create databases on behalf of the primary replica, which supports direct seeding
(SEEDING_MODE = AUTOMATIC ). This parameter should be run on every secondary replica that supports
direct seeding after that secondary joins the availability group. Requires the CREATE ANY DATABASE
permission.
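For example, the following statement, run on a secondary replica that has joined the AccountsAG availability
group, permits the availability group to create databases for direct seeding:

ALTER AVAILABILITY GROUP AccountsAG GRANT CREATE ANY DATABASE;
GO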
DENY CREATE ANY DATABASE
Removes the ability of the availability group to create databases on behalf of the primary replica.
<add_listener_option>
ADD LISTENER takes one of the following options:
WITH DHCP [ ON { ('four_part_ipv4_address','four_part_ipv4_mask') } ]
Specifies that the availability group listener will use the Dynamic Host Configuration Protocol (DHCP). Optionally,
use the ON clause to identify the network on which this listener will be created. DHCP is limited to a single subnet
that is used for every server instance that hosts an availability replica in the availability group.

IMPORTANT
We do not recommend DHCP in a production environment. If there is downtime and the DHCP IP lease expires, extra time
is required to register the new DHCP network IP address that is associated with the listener DNS name, which impacts
client connectivity. However, DHCP is good for setting up your development and testing environments to verify basic
functions of availability groups and for integration with your applications.

For example:
WITH DHCP ON ('10.120.19.0','255.255.254.0')
WITH IP ( { ('four_part_ipv4_address','four_part_ipv4_mask') | ('ipv6_address') } [ , ...n ] ) [ , PORT = listener_port ]
Specifies that, instead of using DHCP, the availability group listener will use one or more static IP addresses. To
create an availability group across multiple subnets, each subnet requires one static IP address in the listener
configuration. For a given subnet, the static IP address can be either an IPv4 address or an IPv6 address. Contact
your network administrator to get a static IP address for each subnet that will host an availability replica for the
new availability group.
For example:
WITH IP ( ('10.120.19.155','255.255.254.0') )

four_part_ipv4_address
Specifies an IPv4 four-part address for an availability group listener. For example, 10.120.19.155 .
four_part_ipv4_mask
Specifies an IPv4 four-part mask for an availability group listener. For example, 255.255.254.0 .
ipv6_address
Specifies an IPv6 address for an availability group listener. For example, 2001::4898:23:1002:20f:1fff:feff:b3a3 .
PORT = listener_port
Specifies the port number—listener_port—to be used by an availability group listener that is specified by a WITH
IP clause. PORT is optional.
The default port number, 1433, is supported. However, if you have security concerns, we recommend using a
different port number.
For example: WITH IP ( ('2001::4898:23:1002:20f:1fff:feff:b3a3') ) , PORT = 7777
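Putting these options together, the following sketch (the listener name 'ag1-listener' and the address values are
illustrative) adds a static-IP listener on the default port:

ALTER AVAILABILITY GROUP AccountsAG
ADD LISTENER 'ag1-listener' ( WITH IP ( ('10.120.19.155','255.255.254.0') ) , PORT = 1433 );
GO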

MODIFY LISTENER 'dns_name' ( <modify_listener_option> )


Modifies an existing availability group listener for this availability group. Supported only on the primary replica.
<modify_listener_option>
MODIFY LISTENER takes one of the following options:
ADD IP { ('four_part_ipv4_address','four_part_ipv4_mask') | ('ipv6_address') }
Adds the specified IP address to the availability group listener specified by dns_name.
PORT = listener_port
See the description of this argument earlier in this section.
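For example, the following sketch (the listener name and address are illustrative) adds an IP address to an
existing listener:

ALTER AVAILABILITY GROUP AccountsAG
MODIFY LISTENER 'ag1-listener' ( ADD IP ('10.120.29.155','255.255.254.0') );
GO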
RESTART LISTENER 'dns_name'
Restarts the listener that is associated with the specified DNS name. Supported only on the primary replica.
REMOVE LISTENER 'dns_name'
Removes the listener that is associated with the specified DNS name. Supported only on the primary replica.
OFFLINE
Takes an online availability group offline. There is no data loss for synchronous-commit databases.
After an availability group goes offline, its databases become unavailable to clients, and you cannot bring the
availability group back online. Therefore, use the OFFLINE option only during a cross-cluster migration of Always
On availability groups, when migrating availability group resources to a new WSFC cluster.
For more information, see Take an Availability Group Offline (SQL Server).
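For example:

ALTER AVAILABILITY GROUP AccountsAG OFFLINE;
GO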

Prerequisites and Restrictions


For information about prerequisites and restrictions on availability replicas and on their host server instances and
computers, see Prerequisites, Restrictions, and Recommendations for Always On Availability Groups (SQL
Server).
For information about restrictions on the AVAILABILITY GROUP Transact-SQL statements, see Overview of
Transact-SQL Statements for Always On Availability Groups (SQL Server).

Security
Permissions
Requires ALTER AVAILABILITY GROUP permission on the availability group, CONTROL AVAILABILITY
GROUP permission, ALTER ANY AVAILABILITY GROUP permission, or CONTROL SERVER permission. Also
requires ALTER ANY DATABASE permission.

Examples
A. Joining a secondary replica to an availability group
The following example joins a secondary replica to which you are connected to the AccountsAG availability group.

ALTER AVAILABILITY GROUP AccountsAG JOIN;


GO

B. Forcing failover of an availability group


The following example forces the AccountsAG availability group to fail over to the secondary replica to which you
are connected.

ALTER AVAILABILITY GROUP AccountsAG FORCE_FAILOVER_ALLOW_DATA_LOSS;


GO

See Also
CREATE AVAILABILITY GROUP (Transact-SQL)
ALTER DATABASE SET HADR (Transact-SQL)
DROP AVAILABILITY GROUP (Transact-SQL)
sys.availability_replicas (Transact-SQL)
sys.availability_groups (Transact-SQL)
Troubleshoot Always On Availability Groups Configuration (SQL Server)
Overview of Always On Availability Groups (SQL Server)
Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server)
ALTER BROKER PRIORITY (Transact-SQL)
5/4/2018 • 3 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a Service Broker conversation priority.
Transact-SQL Syntax Conventions

Syntax
ALTER BROKER PRIORITY ConversationPriorityName
FOR CONVERSATION
{ SET ( [ CONTRACT_NAME = {ContractName | ANY } ]
[ [ , ] LOCAL_SERVICE_NAME = {LocalServiceName | ANY } ]
[ [ , ] REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY } ]
[ [ , ] PRIORITY_LEVEL = { PriorityValue | DEFAULT } ]
)
}
[;]

Arguments
ConversationPriorityName
Specifies the name of the conversation priority to be changed. The name must refer to a conversation priority in
the current database.
SET
Specifies the criteria for determining if the conversation priority applies to a conversation. SET is required and
must contain at least one criterion: CONTRACT_NAME, LOCAL_SERVICE_NAME, REMOTE_SERVICE_NAME,
or PRIORITY_LEVEL.
CONTRACT_NAME = {ContractName | ANY }
Specifies the name of a contract to be used as a criterion for determining if the conversation priority applies to a
conversation. ContractName is a Database Engine identifier, and must specify the name of a contract in the current
database.
ContractName
Specifies that the conversation priority can be applied only to conversations where the BEGIN DIALOG statement
that started the conversation specified ON CONTRACT ContractName.
ANY
Specifies that the conversation priority can be applied to any conversation, regardless of which contract it uses.
If CONTRACT_NAME is not specified, the contract property of the conversation priority is not changed.
LOCAL_SERVICE_NAME = {LocalServiceName | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to a
conversation endpoint.
LocalServiceName is a Database Engine identifier and must specify the name of a service in the current database.
LocalServiceName
Specifies that the conversation priority can be applied to the following:
Any initiator conversation endpoint whose initiator service name matches LocalServiceName.
Any target conversation endpoint whose target service name matches LocalServiceName.
ANY
Specifies that the conversation priority can be applied to any conversation endpoint, regardless of the
name of the local service used by the endpoint.
If LOCAL_SERVICE_NAME is not specified, the local service property of the conversation priority is not
changed.
REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to a
conversation endpoint.
RemoteServiceName is a literal of type nvarchar(256). Service Broker uses a byte-by-byte comparison to
match the RemoteServiceName string. The comparison is case-sensitive and does not consider the current
collation. The target service can be in the current instance of the Database Engine, or a remote instance of
the Database Engine.
'RemoteServiceName'
Specifies that the conversation priority be assigned to the following:
Any initiator conversation endpoint whose associated target service name matches RemoteServiceName.
Any target conversation endpoint whose associated initiator service name matches RemoteServiceName.
ANY
Specifies that the conversation priority applies to any conversation endpoint, regardless of the name of the
remote service associated with the endpoint.
If REMOTE_SERVICE_NAME is not specified, the remote service property of the conversation priority is
not changed.
PRIORITY_LEVEL = { PriorityValue | DEFAULT }
Specifies the priority level to assign to any conversation endpoint that uses the contracts and services that are
specified in the conversation priority. PriorityValue must be an integer literal from 1 (lowest priority) to 10
(highest priority).
If PRIORITY_LEVEL is not specified, the priority level property of the conversation priority is not changed.

Remarks
No properties that are changed by ALTER BROKER PRIORITY are applied to existing conversations. The existing
conversations continue with the priority that was assigned when they were started.
For more information, see CREATE BROKER PRIORITY (Transact-SQL).

Permissions
Permission for creating a conversation priority defaults to members of the db_ddladmin or db_owner fixed
database roles, and to the sysadmin fixed server role. Requires ALTER permission on the database.

Examples
A. Changing only the priority level of an existing conversation priority.
Changes the priority level, but does not change the contract, local service, or remote service properties.

ALTER BROKER PRIORITY SimpleContractDefaultPriority


FOR CONVERSATION
SET (PRIORITY_LEVEL = 3);

B. Changing all of the properties of an existing conversation priority.


Changes the priority level, contract, local service, and remote service properties.

ALTER BROKER PRIORITY SimpleContractPriority


FOR CONVERSATION
SET (CONTRACT_NAME = SimpleContractB,
LOCAL_SERVICE_NAME = TargetServiceB,
REMOTE_SERVICE_NAME = N'InitiatorServiceB',
PRIORITY_LEVEL = 8);

See Also
CREATE BROKER PRIORITY (Transact-SQL)
DROP BROKER PRIORITY (Transact-SQL)
sys.conversation_priorities (Transact-SQL)
ALTER CERTIFICATE (Transact-SQL)
5/3/2018 • 3 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the private key used to encrypt a certificate, or adds one if none is present. Changes the availability of a
certificate to Service Broker.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

ALTER CERTIFICATE certificate_name


REMOVE PRIVATE KEY
| WITH PRIVATE KEY ( <private_key_spec> [ ,... ] )
| WITH ACTIVE FOR BEGIN_DIALOG = [ ON | OFF ]

<private_key_spec> ::=
FILE = 'path_to_private_key'
| DECRYPTION BY PASSWORD = 'key_password'
| ENCRYPTION BY PASSWORD = 'password'

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

ALTER CERTIFICATE certificate_name


{
REMOVE PRIVATE KEY
| WITH PRIVATE KEY (
FILE = '<path_to_private_key>',
DECRYPTION BY PASSWORD = '<key password>' )
}

Arguments
certificate_name
Is the unique name by which the certificate is known in the database.
FILE ='path_to_private_key'
Specifies the complete path, including file name, to the private key. This parameter can be a local path or a UNC
path to a network location. This file will be accessed within the security context of the SQL Server service account.
When you use this option, you must make sure that the service account has access to the specified file.
DECRYPTION BY PASSWORD ='key_password'
Specifies the password that is required to decrypt the private key.
ENCRYPTION BY PASSWORD ='password'
Specifies the password used to encrypt the private key of the certificate in the database. password must meet the
Windows password policy requirements of the computer that is running the instance of SQL Server. For more
information, see Password Policy.
REMOVE PRIVATE KEY
Specifies that the private key should no longer be maintained inside the database.
ACTIVE FOR BEGIN_DIALOG = { ON | OFF }
Makes the certificate available to the initiator of a Service Broker dialog conversation.

Remarks
The private key must correspond to the public key specified by certificate_name.
The DECRYPTION BY PASSWORD clause can be omitted if the password in the file is protected with a null
password.
When the private key of a certificate that already exists in the database is imported from a file, the private key will
be automatically protected by the database master key. To protect the private key with a password, use the
ENCRYPTION BY PASSWORD phrase.
The REMOVE PRIVATE KEY option will delete the private key of the certificate from the database. You can do this
when the certificate will be used to verify signatures or in Service Broker scenarios that do not require a private
key. Do not remove the private key of a certificate that protects a symmetric key.
You do not have to specify a decryption password when the private key is encrypted by using the database master
key.

IMPORTANT
Always make an archival copy of a private key before removing it from a database. For more information, see BACKUP
CERTIFICATE (Transact-SQL).

The WITH PRIVATE KEY option is not available in a contained database.

Permissions
Requires ALTER permission on the certificate.

Examples
A. Changing the password of a certificate

ALTER CERTIFICATE Shipping04


WITH PRIVATE KEY (DECRYPTION BY PASSWORD = 'pGF$5DGvbd2439587y',
ENCRYPTION BY PASSWORD = '4-329578thlkajdshglXCSgf');
GO

B. Changing the password that is used to encrypt the private key

ALTER CERTIFICATE Shipping11


WITH PRIVATE KEY (ENCRYPTION BY PASSWORD = '34958tosdgfkh##38',
DECRYPTION BY PASSWORD = '95hkjdskghFDGGG4%');
GO

C. Importing a private key for a certificate that is already present in the database
ALTER CERTIFICATE Shipping13
WITH PRIVATE KEY (FILE = 'c:\importedkeys\Shipping13',
DECRYPTION BY PASSWORD = 'GDFLKl8^^GGG4000%');
GO

D. Changing the protection of the private key from a password to the database master key

ALTER CERTIFICATE Shipping15


WITH PRIVATE KEY (DECRYPTION BY PASSWORD = '95hk000eEnvjkjy#F%');
GO

See Also
CREATE CERTIFICATE (Transact-SQL)
DROP CERTIFICATE (Transact-SQL)
BACKUP CERTIFICATE (Transact-SQL)
Encryption Hierarchy
EVENTDATA (Transact-SQL)
ALTER COLUMN ENCRYPTION KEY (Transact-SQL)
5/3/2018 • 2 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a column encryption key in a database, adding or dropping an encrypted value. A CEK can have up to two
values, which allows for the rotation of the corresponding column master key. A CEK is used when encrypting
columns using the Always Encrypted (Database Engine) feature. Before adding a CEK value, you must define the
column master key that was used to encrypt the value by using SQL Server Management Studio or the CREATE
COLUMN MASTER KEY statement.
Transact-SQL Syntax Conventions

Syntax
ALTER COLUMN ENCRYPTION KEY key_name
[ ADD | DROP ] VALUE
(
COLUMN_MASTER_KEY = column_master_key_name
[, ALGORITHM = 'algorithm_name' , ENCRYPTED_VALUE = varbinary_literal ]
) [;]

Arguments
key_name
The column encryption key that you are changing.
column_master_key_name
Specifies the name of the column master key (CMK) used for encrypting the column encryption key (CEK).
algorithm_name
Name of the encryption algorithm used to encrypt the value. The algorithm for the system providers must be
RSA_OAEP. This argument is not valid when dropping a column encryption key value.
varbinary_literal
The CEK BLOB encrypted with the specified column master key. This argument is not valid when dropping a
column encryption key value.

WARNING
Never pass plaintext CEK values in this statement. Doing so will compromise the benefit of this feature.

Remarks
Typically, a column encryption key is created with just one encrypted value. When a column master key needs to
be rotated (the current column master key needs to be replaced with the new column master key), you can add a
new value of the column encryption key, encrypted with the new column master key. This will allow you to ensure
client applications can access data encrypted with the column encryption key, while the new column master key is
being made available to client applications. An Always Encrypted enabled driver in a client application that does
not have access to the new master key, will be able to use the column encryption key value encrypted with the old
column master key to access sensitive data. The encryption algorithms that Always Encrypted supports require
the plaintext value to be 256 bits. An encrypted value should be generated using a key store provider that
encapsulates the key store holding the column master key.
Use sys.columns (Transact-SQL), sys.column_encryption_keys (Transact-SQL), and
sys.column_encryption_key_values (Transact-SQL) to view information about column encryption keys.
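For example, the following query (a minimal sketch) lists each column encryption key together with the column
master key ID and algorithm used for each of its encrypted values:

SELECT cek.name AS cek_name, cekv.column_master_key_id, cekv.encryption_algorithm_name
FROM sys.column_encryption_keys AS cek
JOIN sys.column_encryption_key_values AS cekv
ON cek.column_encryption_key_id = cekv.column_encryption_key_id;
GO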

Permissions
Requires ALTER ANY COLUMN ENCRYPTION KEY permission on the database.

Examples
A. Adding a column encryption key value
The following example alters a column encryption key called MyCEK .

ALTER COLUMN ENCRYPTION KEY MyCEK


ADD VALUE
(
COLUMN_MASTER_KEY = MyCMK2,
ALGORITHM = 'RSA_OAEP',
ENCRYPTED_VALUE =
0x016E000001630075007200720065006E00740075007300650072002F006D0079002F0064006500650063006200660034006100340031
00300038003400620035003300320036006600320063006200620035003000360038006500390062006100300032003000360061003700
3800310066001DDA6134C3B73A90D349C8905782DD819B428162CF5B051639BA46EC69A7C8C8F81591A92C395711493B25DCBCCC57836E
5B9F17A0713E840721D098F3F8E023ABCDFE2F6D8CC4339FC8F88630ED9EBADA5CA8EEAFA84164C1095B12AE161EABC1DF778C07F07D41
3AF1ED900F578FC00894BEE705EAC60F4A5090BBE09885D2EFE1C915F7B4C581D9CE3FDAB78ACF4829F85752E9FC985DEB8773889EE4A1
945BD554724803A6F5DC0A2CD5EFE001ABED8D61E8449E4FAA9E4DD392DA8D292ECC6EB149E843E395CDE0F98D04940A28C4B05F747149
B34A0BAEC04FFF3E304C84AF1FF81225E615B5F94E334378A0A888EF88F4E79F66CB377E3C21964AACB5049C08435FE84EEEF39D20A665
C17E04898914A85B3DE23D56575EBC682D154F4F15C37723E04974DB370180A9A579BC84F6BC9B5E7C223E5CBEE721E57EE07EFDCC0A32
57BBEBF9ADFFB00DBF7EF682EC1C4C47451438F90B4CF8DA709940F72CFDC91C6EB4E37B4ED7E2385B1FF71B28A1D2669FBEB18EA89F9D
391D2FDDEA0ED362E6A591AC64EF4AE31CA8766C259ECB77D01A7F5C36B8418F91C1BEADDD4491C80F0016B66421B4B788C55127135DA2
FA625FB7FD195FB40D90A6C67328602ECAF3EC4F5894BFD84A99EB4753BE0D22E0D4DE6A0ADFEDC80EB1B556749B4A8AD00E73B329C958
27AB91C0256347E85E3C5FD6726D0E1FE82C925D3DF4A9
);
GO

B. Dropping a column encryption key value


The following example alters a column encryption key called MyCEK by dropping a value.

ALTER COLUMN ENCRYPTION KEY MyCEK


DROP VALUE
(
COLUMN_MASTER_KEY = MyCMK
);
GO

See Also
CREATE COLUMN ENCRYPTION KEY (Transact-SQL)
DROP COLUMN ENCRYPTION KEY (Transact-SQL)
CREATE COLUMN MASTER KEY (Transact-SQL)
Always Encrypted (Database Engine)
sys.column_encryption_keys (Transact-SQL)
sys.column_encryption_key_values (Transact-SQL)
sys.columns (Transact-SQL)
ALTER CREDENTIAL (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Changes the properties of a credential.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
ALTER CREDENTIAL credential_name WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]

Arguments
credential_name
Specifies the name of the credential that is being altered.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server.
SECRET ='secret'
Specifies the secret required for outgoing authentication. secret is optional.

Remarks
When a credential is changed, the values of both identity_name and secret are reset. If the optional SECRET
argument is not specified, the value of the stored secret will be set to NULL.
The secret is encrypted by using the service master key. If the service master key is regenerated, the secret is
reencrypted by using the new service master key.
Information about credentials is visible in the sys.credentials catalog view.
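For example, this query returns the name and identity of each credential:

SELECT name, credential_identity FROM sys.credentials;
GO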

Permissions
Requires ALTER ANY CREDENTIAL permission. If the credential is a system credential, requires CONTROL
SERVER permission.

Examples
A. Changing the password of a credential
The following example changes the secret stored in a credential called Saddles . The credential contains the
Windows login RettigB and its password. The new password is added to the credential using the SECRET clause.

ALTER CREDENTIAL Saddles WITH IDENTITY = 'RettigB',


SECRET = 'sdrlk8$40-dksli87nNN8';
GO

B. Removing the password from a credential


The following example removes the password from a credential named Frames . The credential contains Windows
login Aboulrus8 and a password. After the statement is executed, the credential will have a NULL password
because the SECRET option is not specified.

ALTER CREDENTIAL Frames WITH IDENTITY = 'Aboulrus8';


GO

See Also
Credentials (Database Engine)
CREATE CREDENTIAL (Transact-SQL)
DROP CREDENTIAL (Transact-SQL)
ALTER DATABASE SCOPED CREDENTIAL (Transact-SQL)
CREATE LOGIN (Transact-SQL)
sys.credentials (Transact-SQL)
ALTER CRYPTOGRAPHIC PROVIDER (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a cryptographic provider within SQL Server from an Extensible Key Management (EKM) provider.
Transact-SQL Syntax Conventions

Syntax
ALTER CRYPTOGRAPHIC PROVIDER provider_name
[ FROM FILE = path_of_DLL ]
ENABLE | DISABLE

Arguments
provider_name
Name of the Extensible Key Management provider.
Path_of_DLL
Path of the .dll file that implements the SQL Server Extensible Key Management interface.
ENABLE | DISABLE
Enables or disables a provider.

Remarks
If the provider changes the .dll file that is used to implement Extensible Key Management in SQL Server, you must
use the ALTER CRYPTOGRAPHIC PROVIDER statement.
When the .dll file path is updated by using the ALTER CRYPTOGRAPHIC PROVIDER statement, SQL Server
performs the following actions:
Disables the provider.
Verifies the DLL signature and ensures that the .dll file has the same GUID as the one recorded in the catalog.
Updates the DLL version in the catalog.
When an EKM provider is set to DISABLE, any attempt by new connections to use the provider in encryption
statements will fail.
To disable a provider, all sessions that use the provider must be terminated.
When an EKM provider dll does not implement all of the necessary methods, ALTER CRYPTOGRAPHIC
PROVIDER can return error 33085:
One or more methods cannot be found in cryptographic provider library '%.*ls'.

When the header file used to create the EKM provider dll is out of date, ALTER CRYPTOGRAPHIC PROVIDER
can return error 33032:
SQL Crypto API version '%02d.%02d' implemented by provider is not supported. Supported version is '%02d.%02d'.
Permissions
Requires CONTROL permission on the cryptographic provider.

Examples
The following example alters a cryptographic provider, called SecurityProvider in SQL Server, to a newer version
of a .dll file. This new version is named c:\SecurityProvider\SecurityProvider_v2.dll and is installed on the server.
The provider's certificate must be installed on the server.
1. Disable the provider to perform the upgrade. This will terminate all open cryptographic sessions.

ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider


DISABLE;
GO

2. Upgrade the provider .dll file. The GUID must be the same as in the previous version, but the version can be
different.

ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider


FROM FILE = 'c:\SecurityProvider\SecurityProvider_v2.dll';
GO

3. Enable the upgraded provider.

ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider


ENABLE;
GO

See Also
Extensible Key Management (EKM)
CREATE CRYPTOGRAPHIC PROVIDER (Transact-SQL)
DROP CRYPTOGRAPHIC PROVIDER (Transact-SQL)
CREATE SYMMETRIC KEY (Transact-SQL)
Extensible Key Management Using Azure Key Vault (SQL Server)
ALTER DATABASE (Transact-SQL)
5/3/2018 • 6 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies a database, or the files and filegroups associated with the database. Adds or removes files and
filegroups from a database, changes the attributes of a database or its files and filegroups, changes the database
collation, and sets database options. Database snapshots cannot be modified. To modify database options
associated with replication, use sp_replicationdboption.
Because of its length, the ALTER DATABASE syntax is separated into the following topics:
ALTER DATABASE
The current topic provides the syntax for changing the name and the collation of a database.
ALTER DATABASE File and Filegroup Options
Provides the syntax for adding and removing files and filegroups from a database, and for changing the
attributes of the files and filegroups.
ALTER DATABASE SET Options
Provides the syntax for changing the attributes of a database by using the SET options of ALTER DATABASE.
ALTER DATABASE Database Mirroring
Provides the syntax for the SET options of ALTER DATABASE that are related to database mirroring.
ALTER DATABASE SET HADR
Provides the syntax for the Always On availability groups options of ALTER DATABASE for configuring a
secondary database on a secondary replica of an Always On availability group.
ALTER DATABASE Compatibility Level
Provides the syntax for the SET options of ALTER DATABASE that are related to database compatibility levels.
Transact-SQL Syntax Conventions
For Azure SQL Database, see ALTER DATABASE (Azure SQL Database)
For Azure SQL Data Warehouse, see ALTER DATABASE (Azure SQL Data Warehouse).
For Parallel Data Warehouse, see ALTER DATABASE (Parallel Data Warehouse).

Syntax
-- SQL Server Syntax
ALTER DATABASE { database_name | CURRENT }
{
MODIFY NAME = new_database_name
| COLLATE collation_name
| <file_and_filegroup_options>
| <set_database_options>
}
[;]

<file_and_filegroup_options >::=
<add_or_modify_files>::=
<filespec>::=
<add_or_modify_filegroups>::=
<filegroup_updatability_option>::=

<set_database_options>::=
<optionspec>::=
<auto_option> ::=
<change_tracking_option> ::=
<cursor_option> ::=
<database_mirroring_option> ::=
<date_correlation_optimization_option> ::=
<db_encryption_option> ::=
<db_state_option> ::=
<db_update_option> ::=
<db_user_access_option> ::=
<delayed_durability_option> ::=
<external_access_option> ::=
<FILESTREAM_options> ::=
<HADR_options> ::=
<parameterization_option> ::=
<query_store_options> ::=
<recovery_option> ::=
<service_broker_option> ::=
<snapshot_option> ::=
<sql_option> ::=
<termination> ::=

Arguments
database_name
Is the name of the database to be modified.

NOTE
This option is not available in a Contained Database.

CURRENT
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Designates that the current database in use should be altered.
MODIFY NAME =new_database_name
Renames the database with the name specified as new_database_name.
COLLATE collation_name
Specifies the collation for the database. collation_name can be either a Windows collation name or a SQL
collation name. If not specified, the database is assigned the collation of the instance of SQL Server.
When creating databases with other than the default collation, the data in the database always respects the
specified collation. For SQL Server, when creating a contained database, the internal catalog information is
maintained using the SQL Server default collation, Latin1_General_100_CI_AS_WS_KS_SC.
For more information about the Windows and SQL collation names, see COLLATE (Transact-SQL).
<delayed_durability_option> ::=
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
For more information, see ALTER DATABASE SET Options (Transact-SQL) and Control Transaction Durability.
<file_and_filegroup_options>::=
For more information, see ALTER DATABASE File and Filegroup Options (Transact-SQL).

Remarks
To remove a database, use DROP DATABASE.
To decrease the size of a database, use DBCC SHRINKDATABASE.
The ALTER DATABASE statement must run in autocommit mode (the default transaction management mode)
and is not allowed in an explicit or implicit transaction.
The state of a database file (for example, online or offline), is maintained independently from the state of the
database. For more information, see File States. The state of the files within a filegroup determines the
availability of the whole filegroup. For a filegroup to be available, all files within the filegroup must be online. If a
filegroup is offline, any attempt to access the filegroup by an SQL statement will fail with an error. When you build
query plans for SELECT statements, the query optimizer avoids nonclustered indexes and indexed views that
reside in offline filegroups. This enables these statements to succeed. However, if the offline filegroup contains
the heap or clustered index of the target table, the SELECT statements fail. Additionally, any INSERT, UPDATE,
or DELETE statement that modifies a table with any index in an offline filegroup will fail.
When a database is in the RESTORING state, most ALTER DATABASE statements will fail. The exception is
setting database mirroring options. A database may be in the RESTORING state during an active restore
operation or when a restore operation of a database or log file fails because of a corrupted backup file.
The plan cache for the instance of SQL Server is cleared by setting one of the following options.

OFFLINE
ONLINE
MODIFY_NAME
COLLATE
READ_ONLY
READ_WRITE
MODIFY FILEGROUP DEFAULT
MODIFY FILEGROUP READ_WRITE
MODIFY FILEGROUP READ_ONLY
PAGE_VERIFY

Clearing the plan cache causes a recompilation of all subsequent execution plans and can cause a sudden,
temporary decrease in query performance. For each cleared cachestore in the plan cache, the SQL Server error
log contains the following informational message: " SQL Server has encountered %d occurrence(s) of
cachestore flush for the '%s' cachestore (part of plan cache) due to some database maintenance or reconfigure
operations". This message is logged every five minutes as long as the cache is flushed within that time interval.
The procedure cache is also flushed in the following scenarios:
A database has the AUTO_CLOSE database option set to ON. When no user connection references or
uses the database, the background task tries to close and shut down the database automatically.
You run several queries against a database that has default options. Then, the database is dropped.
A database snapshot for a source database is dropped.
You successfully rebuild the transaction log for a database.
You restore a database backup.
You detach a database.

Changing the Database Collation


Before you apply a different collation to a database, make sure that the following conditions are in place:
You are the only one currently using the database.
No schema-bound object depends on the collation of the database.
If the following objects, which depend on the database collation, exist in the database, the ALTER
DATABASE database_name COLLATE statement will fail. SQL Server will return an error message for
each object blocking the ALTER action:
User-defined functions and views created with SCHEMABINDING.
Computed columns.
CHECK constraints.
Table-valued functions that return tables with character columns with collations inherited from the
default database collation.
Dependency information for non-schema-bound entities is automatically updated when the
database collation is changed.
Changing the database collation does not create duplicates among any system names for the database
objects. If duplicate names result from the changed collation, the following namespaces may cause the
failure of a database collation change:
Object names such as a procedure, table, trigger, or view.
Schema names.
Principals such as a group, role, or user.
Scalar-type names such as system and user-defined types.
Full-text catalog names.
Column or parameter names within an object.
Index names within a table.
Duplicate names resulting from the new collation will cause the change action to fail, and SQL Server will return
an error message specifying the namespace where the duplicate was found.

Viewing Database Information


You can use catalog views, system functions, and system stored procedures to return information about
databases, files, and filegroups.

Permissions
Requires ALTER permission on the database.
Examples
A. Changing the name of a database
The following example changes the name of the AdventureWorks2012 database to Northwind .

USE master;
GO
ALTER DATABASE AdventureWorks2012
Modify Name = Northwind ;
GO

B. Changing the collation of a database


The following example creates a database named testdb with the SQL_Latin1_General_CP1_CI_AS collation, and
then changes the collation of the testdb database to French_CI_AI.
Applies to: SQL Server 2008 through SQL Server 2017.

USE master;
GO

CREATE DATABASE testdb


COLLATE SQL_Latin1_General_CP1_CI_AS ;
GO

ALTER DATABASE testDB


COLLATE French_CI_AI ;
GO

See Also
ALTER DATABASE (Azure SQL Database)
CREATE DATABASE (SQL Server Transact-SQL)
DATABASEPROPERTYEX (Transact-SQL)
DROP DATABASE (Transact-SQL)
SET TRANSACTION ISOLATION LEVEL (Transact-SQL)
EVENTDATA (Transact-SQL)
sp_configure (Transact-SQL)
sp_spaceused (Transact-SQL)
sys.databases (Transact-SQL)
sys.database_files (Transact-SQL)
sys.database_mirroring_witnesses (Transact-SQL)
sys.data_spaces (Transact-SQL)
sys.filegroups (Transact-SQL)
sys.master_files (Transact-SQL)
System Databases
ALTER DATABASE (Azure SQL Database)
5/16/2018 • 13 min to read

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Modifies an Azure SQL Database. Changes the name of a database, changes the edition and service objective of a
database, joins a database to an elastic pool, and sets database options.
Transact-SQL Syntax Conventions

Syntax
-- Azure SQL Database Syntax
ALTER DATABASE { database_name }
{
MODIFY NAME = new_database_name
| MODIFY ( <edition_options> [, ... n] )
| SET { <option_spec> [ ,... n ] }
| ADD SECONDARY ON SERVER <partner_server_name>
[WITH ( <add-secondary-option>::= [, ... n] ) ]
| REMOVE SECONDARY ON SERVER <partner_server_name>
| FAILOVER
| FORCE_FAILOVER_ALLOW_DATA_LOSS
}
[;]

<edition_options> ::=
{

MAXSIZE = { 100 MB | 250 MB | 500 MB | 1 … 1024 … 4096 GB }


| EDITION = { 'basic' | 'standard' | 'premium' | 'GeneralPurpose' | 'BusinessCritical'}
| SERVICE_OBJECTIVE =
{ <service-objective>
| { ELASTIC_POOL (name = <elastic_pool_name>) }
}
}

<add-secondary-option> ::=
{
ALLOW_CONNECTIONS = { ALL | NO }
| SERVICE_OBJECTIVE =
{ <service-objective>
| { ELASTIC_POOL ( name = <elastic_pool_name>) }
}
}

<service-objective> ::= { 'S0' | 'S1' | 'S2' | 'S3'| 'S4'| 'S6'| 'S7'| 'S9'| 'S12' |
| 'P1' | 'P2' | 'P4'| 'P6' | 'P11' | 'P15'
| 'GP_GEN4_1' | 'GP_GEN4_2' | 'GP_GEN4_4' | 'GP_GEN4_8' | 'GP_GEN4_16' | 'GP_GEN4_24' |
| 'BC_GEN4_1' | 'BC_GEN4_2' | 'BC_GEN4_4' | 'BC_GEN4_8' | 'BC_GEN4_16' | 'BC_GEN4_24' |
| 'GP_GEN5_2' | 'GP_GEN5_4' | 'GP_GEN5_8' | 'GP_GEN5_16' | 'GP_GEN5_24' | 'GP_GEN5_32' | 'GP_GEN5_48' |
'GP_GEN5_80' |
| 'BC_GEN5_2' | 'BC_GEN5_4' | 'BC_GEN5_8' | 'BC_GEN5_16' | 'BC_GEN5_24' | 'BC_GEN5_32' | 'BC_GEN5_48' |
'BC_GEN5_80' |
}

-- SET OPTIONS AVAILABLE FOR SQL Database


-- Full descriptions of the set options are available in the topic
-- ALTER DATABASE SET Options. The supported syntax is listed here.

<option_spec> ::=
{
<auto_option>
| <change_tracking_option>
| <cursor_option>
| <db_encryption_option>
| <db_update_option>
| <db_user_access_option>
| <delayed_durability_option>
| <parameterization_option>
| <query_store_options>
| <snapshot_option>
| <sql_option>
| <target_recovery_time_option>
| <termination>
| <temporal_history_retention>
}

<auto_option> ::=
{
AUTO_CREATE_STATISTICS { OFF | ON [ ( INCREMENTAL = { ON | OFF } ) ] }
| AUTO_SHRINK { ON | OFF }
| AUTO_UPDATE_STATISTICS { ON | OFF }
| AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}

<change_tracking_option> ::=
{
CHANGE_TRACKING
{
= OFF
| = ON [ ( <change_tracking_option_list> [,...n] ) ]
| ( <change_tracking_option_list> [,...n] )
}
}

<change_tracking_option_list> ::=
{
AUTO_CLEANUP = { ON | OFF }
| CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
}

<cursor_option> ::=
{
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
}

<db_encryption_option> ::=
ENCRYPTION { ON | OFF }

<db_update_option> ::=
{ READ_ONLY | READ_WRITE }

<db_user_access_option> ::=
{ RESTRICTED_USER | MULTI_USER }

<delayed_durability_option> ::= DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }

<parameterization_option> ::=
PARAMETERIZATION { SIMPLE | FORCED }

<query_store_options> ::=
{
QUERY_STORE
{
= OFF
| = ON [ ( <query_store_option_list> [,... n] ) ]
| ( <query_store_option_list> [,... n] )
| CLEAR [ ALL ]
}
}

<query_store_option_list> ::=
{
OPERATION_MODE = { READ_WRITE | READ_ONLY }
| CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = number )
| DATA_FLUSH_INTERVAL_SECONDS = number
| MAX_STORAGE_SIZE_MB = number
| INTERVAL_LENGTH_MINUTES = number
| SIZE_BASED_CLEANUP_MODE = [ AUTO | OFF ]
| QUERY_CAPTURE_MODE = [ ALL | AUTO | NONE ]
| MAX_PLANS_PER_QUERY = number
}

<snapshot_option> ::=
{
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
| READ_COMMITTED_SNAPSHOT {ON | OFF }
| MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT {ON | OFF }
}
<sql_option> ::=
{
ANSI_NULL_DEFAULT { ON | OFF }
| ANSI_NULLS { ON | OFF }
| ANSI_PADDING { ON | OFF }
| ANSI_WARNINGS { ON | OFF }
| ARITHABORT { ON | OFF }
| COMPATIBILITY_LEVEL = { 100 | 110 | 120 | 130 | 140 }
| CONCAT_NULL_YIELDS_NULL { ON | OFF }
| NUMERIC_ROUNDABORT { ON | OFF }
| QUOTED_IDENTIFIER { ON | OFF }
| RECURSIVE_TRIGGERS { ON | OFF }
}

<termination> ::=
{
ROLLBACK AFTER integer [ SECONDS ]
| ROLLBACK IMMEDIATE
| NO_WAIT
}

<temporal_history_retention> ::= TEMPORAL_HISTORY_RETENTION { ON | OFF }

For full descriptions of the set options, see ALTER DATABASE SET Options (Transact-SQL) and ALTER
DATABASE Compatibility Level (Transact-SQL).
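As an illustrative sketch of the SET-option syntax above (db1 and the option values are placeholders, not values from this reference), a statement combining the Query Store options shown in the grammar might look like this:

ALTER DATABASE db1
SET QUERY_STORE = ON ( OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 100 );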

Arguments
database_name
Is the name of the database to be modified.
CURRENT
Designates that the current database in use should be altered.
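As a minimal sketch (the SET option chosen here is an arbitrary one from the list above), the following alters whatever database the session is currently using:

ALTER DATABASE CURRENT
SET AUTO_SHRINK OFF;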
MODIFY NAME = new_database_name
Renames the database with the name specified as new_database_name. The following example changes the name
of a database db1 to db2:
ALTER DATABASE db1
MODIFY NAME = db2;

MODIFY (EDITION = ['basic' | 'standard' | 'premium' | 'GeneralPurpose' | 'BusinessCritical'])
Changes the service tier of the database. Support for 'premiumrs' has been removed. For questions, use this
e-mail alias: premium-rs@microsoft.com.
The following example changes the edition to 'premium':

ALTER DATABASE current
MODIFY (EDITION = 'premium');

EDITION change fails if the MAXSIZE property for the database is set to a value outside the valid range
supported by that edition.
MODIFY (MAXSIZE = [100 MB | 250 MB | 500 MB | 1 … 1024 … 4096 GB])
Specifies the maximum size of the database. The maximum size must comply with the valid set of values for the
EDITION property of the database. Changing the maximum size of the database may cause the database
EDITION to be changed. The following table lists the supported MAXSIZE values and the defaults (D) for the SQL
Database service tiers.
DTU-based model

MAXSIZE BASIC S0-S2 S3-S12 P1-P6 P11-P15

100 MB √ √ √ √ √

250 MB √ √ √ √ √

500 MB √ √ √ √ √

1 GB √ √ √ √ √

2 GB √ (D) √ √ √ √

5 GB N/A √ √ √ √

10 GB N/A √ √ √ √

20 GB N/A √ √ √ √

30 GB N/A √ √ √ √

40 GB N/A √ √ √ √

50 GB N/A √ √ √ √

100 GB N/A √ √ √ √

150 GB N/A √ √ √ √

200 GB N/A √ √ √ √

250 GB N/A √ (D) √ (D) √ √

300 GB N/A √ √ √ √

400 GB N/A √ √ √ √

500 GB N/A √ √ √ (D) √

750 GB N/A √ √ √ √

1024 GB N/A √ √ √ √ (D)

From 1024 GB up to 4096 GB in increments of 256 GB* N/A N/A N/A N/A √

* P11 and P15 allow MAXSIZE up to 4 TB with 1024 GB being the default size. P11 and P15 can use up to 4 TB of
included storage at no additional charge. In the Premium tier, MAXSIZE greater than 1 TB is currently available in
the following regions: US East2, West US, US Gov Virginia, West Europe, Germany Central, South East Asia,
Japan East, Australia East, Canada Central, and Canada East. For additional details regarding resource limitations
for the DTU-based model, see DTU-based resource limits.
The MAXSIZE value for the DTU-based model, if specified, has to be a valid value shown in the table above for the
service tier specified.
vCore-based model
General Purpose service tier - Generation 4 compute platform

MAXSIZE GP_GEN4_1 GP_GEN4_2 GP_GEN4_4 GP_GEN4_8 GP_GEN4_16 GP_GEN4_24

Max data size (GB) 1024 1024 1536 3072 4096 4096

General Purpose service tier - Generation 5 compute platform

MAXSIZE GP_GEN5_2 GP_GEN5_4 GP_GEN5_8 GP_GEN5_16 GP_GEN5_24 GP_GEN5_32 GP_GEN5_48 GP_GEN5_80

Max data size (GB) 1024 1024 1536 3072 4096 4096 4096 4096

Business Critical service tier - Generation 4 compute platform

PERFORMANCE LEVEL BC_GEN4_1 BC_GEN4_2 BC_GEN4_4 BC_GEN4_8 BC_GEN4_16

Max data size (GB) 1024 1024 1024 1024 1024

Business Critical service tier - Generation 5 compute platform


MAXSIZE BC_GEN5_2 BC_GEN5_4 BC_GEN5_8 BC_GEN5_16 BC_GEN5_24 BC_GEN5_32 BC_GEN5_48 BC_GEN5_80

Max data size (GB) 1024 1024 1024 1024 2048 4096 4096 4096

If no MAXSIZE value is set when using the vCore model, the default is 32 GB. For additional details regarding
resource limitations for the vCore-based model, see vCore-based resource limits.
The following rules apply to MAXSIZE and EDITION arguments:
If EDITION is specified but MAXSIZE is not specified, the default value for the edition is used. For example,
if the EDITION is set to Standard, and the MAXSIZE is not specified, then the MAXSIZE is automatically
set to 500 MB.
If neither MAXSIZE nor EDITION is specified, the EDITION is set to Standard (S0), and MAXSIZE is set to
250 GB.
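As a minimal sketch of these rules (db1 is a placeholder), the following raises the maximum size; because the new value must be valid for the database's service tier, the EDITION may be adjusted implicitly as described above:

ALTER DATABASE db1
MODIFY ( MAXSIZE = 250 GB );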
MODIFY (SERVICE_OBJECTIVE = <service-objective>)
Specifies the performance level. The following example changes the service objective of a premium database to P6:

ALTER DATABASE current
MODIFY (SERVICE_OBJECTIVE = 'P6');

Available values for service objective are: S0, S1, S2, S3, S4, S6, S7, S9, S12, P1, P2, P4, P6, P11, P15,
GP_GEN4_1, GP_GEN4_2, GP_GEN4_4, GP_GEN4_8, GP_GEN4_16, GP_GEN4_24,
BC_GEN4_1, BC_GEN4_2, BC_GEN4_4, BC_GEN4_8, BC_GEN4_16, BC_GEN4_24,
GP_Gen5_2, GP_Gen5_4, GP_Gen5_8, GP_Gen5_16, GP_Gen5_24, GP_Gen5_32, GP_Gen5_48, GP_Gen5_80,
BC_Gen5_2, BC_Gen5_4, BC_Gen5_8, BC_Gen5_16, BC_Gen5_24, BC_Gen5_32, BC_Gen5_48, BC_Gen5_80.

For service objective descriptions and more information about the size, editions, and the service objectives
combinations, see Azure SQL Database Service Tiers and Performance Levels, DTU-based resource limits and
vCore-based resource limits. Support for PRS service objectives has been removed. For questions, use this
e-mail alias: premium-rs@microsoft.com.
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = <elastic_pool_name>))
To add an existing database to an elastic pool, set the SERVICE_OBJECTIVE of the database to ELASTIC_POOL
and provide the name of the elastic pool. You can also use this option to change the database to a different elastic
pool within the same server. For more information, see Create and manage a SQL Database elastic pool. To
remove a database from an elastic pool, use ALTER DATABASE to set the SERVICE_OBJECTIVE to a single
database performance level.
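For example, the following sketch (db1 and S2 are placeholders) removes a database from its elastic pool by assigning a single-database performance level:

ALTER DATABASE db1
MODIFY ( SERVICE_OBJECTIVE = 'S2' );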
ADD SECONDARY ON SERVER <partner_server_name>
Creates a geo-replication secondary database with the same name on a partner server, making the local database
into a geo-replication primary, and begins asynchronously replicating data from the primary to the new
secondary. If a database with the same name already exists on the secondary, the command fails. The command is
executed on the master database on the server hosting the local database that becomes the primary.
WITH ALLOW_CONNECTIONS { ALL | NO }
When ALLOW_CONNECTIONS is not specified, it is set to ALL by default. If it is set to ALL, it is a read-only
database that allows all logins with the appropriate permissions to connect.
WITH SERVICE_OBJECTIVE { S0, S1, S2, S3, S4, S6, S7, S9, S12, P1, P2, P4, P6, P11, P15,
GP_GEN4_1, GP_GEN4_2, GP_GEN4_4, GP_GEN4_8, GP_GEN4_16, GP_GEN4_24,
BC_GEN4_1, BC_GEN4_2, BC_GEN4_4, BC_GEN4_8, BC_GEN4_16, BC_GEN4_24,
GP_Gen5_2, GP_Gen5_4, GP_Gen5_8, GP_Gen5_16, GP_Gen5_24, GP_Gen5_32, GP_Gen5_48, GP_Gen5_80,
BC_Gen5_2, BC_Gen5_4, BC_Gen5_8, BC_Gen5_16, BC_Gen5_24, BC_Gen5_32, BC_Gen5_48, BC_Gen5_80 }

When SERVICE_OBJECTIVE is not specified, the secondary database is created at the same service level as the
primary database. When SERVICE_OBJECTIVE is specified, the secondary database is created at the specified
level. This option supports creating geo-replicated secondaries with less expensive service levels. The
SERVICE_OBJECTIVE specified must be within the same edition as the source. For example, you cannot specify
S0 if the edition is premium.
ELASTIC_POOL (name = <elastic_pool_name>)
When ELASTIC_POOL is not specified, the secondary database is not created in an elastic pool. When
ELASTIC_POOL is specified, the secondary database is created in the specified pool.
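Combining these options, a hedged sketch of creating a lower-cost secondary that does not accept connections might look like the following (db1 and partnersvr are placeholders; per the rules above, the service objective must be within the same edition as the primary):

ALTER DATABASE db1
ADD SECONDARY ON SERVER partnersvr
WITH ( ALLOW_CONNECTIONS = NO, SERVICE_OBJECTIVE = 'S0' );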

IMPORTANT
The user executing the ADD SECONDARY command must be DBManager on the primary server, have db_owner membership
in the local database, and be DBManager on the secondary server.

REMOVE SECONDARY ON SERVER <partner_server_name>


Removes the specified geo-replicated secondary database on the specified server. The command is executed on
the master database on the server hosting the primary database.

IMPORTANT
The user executing the REMOVE SECONDARY command must be DBManager on the primary server.

FAILOVER
Promotes the secondary database in geo-replication partnership on which the command is executed to become
the primary and demotes the current primary to become the new secondary. As part of this process, the geo-
replication mode is temporarily switched from asynchronous mode to synchronous mode. During the failover
process:
1. The primary stops taking new transactions.
2. All outstanding transactions are flushed to the secondary.
3. The secondary becomes the primary and begins asynchronous geo-replication with the old primary / the
new secondary.
This sequence ensures that no data loss occurs. The period during which both databases are unavailable is on the
order of 0-25 seconds while the roles are switched. The total operation should take no longer than about one
minute. If the primary database is unavailable when this command is issued, the command fails with an error
message indicating that the primary database is not available. If the failover process does not complete and
appears stuck, you can use the force failover command and accept data loss; then, if you need to recover the
lost data, call devops (CSS).

IMPORTANT
The user executing the FAILOVER command must be DBManager on both the primary server and the secondary server.

FORCE_FAILOVER_ALLOW_DATA_LOSS
Promotes the secondary database in geo-replication partnership on which the command is executed to become
the primary and demotes the current primary to become the new secondary. Use this command only when the
current primary is no longer available. It is designed for disaster recovery only, when restoring availability is
critical, and some data loss is acceptable.
During a forced failover:
1. The specified secondary database immediately becomes the primary database and begins accepting new
transactions.
2. When the original primary can reconnect with the new primary, an incremental backup is taken on the
original primary, and the original primary becomes a new secondary.
3. To recover data from this incremental backup on the old primary, the user engages devops/CSS.
4. If there are additional secondaries, they are automatically reconfigured to become secondaries of the new
primary. This process is asynchronous and there may be a delay until this process completes. Until the
reconfiguration has completed, the secondaries continue to be secondaries of the old primary.

IMPORTANT
The user executing the FORCE_FAILOVER_ALLOW_DATA_LOSS command must be DBManager on both the primary server
and the secondary server.
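As a sketch under the caveats above (db1 is a placeholder), a forced failover is issued on the server hosting the secondary database:

ALTER DATABASE db1 FORCE_FAILOVER_ALLOW_DATA_LOSS;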

Remarks
To remove a database, use DROP DATABASE.
To decrease the size of a database, use DBCC SHRINKDATABASE.
The ALTER DATABASE statement must run in autocommit mode (the default transaction management mode) and
is not allowed in an explicit or implicit transaction.
Clearing the plan cache causes a recompilation of all subsequent execution plans and can cause a sudden,
temporary decrease in query performance. For each cleared cachestore in the plan cache, the SQL Server error
log contains the following informational message: "SQL Server has encountered %d occurrence(s) of cachestore
flush for the '%s' cachestore (part of plan cache) due to some database maintenance or reconfigure operations".
This message is logged every five minutes as long as the cache is flushed within that time interval.
The procedure cache is also flushed in the following scenarios:
A database has the AUTO_CLOSE database option set to ON. When no user connection references or uses
the database, the background task tries to close and shut down the database automatically.
You run several queries against a database that has default options. Then, the database is dropped.
You successfully rebuild the transaction log for a database.
You restore a database backup.
You detach a database.

Viewing Database Information


You can use catalog views, system functions, and system stored procedures to return information about databases,
files, and filegroups.
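For example, the following minimal query (one possible starting point, not an exhaustive list of the available views) returns basic status information from sys.databases:

SELECT name, state_desc, collation_name
FROM sys.databases;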
Permissions
Only the server-level principal login (created by the provisioning process) or members of the dbmanager database
role can alter a database.

IMPORTANT
The owner of the database cannot alter the database unless they are a member of the dbmanager role.

Examples
A. Check the edition options and change them:

SELECT Edition = DATABASEPROPERTYEX('db1', 'EDITION'),
       ServiceObjective = DATABASEPROPERTYEX('db1', 'ServiceObjective'),
       MaxSizeInBytes = DATABASEPROPERTYEX('db1', 'MaxSizeInBytes');

ALTER DATABASE [db1] MODIFY (EDITION = 'Premium', MAXSIZE = 1024 GB, SERVICE_OBJECTIVE = 'P15');

B. Moving a database to a different elastic pool


Moves an existing database into a pool named pool1:

ALTER DATABASE db1
MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = pool1 ) );

C. Add a Geo-Replication Secondary


Creates a readable secondary database db1 on server secondaryserver from the database db1 on the local server.

ALTER DATABASE db1
ADD SECONDARY ON SERVER secondaryserver
WITH ( ALLOW_CONNECTIONS = ALL );

D. Remove a Geo-Replication Secondary


Removes the secondary database db1 on server secondaryserver.

ALTER DATABASE db1
REMOVE SECONDARY ON SERVER secondaryserver;

E. Failover to a Geo-Replication Secondary


Promotes a secondary database db1 on server secondaryserver to become the new primary database when
executed on server secondaryserver.

ALTER DATABASE db1 FAILOVER

See also
CREATE DATABASE - Azure SQL Database
DATABASEPROPERTYEX
DROP DATABASE
SET TRANSACTION ISOLATION LEVEL
EVENTDATA
sp_configure
sp_spaceused
sys.databases
sys.database_files
sys.database_mirroring_witnesses
sys.data_spaces
sys.filegroups
sys.master_files
System Databases
ALTER DATABASE (Azure SQL Data Warehouse)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the name, maximum size, or service objective for a database.
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE database_name
MODIFY NAME = new_database_name
| MODIFY ( <edition_option> [, ... n] )

<edition_option> ::=
MAXSIZE = {
250 | 500 | 750 | 1024 | 5120 | 10240 | 20480
| 30720 | 40960 | 51200 | 61440 | 71680 | 81920
| 92160 | 102400 | 153600 | 204800 | 245760
} GB
| SERVICE_OBJECTIVE = {
'DW100' | 'DW200' | 'DW300' | 'DW400' | 'DW500'
| 'DW600' | 'DW1000' | 'DW1200' | 'DW1500' | 'DW2000'
| 'DW3000' | 'DW6000' | 'DW1000c' | 'DW1500c' | 'DW2000c'
| 'DW2500c' | 'DW3000c' | 'DW5000c' | 'DW6000c' | 'DW7500c'
| 'DW10000c' | 'DW15000c' | 'DW30000c'
}

Arguments
database_name
Specifies the name of the database to be modified.
MODIFY NAME = new_database_name
Renames the database with the name specified as new_database_name.
MAXSIZE
The default is 245,760 GB (240 TB).
Applies to: Optimized for Elasticity performance tier
The maximum allowable size for the database. The database cannot grow beyond MAXSIZE.
Applies to: Optimized for Compute performance tier
The maximum allowable size for rowstore data in the database. Data stored in rowstore tables, a columnstore
index's deltastore, or a nonclustered index on a clustered columnstore index cannot grow beyond MAXSIZE. Data
compressed into columnstore format does not have a size limit and is not constrained by MAXSIZE.
SERVICE_OBJECTIVE
Specifies the performance level. For more information about service objectives for SQL Data Warehouse, see
Performance Tiers.
Permissions
Requires these permissions:
Server-level principal login (the one created by the provisioning process), or
Member of the dbmanager database role.
The owner of the database cannot alter the database unless the owner is a member of the dbmanager role.

General Remarks
The current database must be a different database than the one you are altering; therefore, ALTER DATABASE must
be run while connected to the master database.
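A minimal sketch of this (dw1 is the database being altered, as in the examples below): verify the session context first, then issue the change from master.

SELECT DB_NAME() AS current_database;  -- should return 'master'
ALTER DATABASE dw1 MODIFY ( MAXSIZE = 1024 GB );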
SQL Data Warehouse is set to COMPATIBILITY_LEVEL 130 and cannot be changed. For more details, see
Improved Query Performance with Compatibility Level 130 in Azure SQL Database.
To decrease the size of a database, use DBCC SHRINKDATABASE.

Limitations and Restrictions


To run ALTER DATABASE, the database must be online and cannot be in a paused state.
The ALTER DATABASE statement must run in autocommit mode, which is the default transaction management
mode. This is set in the connection settings.
The ALTER DATABASE statement cannot be part of a user-defined transaction.
You cannot change the database collation.

Examples
Before you run these examples, make sure the database you are altering is not the current database. The current
database must be a different database than the one you are altering; therefore, ALTER must be run while
connected to the master database.
A. Change the name of the database

ALTER DATABASE AdventureWorks2012
MODIFY NAME = Northwind;

B. Change max size for the database

ALTER DATABASE dw1 MODIFY ( MAXSIZE=10240 GB );

C. Change the performance level

ALTER DATABASE dw1 MODIFY ( SERVICE_OBJECTIVE= 'DW1200' );

D. Change the max size and the performance level

ALTER DATABASE dw1 MODIFY ( MAXSIZE=10240 GB, SERVICE_OBJECTIVE= 'DW1200' );

See Also
CREATE DATABASE (Azure SQL Data Warehouse)
SQL Data Warehouse list of reference topics
ALTER DATABASE (Parallel Data Warehouse)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Modifies the maximum database size options for replicated tables, distributed tables, and the transaction log in
Parallel Data Warehouse. Use this statement to manage disk space allocations for a database as it grows or shrinks
in size. The topic also describes syntax related to setting database options in Parallel Data Warehouse.
Transact-SQL Syntax Conventions (Transact-SQL )

Syntax
-- Parallel Data Warehouse
ALTER DATABASE database_name
SET ( <set_database_options> | <db_encryption_option> )
[;]

<set_database_options> ::=
{
AUTOGROW = { ON | OFF }
| REPLICATED_SIZE = size [GB]
| DISTRIBUTED_SIZE = size [GB]
| LOG_SIZE = size [GB]
| SET AUTO_CREATE_STATISTICS { ON | OFF }
| SET AUTO_UPDATE_STATISTICS { ON | OFF }
| SET AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}

<db_encryption_option> ::=
ENCRYPTION { ON | OFF }

Arguments
database_name
The name of the database to be modified. To display a list of databases on the appliance, use sys.databases
(Transact-SQL).
AUTOGROW = { ON | OFF }
Updates the AUTOGROW option. When AUTOGROW is ON, Parallel Data Warehouse automatically increases
the allocated space for replicated tables, distributed tables, and the transaction log as necessary to accommodate
growth in storage requirements. When AUTOGROW is OFF, Parallel Data Warehouse returns an error if replicated
tables, distributed tables, or the transaction log exceeds the maximum size setting.
REPLICATED_SIZE = size [GB]
Specifies the new maximum gigabytes per Compute node for storing all of the replicated tables in the database
being altered. If you are planning for appliance storage space, you will need to multiply REPLICATED_SIZE by the
number of Compute nodes in the appliance. For example, REPLICATED_SIZE = 2 GB on an appliance with 8
Compute nodes consumes up to 16 GB of appliance storage.
DISTRIBUTED_SIZE = size [GB]
Specifies the new maximum gigabytes per database for storing all of the distributed tables in the database being
altered. The size is distributed across all of the Compute nodes in the appliance.
LOG_SIZE = size [GB]
Specifies the new maximum gigabytes per database for storing all of the transaction logs in the database being
altered. The size is distributed across all of the Compute nodes in the appliance.
ENCRYPTION { ON | OFF }
Sets the database to be encrypted (ON ) or not encrypted (OFF ). Encryption can only be configured for Parallel
Data Warehouse when sp_pdw_database_encryption has been set to 1. A database encryption key must be created
before transparent data encryption can be configured. For more information about database encryption, see
Transparent Data Encryption (TDE ).
SET AUTO_CREATE_STATISTICS { ON | OFF }
When the automatic create statistics option, AUTO_CREATE_STATISTICS, is ON, the Query Optimizer creates
statistics on individual columns in the query predicate, as necessary, to improve cardinality estimates for the query
plan. These single-column statistics are created on columns that do not already have a histogram in an existing
statistics object.
Default is ON for new databases created after upgrading to AU7. The default is OFF for databases created prior to
the upgrade.
For more information about statistics, see Statistics.
SET AUTO_UPDATE_STATISTICS { ON | OFF }
When the automatic update statistics option, AUTO_UPDATE_STATISTICS, is ON, the query optimizer determines when
statistics might be out-of-date and
then updates them when they are used by a query. Statistics become out-of-date after operations insert, update,
delete, or merge change the data distribution in the table or indexed view. The query optimizer determines when
statistics might be out-of-date by counting the number of data modifications since the last statistics update and
comparing the number of modifications to a threshold. The threshold is based on the number of rows in the table
or indexed view.
Default is ON for new databases created after upgrading to AU7. The default is OFF for databases created prior to
the upgrade.
For more information about statistics, see Statistics.
SET AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
The asynchronous statistics update option,
AUTO_UPDATE_STATISTICS_ASYNC, determines whether the Query Optimizer uses synchronous or
asynchronous statistics updates. The AUTO_UPDATE_STATISTICS_ASYNC option applies to statistics objects
created for indexes, single columns in query predicates, and statistics created with the CREATE STATISTICS
statement.
Default is ON for new databases created after upgrading to AU7. The default is OFF for databases created prior to
the upgrade.
For more information about statistics, see Statistics.

Permissions
Requires the ALTER permission on the database.

Error Messages
If auto-stats is disabled and you try to alter the statistics settings, PDW gives the error "This option is not
supported in PDW." The system administrator can enable auto-stats by enabling the feature switch
AutoStatsEnabled.

General Remarks
The values for REPLICATED_SIZE, DISTRIBUTED_SIZE, and LOG_SIZE can be greater than, equal to, or less than
the current values for the database.
Limitations and Restrictions
Grow and shrink operations are approximate. The resulting actual sizes can vary from the size parameters.
Parallel Data Warehouse does not perform the ALTER DATABASE statement as an atomic operation. If the
statement is aborted during execution, changes that have already occurred will remain.
The statistics settings only work if the administrator has enabled auto-stats. If you are an administrator, use the
feature switch AutoStatsEnabled to enable or disable auto-stats.

Locking Behavior
Takes a shared lock on the DATABASE object. You cannot alter a database that is in use by another user for reading
or writing. This includes sessions that have issued a USE statement on the database.

Performance
Shrinking a database can take a large amount of time and system resources, depending on the size of the actual
data within the database, and the amount of fragmentation on disk. For example, shrinking a database could take
several hours or more.

Determining Encryption Progress


Use the following query to determine progress of database transparent data encryption as a percent:
WITH
database_dek AS (
SELECT ISNULL(db_map.database_id, dek.database_id) AS database_id,
dek.encryption_state, dek.percent_complete,
dek.key_algorithm, dek.key_length, dek.encryptor_thumbprint,
type
FROM sys.dm_pdw_nodes_database_encryption_keys AS dek
INNER JOIN sys.pdw_nodes_pdw_physical_databases AS node_db_map
ON dek.database_id = node_db_map.database_id
AND dek.pdw_node_id = node_db_map.pdw_node_id
LEFT JOIN sys.pdw_database_mappings AS db_map
ON node_db_map.physical_name = db_map.physical_name
INNER JOIN sys.dm_pdw_nodes nodes
ON nodes.pdw_node_id = dek.pdw_node_id
WHERE dek.encryptor_thumbprint <> 0x
),
dek_percent_complete AS (
SELECT database_dek.database_id, AVG(database_dek.percent_complete) AS percent_complete
FROM database_dek
WHERE type = 'COMPUTE'
GROUP BY database_dek.database_id
)
SELECT DB_NAME( database_dek.database_id ) AS name,
database_dek.database_id,
ISNULL(
(SELECT TOP 1 dek_encryption_state.encryption_state
FROM database_dek AS dek_encryption_state
WHERE dek_encryption_state.database_id = database_dek.database_id
ORDER BY (CASE encryption_state
WHEN 3 THEN -1
ELSE encryption_state
END) DESC), 0)
AS encryption_state,
dek_percent_complete.percent_complete,
database_dek.key_algorithm, database_dek.key_length, database_dek.encryptor_thumbprint
FROM database_dek
INNER JOIN dek_percent_complete
ON dek_percent_complete.database_id = database_dek.database_id
WHERE type = 'CONTROL';

For a comprehensive example demonstrating all the steps in implementing TDE, see Transparent Data Encryption
(TDE ).

Examples: Parallel Data Warehouse


A. Altering the AUTOGROW setting
Set AUTOGROW to ON for database CustomerSales.

ALTER DATABASE CustomerSales
SET ( AUTOGROW = ON );

B. Altering the maximum storage for replicated tables


The following example sets the replicated table storage limit to 1 GB for the database CustomerSales. This is the
storage limit per Compute node.

ALTER DATABASE CustomerSales
SET ( REPLICATED_SIZE = 1 GB );

C. Altering the maximum storage for distributed tables


The following example sets the distributed table storage limit to 1000 GB (one terabyte) for the database
CustomerSales. This is the combined storage limit across the appliance for all of the Compute nodes, not the
storage limit per Compute node.

ALTER DATABASE CustomerSales
SET ( DISTRIBUTED_SIZE = 1000 GB );

D. Altering the maximum storage for the transaction log


The following example updates the database CustomerSales to have a maximum SQL Server transaction log size
of 10 GB for the appliance.

ALTER DATABASE CustomerSales
SET ( LOG_SIZE = 10 GB );

E. Check for current statistics values


The following query returns the current statistics values for all databases. The value 1 means the feature is on, and
a 0 means the feature is off.

SELECT NAME,
is_auto_create_stats_on,
is_auto_update_stats_on,
is_auto_update_stats_async_on
FROM sys.databases;

F. Enable auto-create and auto-update stats for a database


Use the following statement to enable create and update statistics automatically and asynchronously for the
CustomerSales database. This creates and updates single-column statistics as necessary to create high-quality query plans.

ALTER DATABASE CustomerSales
SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE CustomerSales
SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE CustomerSales
SET AUTO_UPDATE_STATISTICS_ASYNC ON;

See Also
CREATE DATABASE (Parallel Data Warehouse)
DROP DATABASE (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a database audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE AUDIT SPECIFICATION audit_specification_name
{
[ FOR SERVER AUDIT audit_name ]
[ { { ADD | DROP } (
{ <audit_action_specification> | audit_action_group_name }
)
} [, ...n] ]
[ WITH ( STATE = { ON | OFF } ) ]
}
[ ; ]
<audit_action_specification>::=
{
<action_specification>[ ,...n ] ON [ class :: ] securable
BY principal [ ,...n ]
}

Arguments
audit_specification_name
The name of the audit specification.
audit_name
The name of the audit to which this specification is applied.
audit_action_specification
Name of one or more database-level auditable actions. For a list of audit action groups, see SQL Server Audit
Action Groups and Actions.
audit_action_group_name
Name of one or more groups of database-level auditable actions. For a list of audit action groups, see SQL Server
Audit Action Groups and Actions.
class
Class name (if applicable) on the securable.
securable
Table, view, or other securable object in the database on which to apply the audit action or audit action group. For
more information, see Securables.
column
Column name (if applicable) on the securable.
principal
Name of SQL Server principal on which to apply the audit action or audit action group. For more information, see
Principals (Database Engine).
WITH ( STATE = { ON | OFF } )
Enables or disables the audit from collecting records for this audit specification. Audit specification state changes
must be done outside a user transaction and may not have other changes in the same statement when the
transition is ON to OFF.

Remarks
Database audit specifications are non-securable objects that reside in a given database. You must set the state of
an audit specification to the OFF option in order to make changes to a database audit specification. If ALTER
DATABASE AUDIT SPECIFICATION is executed when an audit is enabled with any options other than
STATE=OFF, you will receive an error message. For more information, see tempdb Database.
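A hedged sketch of that workflow, reusing the specification and audit names from the example below: disable the specification, apply the change, and re-enable it.

ALTER DATABASE AUDIT SPECIFICATION HIPPA_Audit_DB_Specification
WITH (STATE = OFF);
GO
-- ADD or DROP audit actions here while the specification is disabled.
ALTER DATABASE AUDIT SPECIFICATION HIPPA_Audit_DB_Specification
WITH (STATE = ON);
GO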

Permissions
Users with the ALTER ANY DATABASE AUDIT permission can alter database audit specifications and bind them
to any audit.
After a database audit specification is created, it can be viewed by principals with the CONTROL SERVER, or
ALTER ANY DATABASE AUDIT permissions, the sysadmin account, or principals having explicit access to the
audit.

Examples
The following example alters a database audit specification called HIPPA_Audit_DB_Specification that audits
SELECT statements by the dbo user, for a SQL Server audit called HIPPA_Audit.

ALTER DATABASE AUDIT SPECIFICATION HIPPA_Audit_DB_Specification
FOR SERVER AUDIT HIPPA_Audit
ADD (SELECT
ON OBJECT::dbo.Table1
BY dbo)
WITH (STATE = ON);
GO

For a full example about how to create an audit, see SQL Server Audit (Database Engine).

See Also
CREATE SERVER AUDIT (Transact-SQL)
ALTER SERVER AUDIT (Transact-SQL)
DROP SERVER AUDIT (Transact-SQL)
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL)
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL)
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL)
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sys.fn_get_audit_file (Transact-SQL)
sys.server_audits (Transact-SQL)
sys.server_file_audits (Transact-SQL)
sys.server_audit_specifications (Transact-SQL)
sys.server_audit_specification_details (Transact-SQL)
sys.database_audit_specifications (Transact-SQL)
sys.database_audit_specification_details (Transact-SQL)
sys.dm_server_audit_status (Transact-SQL)
sys.dm_audit_actions (Transact-SQL)
Create a Server Audit and Server Audit Specification
ALTER DATABASE (Transact-SQL) Compatibility Level

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets certain database behaviors to be compatible with the specified version of SQL Server. For other ALTER
DATABASE options, see ALTER DATABASE (Transact-SQL).

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 | 90 }

Arguments
database_name
Is the name of the database to be modified.
COMPATIBILITY_LEVEL { 140 | 130 | 120 | 110 | 100 | 90 | 80 }
Is the version of SQL Server with which the database is to be made compatible. The following compatibility level
values can be configured:

PRODUCT                  DATABASE ENGINE VERSION  COMPATIBILITY LEVEL DESIGNATION  SUPPORTED COMPATIBILITY LEVEL VALUES

SQL Server 2017 (14.x)   14                       140                              140, 130, 120, 110, 100
Azure SQL Database       12                       130                              140, 130, 120, 110, 100
SQL Server 2016 (13.x)   13                       130                              130, 120, 110, 100
SQL Server 2014 (12.x)   12                       120                              120, 110, 100
SQL Server 2012 (11.x)   11                       110                              110, 100, 90
SQL Server 2008 R2       10.5                     100                              100, 90, 80
SQL Server 2008          10                       100                              100, 90, 80
SQL Server 2005          9                        90                               90, 80
SQL Server 2000          8                        80                               80

NOTE
As of January 2018, in Azure SQL Database, the default compatibility level is 140 for newly created databases. We do
not update the database compatibility level for existing databases; customers can do so at their own discretion. With
that said, we highly recommend customers plan on moving to the latest compatibility level in order to leverage the
latest improvements.
If you want to leverage database compatibility level 140 for your database overall, but you have reason to prefer the
cardinality estimation model of SQL Server 2012 (11.x), mapping to database compatibility level 110, see ALTER
DATABASE SCOPED CONFIGURATION (Transact-SQL), and in particular its keyword
LEGACY_CARDINALITY_ESTIMATION = ON.
For details about how to assess the performance differences of your most important queries, between two compatibility
levels on Azure SQL Database, see Improved Query Performance with Compatibility Level 130 in Azure SQL Database. Note
that this article refers to compatibility level 130 and SQL Server, but the same methodology applies for moves to 140 for
SQL Server and Azure SQL Database.
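As a sketch of that combination (run in the context of the target database), the following documented statement opts into the legacy cardinality estimation model while the database itself stays at the higher compatibility level:

ALTER DATABASE SCOPED CONFIGURATION
SET LEGACY_CARDINALITY_ESTIMATION = ON;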

Execute the following query to determine the version of the Database Engine that you are connected to.

SELECT SERVERPROPERTY('ProductVersion');

NOTE
Not all features that vary by compatibility level are supported on Azure SQL Database.

To determine the current compatibility level, query the compatibility_level column of sys.databases
(Transact-SQL).

SELECT name, compatibility_level FROM sys.databases;

Remarks
For all installations of SQL Server, the default compatibility level is set to the version of the Database Engine.
Databases are set to this level unless the model database has a lower compatibility level. When a database is
upgraded from any earlier version of SQL Server, the database retains its existing compatibility level, if it is at least
the minimum allowed for that instance of SQL Server. Upgrading a database with a compatibility level lower than the
allowed level automatically sets the database to the lowest compatibility level allowed. This applies to both system
The below behaviors are expected for SQL Server 2017 (14.x) when a database is attached or restored, and after
an in-place upgrade:
If the compatibility level of a user database was 100 or higher before the upgrade, it remains the same after
upgrade.
If the compatibility level of a user database was 90 before upgrade, in the upgraded database, the compatibility
level is set to 100, which is the lowest supported compatibility level in SQL Server 2017 (14.x).
The compatibility levels of the tempdb, model, msdb and Resource databases are set to the current
compatibility level after upgrade.
The master system database retains the compatibility level it had before upgrade.
Use ALTER DATABASE to change the compatibility level of the database. The new compatibility level setting for a
database takes effect when a USE <database> command is issued, or a new login is processed with that database
as the default database context.
To view the current compatibility level of a database, query the compatibility_level column in the sys.databases
catalog view.
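For example, a minimal sketch using the AdventureWorks2012 database from the earlier examples: change the level, then confirm it from the catalog view.

ALTER DATABASE AdventureWorks2012
SET COMPATIBILITY_LEVEL = 140;
GO
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'AdventureWorks2012';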

NOTE
A distribution database that was created in an earlier version of SQL Server and is upgraded to SQL Server 2016 (13.x) RTM
or Service Pack 1 has a compatibility level of 90, which is not supported for other databases. This does not have an impact
on the functionality of replication. Upgrading to later service packs and versions of SQL Server will result in the
compatibility level of the distribution database being increased to match that of the master database.

Compatibility Levels and SQL Server Upgrades


Database compatibility level is a valuable tool to assist in database modernization, by allowing the SQL Server
Database Engine to be upgraded while keeping connecting applications functional, by maintaining the same
pre-upgrade database compatibility level. As long as the application does not need to leverage enhancements that
are only available in a higher database compatibility level, it is a valid approach to upgrade the SQL Server
Database Engine and maintain the previous database compatibility level. For more information on using
compatibility level for backward compatibility, see the Using Compatibility Level for Backward Compatibility section later
in this article.
For new development work, or when an existing application requires use of new features, as well as performance
improvements done in the query optimizer space, plan to upgrade the database compatibility level to the latest
available in SQL Server, and certify your application to work with that compatibility level. For more details on
upgrading the database compatibility level, see the Best Practices for upgrading Database Compatibility Level
section later in this article.

TIP
If an application was tested and certified on a given SQL Server version, then it was implicitly tested and certified on that
SQL Server version native database compatibility level.
So, database compatibility level provides an easy certification path for an existing application, when using the database
compatibility level corresponding to the tested SQL Server version.
For more information about differences between compatibility levels, see the appropriate sections later in this article.

To upgrade the SQL Server Database Engine to the latest version, while maintaining the database compatibility
level that existed before the upgrade and its supportability status, it is recommended to perform static functional
surface area validation of the application code in the database, by using the Microsoft Data Migration Assistant
tool (DMA). The absence of errors in the DMA tool output, about missing or incompatible functionality, protects
the application from any functional regressions on the new target version. For more information on the DMA tool, see
here.

NOTE
DMA supports database compatibility level 100 and above. SQL Server 2005 as source version is excluded.
IMPORTANT
Microsoft recommends that some minimal testing is done to validate the success of an upgrade, while maintaining the
previous database compatibility level. You should determine what minimal testing means for your own application and
scenario.

NOTE
Microsoft provides query plan shape protection when:
The new SQL Server version (target) runs on hardware that is comparable to the hardware where the previous SQL
Server version (source) was running.
The same supported database compatibility level is used both at the target SQL Server and source SQL Server.
Any query plan shape regression (as compared to the source SQL Server) that occurs in the above conditions will be
addressed. Please contact Microsoft Customer Support if this is the case.

Using Compatibility Level for Backward Compatibility


The database compatibility level setting affects behaviors only for the specified database, not for the entire server.
Database compatibility level provides only partial backward compatibility with earlier versions of SQL Server.
Starting with compatibility mode 130, any new query plan affecting features have been intentionally added only
to the new compatibility level. This has been done in order to minimize the risk during upgrades that arise from
performance degradation due to query plan changes.
From an application perspective, the goal should still be to upgrade to the latest compatibility level at some point
in time, in order to inherit some of the new features, as well as performance improvements done in the query
optimizer space, but to do so in a controlled way. Use the lower compatibility level as a safer migration aid to work
around version differences, in the behaviors that are controlled by the relevant compatibility level setting. For
more details, including the recommended workflow for upgrading database compatibility level, see the Best
Practices for upgrading Database Compatibility Level section later in this article.

IMPORTANT
Discontinued functionality introduced in a given SQL Server version is not protected by compatibility level. This
refers to functionality that was removed from the SQL Server Database Engine.
For example, the FASTFIRSTROW hint was discontinued in SQL Server 2012 (11.x) and replaced with the
OPTION (FAST n ) hint. Setting the database compatibility level to 110 will not restore the discontinued hint. For more
information on discontinued functionality, see Discontinued Database Engine Functionality in SQL Server 2016, Discontinued
Database Engine Functionality in SQL Server 2014, Discontinued Database Engine Functionality in SQL Server 2012, and
Discontinued Database Engine Functionality in SQL Server 2008.
IMPORTANT
Breaking changes introduced in a given SQL Server version may not be protected by compatibility level. This refers to
behavior changes between versions of the SQL Server Database Engine. Transact-SQL behavior is usually protected by
compatibility level. However, changed or removed system objects are not protected by compatibility level.
An example of a breaking change protected by compatibility level is an implicit conversion from datetime to datetime2 data
types. Under database compatibility level 130, these show improved accuracy by accounting for the fractional milliseconds,
resulting in different converted values. To restore previous conversion behavior, set the database compatibility level to 120
or lower.
Examples of breaking changes not protected by compatibility level are:
Changed column names in system objects. In SQL Server 2012 (11.x) the column single_pages_kb in sys.dm_os_sys_info
was renamed to pages_kb. Regardless of the compatibility level, the query
SELECT single_pages_kb FROM sys.dm_os_sys_info will produce error 207 (Invalid column name).
Removed system objects. In SQL Server 2012 (11.x) the sp_dboption was removed. Regardless of the compatibility
level, the statement EXEC sp_dboption 'AdventureWorks2016CTP3', 'autoshrink', 'FALSE'; will produce error 2812
(Could not find stored procedure 'sp_dboption').
For more information on breaking changes, see Breaking Changes to Database Engine Features in SQL Server 2017,
Breaking Changes to Database Engine Features in SQL Server 2016, Breaking Changes to Database Engine Features in SQL
Server 2014, Breaking Changes to Database Engine Features in SQL Server 2012, and Breaking Changes to Database
Engine Features in SQL Server 2008.
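To illustrate the datetime-to-datetime2 conversion change described in the note above, here is a small sketch (the literal is an arbitrary value chosen for illustration):

DECLARE @dt datetime = '2016-01-01 12:00:00.003';
SELECT CAST(@dt AS datetime2(7)) AS converted_value;
-- Under compatibility level 130 or higher, the conversion accounts for
-- datetime's 1/300-second precision and returns 2016-01-01 12:00:00.0033333;
-- under level 120 or lower it returns 2016-01-01 12:00:00.0030000.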

Best Practices for upgrading Database Compatibility Level


For the recommended workflow for upgrading the compatibility level, see Change the Database Compatibility
Mode and Use the Query Store.

Compatibility Levels and Stored Procedures


When a stored procedure executes, it uses the current compatibility level of the database in which it is defined.
When the compatibility setting of a database is changed, all of its stored procedures are automatically recompiled
accordingly.

Differences Between Compatibility Level 130 and Level 140


This section describes new behaviors introduced with compatibility level 140.

Compatibility-level setting of 130 or lower: Cardinality estimates for statements referencing multi-statement
table valued functions use a fixed row guess.
Compatibility-level setting of 140: Cardinality estimates for eligible statements referencing multi-statement
table valued functions will use the actual cardinality of the function output. This is enabled via interleaved
execution for multi-statement table valued functions.

Compatibility-level setting of 130 or lower: Batch-mode queries that request insufficient memory grant sizes
that result in spills to disk may continue to have issues on consecutive executions.
Compatibility-level setting of 140: Batch-mode queries that request insufficient memory grant sizes that result
in spills to disk may have improved performance on consecutive executions. This is enabled via batch mode
memory grant feedback, which will update the memory grant size of a cached plan if spills have occurred for
batch mode operators.

Compatibility-level setting of 130 or lower: Batch-mode queries that request an excessive memory grant size
that results in concurrency issues may continue to have issues on consecutive executions.
Compatibility-level setting of 140: Batch-mode queries that request an excessive memory grant size that results
in concurrency issues may have improved concurrency on consecutive executions. This is enabled via batch mode
memory grant feedback, which will update the memory grant size of a cached plan if an excessive amount was
originally requested.

Compatibility-level setting of 130 or lower: Batch-mode queries that contain join operators are eligible for
three physical join algorithms, including nested loop, hash join, and merge join. If cardinality estimates are
incorrect for join inputs, an inappropriate join algorithm may be selected. If this occurs, performance will
suffer and the inappropriate join algorithm will remain in use until the cached plan is recompiled.
Compatibility-level setting of 140: There is an additional join operator called adaptive join. If cardinality
estimates are incorrect for the outer build join input, an inappropriate join algorithm may be selected. If this
occurs and the statement is eligible for an adaptive join, a nested loop will be used for smaller join inputs and
a hash join will be used for larger join inputs dynamically, without requiring recompilation.

Compatibility-level setting of 130 or lower: Trivial plans referencing Columnstore indexes are not eligible for
batch mode execution.
Compatibility-level setting of 140: A trivial plan referencing Columnstore indexes will be discarded in favor of
a plan that is eligible for batch mode execution.

Compatibility-level setting of 130 or lower: The sp_execute_external_script UDX operator can only run in row mode.
Compatibility-level setting of 140: The sp_execute_external_script UDX operator is eligible for batch mode
execution.

Compatibility-level setting of 130 or lower: Multi-statement table-valued functions (TVFs) do not have
interleaved execution.
Compatibility-level setting of 140: Interleaved execution for multi-statement TVFs improves plan quality.

Fixes that were under trace flag 4199 in earlier versions of SQL Server prior to SQL Server 2017 are now
enabled by default with compatibility level 140. Trace flag 4199 will still be applicable for new query optimizer
fixes that are released after SQL Server 2017. For information about Trace Flag 4199, see Trace Flag 4199.

Differences Between Compatibility Level 120 and Level 130


This section describes new behaviors introduced with compatibility level 130.

Compatibility-level setting of 120 or lower: The Insert in an Insert-select statement is single-threaded.
Compatibility-level setting of 130: The Insert in an Insert-select statement is multi-threaded or can have a
parallel plan.

Compatibility-level setting of 120 or lower: Queries on a memory-optimized table execute single-threaded.
Compatibility-level setting of 130: Queries on a memory-optimized table can now have parallel plans.

Compatibility-level setting of 120 or lower: Introduced the SQL 2014 cardinality estimator,
CardinalityEstimationModelVersion="120".
Compatibility-level setting of 130: Further cardinality estimation (CE) improvements with the Cardinality
Estimation Model 130, which is visible from a query plan. CardinalityEstimationModelVersion="130".

Compatibility-level setting of 120 or lower: Batch mode versus Row Mode changes with Columnstore indexes:
sorts on a table with a Columnstore index are in Row mode; windowing function aggregates such as LAG or LEAD
operate in row mode; queries on Columnstore tables with multiple distinct clauses operate in Row mode; queries
running under MAXDOP 1 or with a serial plan execute in Row mode.
Compatibility-level setting of 130: Batch mode versus Row Mode changes with Columnstore indexes: sorts on a
table with a Columnstore index are now in batch mode; windowing aggregates such as LAG or LEAD now operate in
batch mode; queries on Columnstore tables with multiple distinct clauses operate in Batch mode; queries running
under MAXDOP 1 or with a serial plan execute in Batch mode.

Compatibility-level setting of 120 or lower: Statistics can be automatically updated.
Compatibility-level setting of 130: The logic which automatically updates statistics is more aggressive on large
tables. In practice, this should reduce cases where customers have seen performance issues on queries where
newly inserted rows are queried frequently but where the statistics had not been updated to include those values.

Compatibility-level setting of 120 or lower: Trace 2371 is OFF by default in SQL Server 2014 (12.x).
Compatibility-level setting of 130: Trace 2371 is ON by default in SQL Server 2016 (13.x). Trace flag 2371 tells
the auto statistics updater to sample a smaller yet wiser subset of rows, in a table that has a great many rows.
One improvement is to include in the sample more rows that were inserted recently. Another improvement is to let
queries run while the update statistics process is running, rather than blocking the query.

Compatibility-level setting of 120 or lower: For level 120, statistics are sampled by a single-threaded process.
Compatibility-level setting of 130: For level 130, statistics are sampled by a multi-threaded process.

Compatibility-level setting of 120 or lower: 253 incoming foreign keys is the limit.
Compatibility-level setting of 130: A given table can be referenced by up to 10,000 incoming foreign keys or
similar references. For restrictions, see Create Foreign Key Relationships.

Compatibility-level setting of 120 or lower: The deprecated MD2, MD4, MD5, SHA, and SHA1 hash algorithms are
permitted.
Compatibility-level setting of 130: Only SHA2_256 and SHA2_512 hash algorithms are permitted.

Compatibility-level setting of 130: SQL Server 2016 (13.x) includes improvements in some data type conversions
and some (mostly uncommon) operations. For details, see SQL Server 2016 improvements in handling some data
types and uncommon operations.

Fixes that were under trace flag 4199 in earlier versions of SQL Server prior to SQL Server 2016 (13.x) are now
enabled by default with compatibility level 130. Trace flag 4199 will still be applicable for new query optimizer
fixes that are released after SQL Server 2016 (13.x). To use the older query optimizer in SQL Database, you must
select compatibility level 110. For information about Trace Flag 4199, see Trace Flag 4199.

Differences Between Lower Compatibility Levels and Level 120


This section describes new behaviors introduced with compatibility level 120.
Compatibility-level setting of 110 or lower: The older query optimizer is used.
Compatibility-level setting of 120: SQL Server 2014 (12.x) includes substantial improvements to the component
that creates and optimizes query plans. This new query optimizer feature is dependent upon use of the database
compatibility level 120. New database applications should be developed using database compatibility level 120 to
take advantage of these improvements. Applications that are migrated from earlier versions of SQL Server should
be carefully tested to confirm that good performance is maintained or improved. If performance degrades, you can
set the database compatibility level to 110 or earlier to use the older query optimizer methodology. Database
compatibility level 120 uses a new cardinality estimator that is tuned for modern data warehousing and OLTP
workloads. Before setting database compatibility level to 110 because of performance issues, see the
recommendations in the Query Plans section of the SQL Server 2014 (12.x) What's New in Database Engine topic.

Compatibility-level setting of 110 or lower: The language setting is ignored when converting a date value to a
string value. Note that this behavior is specific only to the date type. See example B in the Examples section
below.
Compatibility-level setting of 120: The language setting is not ignored when converting a date value to a string
value.

Compatibility-level setting of 110 or lower: Recursive references on the right-hand side of an EXCEPT clause
create an infinite loop. Example C in the Examples section below demonstrates this behavior.
Compatibility-level setting of 120: Recursive references in an EXCEPT clause generate an error in compliance
with the ANSI SQL standard.

Compatibility-level setting of 110 or lower: A recursive common table expression (CTE) allows duplicate column
names.
Compatibility-level setting of 120: Recursive CTEs do not allow duplicate column names.

Compatibility-level setting of 110 or lower: Disabled triggers are enabled if the triggers are altered.
Compatibility-level setting of 120: Altering a trigger does not change the state (enabled or disabled) of the
trigger.

Compatibility-level setting of 110 or lower: The OUTPUT INTO table clause ignores the IDENTITY_INSERT
SETTING = OFF and allows explicit values to be inserted.
Compatibility-level setting of 120: You cannot insert explicit values for an identity column in a table when
IDENTITY_INSERT is set to OFF.

Compatibility-level setting of 110 or lower: When the database containment is set to partial, validating the
$action field in the OUTPUT clause of a MERGE statement can return a collation error.
Compatibility-level setting of 120: The collation of the values returned by the $action clause of a MERGE
statement is the database collation instead of the server collation, and a collation conflict error is not
returned.

Compatibility-level setting of 110 or lower: A SELECT INTO statement always creates a single-threaded insert
operation.
Compatibility-level setting of 120: A SELECT INTO statement can create a parallel insert operation. When
inserting a large number of rows, the parallel operation can improve performance.

Differences Between Lower Compatibility Levels and Levels 110 and 120
This section describes new behaviors introduced with compatibility level 110. This section also applies to level
120.
Compatibility-level setting of 100 or lower: Common language runtime (CLR) database objects are executed with
version 4 of the CLR. However, some behavior changes introduced in version 4 of the CLR are avoided. For more
information, see What's New in CLR Integration.
Compatibility-level setting of at least 110: CLR database objects are executed with version 4 of the CLR.

Compatibility-level setting of 100 or lower: The XQuery functions string-length and substring count each
surrogate as two characters.
Compatibility-level setting of at least 110: The XQuery functions string-length and substring count each
surrogate as one character.

Compatibility-level setting of 100 or lower: PIVOT is allowed in a recursive common table expression (CTE)
query. However, the query returns incorrect results when there are multiple rows per grouping.
Compatibility-level setting of at least 110: PIVOT is not allowed in a recursive common table expression (CTE)
query. An error is returned.

Compatibility-level setting of 100 or lower: The RC4 algorithm is only supported for backward compatibility. New
material can only be encrypted using RC4 or RC4_128 when the database is in compatibility level 90 or 100. (Not
recommended.) In SQL Server 2012 (11.x), material encrypted using RC4 or RC4_128 can be decrypted in any
compatibility level.
Compatibility-level setting of at least 110: New material cannot be encrypted using RC4 or RC4_128. Use a newer
algorithm such as one of the AES algorithms instead. In SQL Server 2012 (11.x), material encrypted using RC4 or
RC4_128 can be decrypted in any compatibility level.

Compatibility-level setting of 100 or lower: The default style for CAST and CONVERT operations on time and
datetime2 data types is 121 except when either type is used in a computed column expression. For computed
columns, the default style is 0. This behavior impacts computed columns when they are created, used in queries
involving auto-parameterization, or used in constraint definitions. Example D in the Examples section below
shows the difference between styles 0 and 121. It does not demonstrate the behavior described above. For more
information about date and time styles, see CAST and CONVERT (Transact-SQL).
Compatibility-level setting of at least 110: Under compatibility level 110, the default style for CAST and
CONVERT operations on time and datetime2 data types is always 121. If your query relies on the old behavior, use
a compatibility level less than 110, or explicitly specify the 0 style in the affected query. Upgrading the
database to compatibility level 110 will not change user data that has been stored to disk. You must manually
correct this data as appropriate. For example, if you used SELECT INTO to create a table from a source that
contained a computed column expression described above, the data (using style 0) would be stored rather than the
computed column definition itself. You would need to manually update this data to match style 121.

Compatibility-level setting of 100 or lower: Any columns in remote tables of type smalldatetime that are
referenced in a partitioned view are mapped as datetime. Corresponding columns in local tables (in the same
ordinal position in the select list) must be of type datetime.
Compatibility-level setting of at least 110: Any columns in remote tables of type smalldatetime that are
referenced in a partitioned view are mapped as smalldatetime. Corresponding columns in local tables (in the same
ordinal position in the select list) must be of type smalldatetime. After upgrading to 110, the distributed
partitioned view will fail because of the data type mismatch. You can resolve this by changing the data type on
the remote table to datetime or setting the compatibility level of the local database to 100 or lower.

Compatibility-level setting of 100 or lower: The SOUNDEX function implements the following rules: 1) Upper-case
H or upper-case W are ignored when separating two consonants that have the same number in the SOUNDEX code. 2)
If the first 2 characters of character_expression have the same number in the SOUNDEX code, both characters are
included. Else, if a set of side-by-side consonants have the same number in the SOUNDEX code, all of them are
excluded except the first.
Compatibility-level setting of at least 110: The SOUNDEX function implements the following rules: 1) If
upper-case H or upper-case W separate two consonants that have the same number in the SOUNDEX code, the
consonant to the right is ignored. 2) If a set of side-by-side consonants have the same number in the SOUNDEX
code, all of them are excluded except the first. The additional rules may cause the values computed by the
SOUNDEX function to be different than the values computed under earlier compatibility levels. After upgrading to
compatibility level 110, you may need to rebuild the indexes, heaps, or CHECK constraints that use the SOUNDEX
function. For more information, see SOUNDEX (Transact-SQL).

Differences Between Compatibility Level 90 and Level 100


This section describes new behaviors introduced with compatibility level 100.

Setting of 90: The QUOTED_IDENTIFIER setting is always set to ON for multistatement table-valued functions when they are created, regardless of the session level setting.
Setting of 100: The QUOTED_IDENTIFIER session setting is honored when multistatement table-valued functions are created.
Possibility of impact: Medium

Setting of 90: When you create or alter a partition function, datetime and smalldatetime literals in the function are evaluated assuming US_English as the language setting.
Setting of 100: The current language setting is used to evaluate datetime and smalldatetime literals in the partition function.
Possibility of impact: Medium

Setting of 90: The FOR BROWSE clause is allowed (and ignored) in INSERT and SELECT INTO statements.
Setting of 100: The FOR BROWSE clause is not allowed in INSERT and SELECT INTO statements.
Possibility of impact: Medium

Setting of 90: Full-text predicates are allowed in the OUTPUT clause.
Setting of 100: Full-text predicates are not allowed in the OUTPUT clause.
Possibility of impact: Low

Setting of 90: CREATE FULLTEXT STOPLIST, ALTER FULLTEXT STOPLIST, and DROP FULLTEXT STOPLIST are not supported. The system stoplist is automatically associated with new full-text indexes.
Setting of 100: CREATE FULLTEXT STOPLIST, ALTER FULLTEXT STOPLIST, and DROP FULLTEXT STOPLIST are supported.
Possibility of impact: Low

Setting of 90: MERGE is not enforced as a reserved keyword.
Setting of 100: MERGE is a fully reserved keyword. The MERGE statement is supported under both 100 and 90 compatibility levels.
Possibility of impact: Low

Setting of 90: Using the <dml_table_source> argument of the INSERT statement raises a syntax error.
Setting of 100: You can capture the results of an OUTPUT clause in a nested INSERT, UPDATE, DELETE, or MERGE statement, and insert those results into a target table or view. This is done using the <dml_table_source> argument of the INSERT statement.
Possibility of impact: Low

Setting of 90: Unless NOINDEX is specified, DBCC CHECKDB or DBCC CHECKTABLE performs both physical and logical consistency checks on a single table or indexed view and on all its nonclustered and XML indexes. Spatial indexes are not supported.
Setting of 100: Unless NOINDEX is specified, DBCC CHECKDB or DBCC CHECKTABLE performs both physical and logical consistency checks on a single table and on all its nonclustered indexes. However, on XML indexes, spatial indexes, and indexed views, only physical consistency checks are performed by default. If WITH EXTENDED_LOGICAL_CHECKS is specified, logical checks are performed on indexed views, XML indexes, and spatial indexes, where present. By default, physical consistency checks are performed before the logical consistency checks. If NOINDEX is also specified, only the logical checks are performed.
Possibility of impact: Low

Setting of 90: When an OUTPUT clause is used with a data manipulation language (DML) statement and a run-time error occurs during statement execution, the entire transaction is terminated and rolled back.
Setting of 100: When an OUTPUT clause is used with a data manipulation language (DML) statement and a run-time error occurs during statement execution, the behavior depends on the SET XACT_ABORT setting. If SET XACT_ABORT is OFF, a statement abort error generated by the DML statement using the OUTPUT clause will terminate the statement, but the execution of the batch continues and the transaction is not rolled back. If SET XACT_ABORT is ON, all run-time errors generated by the DML statement using the OUTPUT clause will terminate the batch, and the transaction is rolled back.
Possibility of impact: Low

Setting of 90: CUBE and ROLLUP are not enforced as reserved keywords.
Setting of 100: CUBE and ROLLUP are reserved keywords within the GROUP BY clause.
Possibility of impact: Low

Setting of 90: Strict validation is applied to elements of the XML anyType type.
Setting of 100: Lax validation is applied to elements of the anyType type. For more information, see Wildcard Components and Content Validation.
Possibility of impact: Low

Setting of 90: The special attributes xsi:nil and xsi:type cannot be queried or modified by data manipulation language statements. This means that /e/@xsi:nil fails while /e/@* ignores the xsi:nil and xsi:type attributes. However, /e returns the xsi:nil and xsi:type attributes for consistency with SELECT xmlCol, even if xsi:nil = "false".
Setting of 100: The special attributes xsi:nil and xsi:type are stored as regular attributes and can be queried and modified. For example, executing the query SELECT x.query('a/b/@*') returns all attributes including xsi:nil and xsi:type. To exclude these types in the query, replace @* with @*[namespace-uri(.) != "insert xsi namespace uri" and not (local-name(.) = "type" or local-name(.) = "nil")].
Possibility of impact: Low

Setting of 90: A user-defined function that converts an XML constant string value to a SQL Server datetime type is marked as deterministic.
Setting of 100: A user-defined function that converts an XML constant string value to a SQL Server datetime type is marked as non-deterministic.
Possibility of impact: Low

Setting of 90: The XML union and list types are not fully supported.
Setting of 100: The union and list types are fully supported, including the following functionality: union of list, union of union, list of atomic types, and list of union.
Possibility of impact: Low

Setting of 90: The SET options required for an xQuery method are not validated when the method is contained in a view or inline table-valued function.
Setting of 100: The SET options required for an xQuery method are validated when the method is contained in a view or inline table-valued function. An error is raised if the SET options of the method are set incorrectly.
Possibility of impact: Low

Setting of 90: XML attribute values that contain end-of-line characters (carriage return and line feed) are not normalized according to the XML standard. That is, both characters are returned instead of a single line-feed character.
Setting of 100: XML attribute values that contain end-of-line characters (carriage return and line feed) are normalized according to the XML standard. That is, all line breaks in external parsed entities (including the document entity) are normalized on input by translating both the two-character sequence #xD #xA and any #xD that is not followed by #xA to a single #xA character. Applications that use attributes to transport string values that contain end-of-line characters will not receive these characters back as they are submitted. To avoid the normalization process, use the XML numeric character entities to encode all end-of-line characters.
Possibility of impact: Low

Setting of 90: The column properties ROWGUIDCOL and IDENTITY can be incorrectly named as a constraint. For example, the statement CREATE TABLE T (C1 int CONSTRAINT MyConstraint IDENTITY) executes, but the constraint name is not preserved and is not accessible to the user.
Setting of 100: The column properties ROWGUIDCOL and IDENTITY cannot be named as a constraint. Error 156 is returned.
Possibility of impact: Low

Setting of 90: Updating columns by using a two-way assignment such as UPDATE T1 SET @v = column_name = <expression> can produce unexpected results because the live value of the variable can be used in other clauses such as the WHERE and ON clauses during statement execution instead of the statement starting value. This can cause the meanings of the predicates to change unpredictably on a per-row basis. This behavior is applicable only when the compatibility level is set to 90.
Setting of 100: Updating columns by using a two-way assignment produces expected results because only the statement starting value of the column is accessed during statement execution.
Possibility of impact: Low

Setting of 90: See example E in the Examples section below.
Setting of 100: See example F in the Examples section below.
Possibility of impact: Low

Setting of 90: The ODBC function {fn CONVERT()} uses the default date format of the language. For some languages, the default format is YDM, which can result in conversion errors when CONVERT() is combined with other functions, such as {fn CURDATE()}, that expect a YMD format.
Setting of 100: The ODBC function {fn CONVERT()} uses style 121 (a language-independent YMD format) when converting to the ODBC data types SQL_TIMESTAMP, SQL_DATE, SQL_TIME, SQL_TYPE_DATE, SQL_TYPE_TIME, and SQL_TYPE_TIMESTAMP.
Possibility of impact: Low

Setting of 90: Datetime intrinsics such as DATEPART do not require string input values to be valid datetime literals. For example, SELECT DATEPART(year, '2007/05-30') compiles successfully.
Setting of 100: Datetime intrinsics such as DATEPART require string input values to be valid datetime literals. Error 241 is returned when an invalid datetime literal is used.
Possibility of impact: Low

Reserved Keywords
The compatibility setting also determines the keywords that are reserved by the Database Engine. The following
table shows the reserved keywords that are introduced by each of the compatibility levels.

COMPATIBILITY-LEVEL SETTING: RESERVED KEYWORDS

130: To be determined.

120: None.

110: WITHIN GROUP, TRY_CONVERT, SEMANTICKEYPHRASETABLE, SEMANTICSIMILARITYDETAILSTABLE, SEMANTICSIMILARITYTABLE

100: CUBE, MERGE, ROLLUP

90: EXTERNAL, PIVOT, UNPIVOT, REVERT, TABLESAMPLE

At a given compatibility level, the reserved keywords include all of the keywords introduced at or below that level.
Thus, for instance, for applications at level 110, all of the keywords listed in the preceding table are reserved. At
the lower compatibility levels, level-100 keywords remain valid object names, but the level-110 language features
corresponding to those keywords are unavailable.
Once introduced, a keyword remains reserved. For example, the reserved keyword PIVOT, which was introduced
in compatibility level 90, is also reserved in levels 100, 110, and 120.
If an application uses an identifier that is reserved as a keyword for its compatibility level, the application will fail.
To work around this, enclose the identifier between either brackets ([]) or quotation marks (""); for example, to
upgrade an application that uses the identifier EXTERNAL to compatibility level 90, you could change the
identifier to either [EXTERNAL] or "EXTERNAL".
For more information, see Reserved Keywords (Transact-SQL).

Permissions
Requires ALTER permission on the database.

Examples
A. Changing the compatibility level
The following example changes the compatibility level of the AdventureWorks2012 database to 110, SQL
Server 2012 (11.x).

ALTER DATABASE AdventureWorks2012


SET COMPATIBILITY_LEVEL = 110;
GO

The following example returns the compatibility level of the current database.

SELECT name, compatibility_level


FROM sys.databases
WHERE name = db_name();

B. Ignoring the SET LANGUAGE statement except under compatibility level 120
The following query ignores the SET LANGUAGE statement except under compatibility level 120.

SET DATEFORMAT dmy;
DECLARE @t2 date = '12/5/2011';
SET LANGUAGE dutch;
SELECT CONVERT(varchar(11), @t2, 106);

-- Results when the compatibility level is less than 120.
12 May 2011

-- Results when the compatibility level is set to 120.
12 mei 2011

C. Recursive references in an EXCEPT clause
For a compatibility-level setting of 110 or lower, recursive references on the right-hand side of an EXCEPT clause
create an infinite loop.

WITH
cte AS (SELECT * FROM (VALUES (1),(2),(3)) v (a)),
r AS (SELECT a FROM Table1
      UNION ALL
      (SELECT a FROM Table1 EXCEPT SELECT a FROM r))
SELECT a
FROM r;

D. Difference between CAST and CONVERT styles 0 and 121
This example shows the difference between styles 0 and 121. For more information about date and time styles,
see CAST and CONVERT (Transact-SQL).

CREATE TABLE t1 (c1 time(7), c2 datetime2);

INSERT t1 (c1,c2) VALUES (GETDATE(), GETDATE());

SELECT CONVERT(nvarchar(16), c1, 0) AS TimeStyle0
      ,CONVERT(nvarchar(16), c1, 121) AS TimeStyle121
      ,CONVERT(nvarchar(32), c2, 0) AS Datetime2Style0
      ,CONVERT(nvarchar(32), c2, 121) AS Datetime2Style121
FROM t1;

-- Returns values such as the following.

TimeStyle0  TimeStyle121      Datetime2Style0    Datetime2Style121
----------  ----------------  -----------------  ---------------------------
3:15PM      15:15:35.8100000  Jun 7 2011 3:15PM  2011-06-07 15:15:35.8130000

E. Variable assignment with a top-level UNION operator under compatibility level 90
Variable assignment is allowed in a statement containing a top-level UNION operator, but returns unexpected
results. For example, in the following statements, local variable @v is assigned the value of the column
BusinessEntityID from the union of two tables. By definition, when the SELECT statement returns more than one
value, the variable is assigned the last value that is returned. In this case, the variable is correctly assigned the last
value; however, the result set of the SELECT UNION statement is also returned.
ALTER DATABASE AdventureWorks2012
SET compatibility_level = 90;
GO
USE AdventureWorks2012;
GO
DECLARE @v int;
SELECT @v = BusinessEntityID FROM HumanResources.Employee
UNION ALL
SELECT @v = BusinessEntityID FROM HumanResources.EmployeeAddress;
SELECT @v;

F. Variable assignment with a top-level UNION operator under compatibility level 100
Variable assignment is not allowed in a statement containing a top-level UNION operator. Error 10734 is
returned. To resolve the error, rewrite the query as shown in the following example.

DECLARE @v int;
SELECT @v = BusinessEntityID FROM
(SELECT BusinessEntityID FROM HumanResources.Employee
UNION ALL
SELECT BusinessEntityID FROM HumanResources.EmployeeAddress) AS Test;
SELECT @v;

See Also
ALTER DATABASE (Transact-SQL)
Reserved Keywords (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
DATABASEPROPERTYEX (Transact-SQL)
sys.databases (Transact-SQL)
sys.database_files (Transact-SQL)
View or Change the Compatibility Level of a Database
ALTER DATABASE (Transact-SQL) Database Mirroring

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse

NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature. Use Always On availability groups instead.

Controls database mirroring for a database. Values specified with the database mirroring options apply to both
copies of the database and to the database mirroring session as a whole. Only one <database_mirroring_option>
is permitted per ALTER DATABASE statement.

NOTE
We recommend that you configure database mirroring during off-peak hours because configuration can affect performance.

For ALTER DATABASE options, see ALTER DATABASE (Transact-SQL). For ALTER DATABASE SET options, see
ALTER DATABASE SET Options (Transact-SQL).
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE database_name
SET { <partner_option> | <witness_option> }
<partner_option> ::=
PARTNER { = 'partner_server'
| FAILOVER
| FORCE_SERVICE_ALLOW_DATA_LOSS
| OFF
| RESUME
| SAFETY { FULL | OFF }
| SUSPEND
| TIMEOUT integer
}
<witness_option> ::=
WITNESS { = 'witness_server'
| OFF
}

Arguments
IMPORTANT
A SET PARTNER or SET WITNESS command can complete successfully when entered, but fail later.
NOTE
ALTER DATABASE database mirroring options are not available for a contained database.

database_name
Is the name of the database to be modified.
PARTNER <partner_option>
Controls the database properties that define the failover partners of a database mirroring session and their
behavior. Some SET PARTNER options can be set on either partner; others are restricted to the principal server or
to the mirror server. For more information, see the individual PARTNER options that follow. A SET PARTNER
clause affects both copies of the database, regardless of the partner on which it is specified.
To execute a SET PARTNER statement, the STATE of the endpoints of both partners must be set to STARTED.
Note, also, that the ROLE of the database mirroring endpoint of each partner server instance must be set to either
PARTNER or ALL. For information about how to specify an endpoint, see Create a Database Mirroring Endpoint
for Windows Authentication (Transact-SQL ). To learn the role and state of the database mirroring endpoint of a
server instance, on that instance, use the following Transact-SQL statement:

SELECT role_desc, state_desc FROM sys.database_mirroring_endpoints

<partner_option> ::=

NOTE
Only one <partner_option> is permitted per SET PARTNER clause.

' partner_server '


Specifies the server network address of an instance of SQL Server to act as a failover partner in a new database
mirroring session. Each session requires two partners: one starts as the principal server, and the other starts as the
mirror server. We recommend that these partners reside on different computers.
This option is specified one time per session on each partner. Initiating a database mirroring session requires two
ALTER DATABASE database SET PARTNER = 'partner_server' statements. Their order is significant. First, connect
to the mirror server, and specify the principal server instance as partner_server (SET PARTNER =
'principal_server'). Second, connect to the principal server, and specify the mirror server instance as
partner_server (SET PARTNER = 'mirror_server'); this starts a database mirroring session between these two
partners. For more information, see Setting Up Database Mirroring (SQL Server).
The value of partner_server is a server network address. This has the following syntax:
TCP://<system -address>:<port>
where
<system -address> is a string, such as a system name, a fully qualified domain name, or an IP address, that
unambiguously identifies the destination computer system.
<port> is a port number that is associated with the mirroring endpoint of the partner server instance.
For more information, see Specify a Server Network Address (Database Mirroring).
The following example illustrates the SET PARTNER ='partner_server' clause:
'TCP://MYSERVER.mydomain.Adventure-Works.com:7777'
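As a minimal sketch of the two-step sequence described above (the server addresses and port are hypothetical):

-- Step 1: On the mirror server instance, specify the principal as partner.
ALTER DATABASE AdventureWorks2012
SET PARTNER = 'TCP://PRINCIPAL01.Adventure-Works.com:7022';
GO

-- Step 2: On the principal server instance, specify the mirror as partner.
-- This starts the database mirroring session.
ALTER DATABASE AdventureWorks2012
SET PARTNER = 'TCP://MIRROR01.Adventure-Works.com:7022';
GO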

IMPORTANT
If a session is set up by using the ALTER DATABASE statement instead of SQL Server Management Studio, the session is set
to full transaction safety by default (SAFETY is set to FULL) and runs in high-safety mode without automatic failover. To allow
automatic failover, configure a witness; to run in high-performance mode, turn off transaction safety (SAFETY OFF).

FAILOVER
Manually fails over the principal server to the mirror server. You can specify FAILOVER only on the principal
server. This option is valid only when the SAFETY setting is FULL (the default).
The FAILOVER option requires master as the database context.
FORCE_SERVICE_ALLOW_DATA_LOSS
Forces database service to the mirror database after the principal server fails with the database in an
unsynchronized state or in a synchronized state when automatic failover does not occur.
We strongly recommend that you force service only if the principal server is no longer running. Otherwise, some
clients might continue to access the original principal database instead of the new principal database.
FORCE_SERVICE_ALLOW_DATA_LOSS is available only on the mirror server and only under all the following
conditions:
The principal server is down.
WITNESS is set to OFF or the witness is connected to the mirror server.
Force service only if you are willing to risk losing some data in order to restore service to the database
immediately.
Forcing service suspends the session, temporarily preserving all the data in the original principal database.
Once the original principal is in service and able to communicate with the new principal server, the
database administrator can resume service. When the session resumes, any unsent log records and the
corresponding updates are lost.
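For example, a minimal sketch of forcing service, run on the mirror server after the principal has failed:

-- Run on the mirror server instance; some data loss is possible.
ALTER DATABASE AdventureWorks2012 SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
GO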
OFF
Removes a database mirroring session and removes mirroring from the database. You can specify OFF on
either partner. For information about the impact of removing mirroring, see Removing Database
Mirroring (SQL Server).
RESUME
Resumes a suspended database mirroring session. You can specify RESUME only on the principal server.
SAFETY { FULL | OFF }
Sets the level of transaction safety. You can specify SAFETY only on the principal server.
The default is FULL. With full safety, the database mirroring session runs synchronously (in high-safety
mode). If SAFETY is set to OFF, the database mirroring session runs asynchronously (in high-performance
mode).
The behavior of high-safety mode depends partly on the witness, as follows:
When safety is set to FULL and a witness is set for the session, the session runs in high-safety mode with
automatic failover. When the principal server is lost, the session automatically fails over if the database is
synchronized and the mirror server instance and witness are still connected to each other (that is, they have
quorum). For more information, see Quorum: How a Witness Affects Database Availability (Database
Mirroring).
If a witness is set for the session but is currently disconnected, the loss of the mirror server causes the
principal server to go down.
When safety is set to FULL and the witness is set to OFF, the session runs in high-safety mode without
automatic failover. If the mirror server instance goes down, the principal server instance is unaffected. If the
principal server instance goes down, you can force service (with possible data loss) to the mirror server
instance.
If SAFETY is set to OFF, the session runs in high-performance mode, and automatic failover and manual
failover are not supported. However, problems on the mirror do not affect the principal, and if the principal
server instance goes down, you can, if necessary, force service (with possible data loss) to the mirror server
instance—if WITNESS is set to OFF or the witness is currently connected to the mirror. For more
information on forcing service, see "FORCE_SERVICE_ALLOW_DATA_LOSS" earlier in this section.

IMPORTANT
High-performance mode is not intended to use a witness. However, whenever you set SAFETY to OFF, we strongly
recommend that you ensure that WITNESS is set to OFF.
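For example, a session can be switched to high-performance mode and back to high-safety mode with statements such as the following, run on the principal server:

-- Run on the principal server instance.
ALTER DATABASE AdventureWorks2012 SET PARTNER SAFETY OFF;
GO
ALTER DATABASE AdventureWorks2012 SET PARTNER SAFETY FULL;
GO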

SUSPEND
Pauses a database mirroring session.
You can specify SUSPEND on either partner.
TIMEOUT integer
Specifies the time-out period in seconds. The time-out period is the maximum time that a server instance waits to
receive a PING message from another instance in the mirroring session before considering that other instance to
be disconnected.
You can specify the TIMEOUT option only on the principal server. If you do not specify this option, by default, the
time period is 10 seconds. If you specify 5 or greater, the time-out period is set to the specified number of seconds.
If you specify a time-out value of 0 to 4 seconds, the time-out period is automatically set to 5 seconds.

IMPORTANT
We recommend that you keep the time-out period at 10 seconds or greater. Setting the value to less than 10 seconds
creates the possibility of a heavily loaded system missing PINGs and declaring a false failure.

For more information, see Possible Failures During Database Mirroring.
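For example, the following statement, run on the principal server, raises the time-out period to 15 seconds:

-- Run on the principal server instance.
ALTER DATABASE AdventureWorks2012 SET PARTNER TIMEOUT 15;
GO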


WITNESS <witness_option>
Controls the database properties that define a database mirroring witness. A SET WITNESS clause affects both
copies of the database, but you can specify SET WITNESS only on the principal server. If a witness is set for a
session, quorum is required to serve the database, regardless of the SAFETY setting; for more information, see
Quorum: How a Witness Affects Database Availability (Database Mirroring).
We recommend that the witness and failover partners reside on separate computers. For information about the
witness, see Database Mirroring Witness.
To execute a SET WITNESS statement, the STATE of the endpoints of both the principal and witness server
instances must be set to STARTED. Note, also, that the ROLE of the database mirroring endpoint of a witness
server instance must be set to either WITNESS or ALL. For information about specifying an endpoint, see The
Database Mirroring Endpoint (SQL Server).
To learn the role and state of the database mirroring endpoint of a server instance, on that instance, use the
following Transact-SQL statement:

SELECT role_desc, state_desc FROM sys.database_mirroring_endpoints

NOTE
Database properties cannot be set on the witness.

<witness_option> ::=

NOTE
Only one <witness_option> is permitted per SET WITNESS clause.

' witness_server '


Specifies an instance of the Database Engine to act as the witness server for a database mirroring session. You can
specify SET WITNESS statements only on the principal server.
In a SET WITNESS ='witness_server' statement, the syntax of witness_server is the same as the syntax of
partner_server.
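For example, a witness could be set with a statement such as the following, run on the principal server (the witness address and port are hypothetical):

-- Run on the principal server instance.
ALTER DATABASE AdventureWorks2012
SET WITNESS = 'TCP://WITNESS01.Adventure-Works.com:7022';
GO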
OFF
Removes the witness from a database mirroring session. Setting the witness to OFF disables automatic failover. If
the database is set to FULL SAFETY and the witness is set to OFF, a failure on the mirror server causes the
principal server to make the database unavailable.

Remarks
Examples
A. Creating a database mirroring session with a witness
Setting up database mirroring with a witness requires configuring security and preparing the mirror database, and
also using ALTER DATABASE to set the partners. For an example of the complete setup process, see Setting Up
Database Mirroring (SQL Server).
B. Manually failing over a database mirroring session
Manual failover can be initiated from either database mirroring partner. Before failing over, you should verify that
the server you believe to be the current principal server actually is the principal server. For example, for the
AdventureWorks2012 database, on that server instance that you think is the current principal server, execute the
following query:

SELECT db.name, m.mirroring_role_desc


FROM sys.database_mirroring m
JOIN sys.databases db
ON db.database_id = m.database_id
WHERE db.name = N'AdventureWorks2012';
GO

If the server instance is in fact the principal, the value of mirroring_role_desc is Principal. If this server instance
were the mirror server, the SELECT statement would return Mirror.
The following example assumes that the server is the current principal.
1. Manually fail over to the database mirroring partner:

ALTER DATABASE AdventureWorks2012 SET PARTNER FAILOVER;


GO

2. To verify the results of the failover on the new mirror, execute the following query:

SELECT db.name, m.mirroring_role_desc


FROM sys.database_mirroring m
JOIN sys.databases db
ON db.database_id = m.database_id
WHERE db.name = N'AdventureWorks2012';
GO

The current value of mirroring_role_desc is now Mirror.

See Also
CREATE DATABASE (SQL Server Transact-SQL)
DATABASEPROPERTYEX (Transact-SQL)
sys.database_mirroring_witnesses (Transact-SQL)
ALTER DATABASE ENCRYPTION KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters an encryption key and certificate that is used for transparently encrypting a database. For more information
about transparent database encryption, see Transparent Data Encryption (TDE).
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

ALTER DATABASE ENCRYPTION KEY


REGENERATE WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
|
ENCRYPTION BY SERVER
{
CERTIFICATE Encryptor_Name |
ASYMMETRIC KEY Encryptor_Name
}
[ ; ]

-- Syntax for Parallel Data Warehouse

ALTER DATABASE ENCRYPTION KEY


{
{
REGENERATE WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
[ ENCRYPTION BY SERVER CERTIFICATE Encryptor_Name ]
}
|
ENCRYPTION BY SERVER CERTIFICATE Encryptor_Name
}
[ ; ]

Arguments
REGENERATE WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
Specifies the encryption algorithm that is used for the encryption key.
ENCRYPTION BY SERVER CERTIFICATE Encryptor_Name
Specifies the name of the certificate used to encrypt the database encryption key.
ENCRYPTION BY SERVER ASYMMETRIC KEY Encryptor_Name
Specifies the name of the asymmetric key used to encrypt the database encryption key.

Remarks
The certificate or asymmetric key that is used to encrypt the database encryption key must be located in the
master system database.
When the database owner (dbo) is changed, the database encryption key does not have to be regenerated.
After a database encryption key has been modified twice, a log backup must be performed before the database
encryption key can be modified again.

Permissions
Requires CONTROL permission on the database and VIEW DEFINITION permission on the certificate or
asymmetric key that is used to encrypt the database encryption key.

Examples
The following example alters the database encryption key to use the AES_256 algorithm.

-- Uses AdventureWorks

ALTER DATABASE ENCRYPTION KEY


REGENERATE WITH ALGORITHM = AES_256;
GO
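The following sketch re-encrypts the database encryption key with a server certificate instead; the certificate name MyServerCert is hypothetical and must already exist in the master database:

-- Uses AdventureWorks; MyServerCert is a hypothetical certificate in master.
ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
GO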

See Also
Transparent Data Encryption (TDE)
SQL Server Encryption
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
ALTER DATABASE SET Options (Transact-SQL)
CREATE DATABASE ENCRYPTION KEY (Transact-SQL)
DROP DATABASE ENCRYPTION KEY (Transact-SQL)
sys.dm_database_encryption_keys (Transact-SQL)
ALTER DATABASE (Transact-SQL) File and Filegroup Options

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Modifies the files and filegroups associated with the database in SQL Server. Adds or removes files and filegroups
from a database, and changes the attributes of a database or its files and filegroups. For other ALTER DATABASE
options, see ALTER DATABASE (Transact-SQL).

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE database_name
{
<add_or_modify_files>
| <add_or_modify_filegroups>
}
[;]

<add_or_modify_files>::=
{
ADD FILE <filespec> [ ,...n ]
[ TO FILEGROUP { filegroup_name } ]
| ADD LOG FILE <filespec> [ ,...n ]
| REMOVE FILE logical_file_name
| MODIFY FILE <filespec>
}

<filespec>::=
(
NAME = logical_file_name
[ , NEWNAME = new_logical_name ]
[ , FILENAME = {'os_file_name' | 'filestream_path' | 'memory_optimized_data_path' } ]
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | GB | TB| % ] ]
[ , OFFLINE ]
)

<add_or_modify_filegroups>::=
{
| ADD FILEGROUP filegroup_name
[ CONTAINS FILESTREAM | CONTAINS MEMORY_OPTIMIZED_DATA ]
| REMOVE FILEGROUP filegroup_name
| MODIFY FILEGROUP filegroup_name
{ <filegroup_updatability_option>
| DEFAULT
| NAME = new_filegroup_name
| { AUTOGROW_SINGLE_FILE | AUTOGROW_ALL_FILES }
}
}
<filegroup_updatability_option>::=
{
{ READONLY | READWRITE }
| { READ_ONLY | READ_WRITE }
}

Arguments
<add_or_modify_files>::=
Specifies the file to be added, removed, or modified.
database_name
Is the name of the database to be modified.
ADD FILE
Adds a file to the database.
TO FILEGROUP { filegroup_name }
Specifies the filegroup to which to add the specified file. To display the current filegroups and which filegroup is
the current default, use the sys.filegroups catalog view.
ADD LOG FILE
Adds a log file to the specified database.
REMOVE FILE logical_file_name
Removes the logical file description from an instance of SQL Server and deletes the physical file. The file cannot
be removed unless it is empty.
logical_file_name
Is the logical name used in SQL Server when referencing the file.

WARNING
Removing a database file that has FILE_SNAPSHOT backups associated with it will succeed, but any associated snapshots will
not be deleted to avoid invalidating the backups referring to the database file. The file will be truncated, but will not be
physically deleted in order to keep the FILE_SNAPSHOT backups intact. For more information, see SQL Server Backup and
Restore with Microsoft Azure Blob Storage Service. Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server
2017).

MODIFY FILE
Specifies the file that should be modified. Only one <filespec> property can be changed at a time. NAME must
always be specified in the <filespec> to identify the file to be modified. If SIZE is specified, the new size must be
larger than the current file size.
To modify the logical name of a data file or log file, specify the logical file name to be renamed in the NAME clause,
and specify the new logical name for the file in the NEWNAME clause. For example:

MODIFY FILE ( NAME = logical_file_name, NEWNAME = new_logical_name )

To move a data file or log file to a new location, specify the current logical file name in the NAME clause and specify
the new path and operating system file name in the FILENAME clause. For example:

MODIFY FILE ( NAME = logical_file_name, FILENAME = ' new_path/os_file_name ' )

When you move a full-text catalog, specify only the new path in the FILENAME clause. Do not specify the
operating-system file name.
For more information, see Move Database Files.
For a FILESTREAM filegroup, NAME can be modified online. FILENAME can be modified online; however, the
change does not take effect until after the container is physically relocated and the server is shutdown and then
restarted.
You can set a FILESTREAM file to OFFLINE. When a FILESTREAM file is offline, its parent filegroup will be
internally marked as offline; therefore, all access to FILESTREAM data within that filegroup will fail.

NOTE
<add_or_modify_files> options are not available in a Contained Database.

<filespec>::=
Controls the file properties.
NAME logical_file_name
Specifies the logical name of the file.
logical_file_name
Is the logical name used in an instance of SQL Server when referencing the file.
NEWNAME new_logical_file_name
Specifies a new logical name for the file.
new_logical_file_name
Is the name to replace the existing logical file name. The name must be unique within the database and comply
with the rules for identifiers. The name can be a character or Unicode constant, a regular identifier, or a delimited
identifier.
FILENAME { 'os_file_name' | 'filestream_path' | 'memory_optimized_data_path'}
Specifies the operating system (physical) file name.
' os_file_name '
For a standard (ROWS ) filegroup, this is the path and file name that is used by the operating system when you
create the file. The file must reside on the server on which SQL Server is installed. The specified path must exist
before executing the ALTER DATABASE statement.
SIZE, MAXSIZE, and FILEGROWTH parameters cannot be set when a UNC path is specified for the file.

NOTE
System databases cannot reside in UNC share directories.

Data files should not be put on compressed file systems unless the files are read-only secondary files, or if the
database is read-only. Log files should never be put on compressed file systems.
If the file is on a raw partition, os_file_name must specify only the drive letter of an existing raw partition. Only one
file can be put on each raw partition.
' filestream_path '
For a FILESTREAM filegroup, FILENAME refers to a path where FILESTREAM data will be stored. The path up to
the last folder must exist, and the last folder must not exist. For example, if you specify the path
C:\MyFiles\MyFilestreamData, C:\MyFiles must exist before you run ALTER DATABASE, but the
MyFilestreamData folder must not exist.
The SIZE and FILEGROWTH properties do not apply to a FILESTREAM filegroup.
' memory_optimized_data_path '
For a memory-optimized filegroup, FILENAME refers to a path where memory-optimized data will be stored. The
path up to the last folder must exist, and the last folder must not exist. For example, if you specify the path
C:\MyFiles\MyData, C:\MyFiles must exist before you run ALTER DATABASE, but the MyData folder must not
exist.
The filegroup and file ( <filespec> ) must be created in the same statement.
The SIZE, MAXSIZE, and FILEGROWTH properties do not apply to a memory-optimized filegroup.
SIZE size
Specifies the file size. SIZE does not apply to FILESTREAM filegroups.
size
Is the size of the file.
When specified with ADD FILE, size is the initial size for the file. When specified with MODIFY FILE, size is the
new size for the file, and must be larger than the current file size.
When size is not supplied for the primary file, the SQL Server uses the size of the primary file in the model
database. When a secondary data file or log file is specified but size is not specified for the file, the Database
Engine makes the file 1 MB.
The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes, or terabytes. The default is
MB. Specify a whole number and do not include a decimal. To specify a fraction of a megabyte, convert the value
to kilobytes by multiplying the number by 1024. For example, specify 1536 KB instead of 1.5 MB (1.5 x 1024 =
1536).
MAXSIZE { max_size| UNLIMITED }
Specifies the maximum file size to which the file can grow.
max_size
Is the maximum file size. The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes,
or terabytes. The default is MB. Specify a whole number and do not include a decimal. If max_size is not specified,
the file size will increase until the disk is full.
UNLIMITED
Specifies that the file grows until the disk is full. In SQL Server, a log file specified with unlimited growth has a
maximum size of 2 TB, and a data file has a maximum size of 16 TB. There is no maximum size when this option is
specified for a FILESTREAM container. It continues to grow until the disk is full.
FILEGROWTH growth_increment
Specifies the automatic growth increment of the file. The FILEGROWTH setting for a file cannot exceed the
MAXSIZE setting. FILEGROWTH does not apply to FILESTREAM filegroups.
growth_increment
Is the amount of space added to the file every time new space is required.
The value can be specified in MB, KB, GB, TB, or percent (%). If a number is specified without an MB, KB, or %
suffix, the default is MB. When % is specified, the growth increment size is the specified percentage of the size of
the file at the time the increment occurs. The size specified is rounded to the nearest 64 KB.
A value of 0 indicates that automatic growth is set to off and no additional space is allowed.
If FILEGROWTH is not specified, the default values are:

VERSION DEFAULT VALUES

Starting with SQL Server 2016 (13.x) Data 64 MB. Log files 64 MB.

Starting with SQL Server 2005 Data 1 MB. Log files 10%.

Prior to SQL Server 2005 Data 10%. Log files 10%.

OFFLINE
Sets the file offline and makes all objects in the filegroup inaccessible.
Caution

Use this option only when the file is corrupted and can be restored. A file set to OFFLINE can only be set online by
restoring the file from backup. For more information about restoring a single file, see RESTORE (Transact-SQL ).

NOTE
<filespec> options are not available in a Contained Database.

<add_or_modify_filegroups>::=
Add, modify, or remove a filegroup from the database.
ADD FILEGROUP filegroup_name
Adds a filegroup to the database.
CONTAINS FILESTREAM
Specifies that the filegroup stores FILESTREAM binary large objects (BLOBs) in the file system.
CONTAINS MEMORY_OPTIMIZED_DATA
Applies to: SQL Server ( SQL Server 2014 (12.x) through SQL Server 2017)
Specifies that the filegroup stores memory optimized data in the file system. For more information, see In-
Memory OLTP (In-Memory Optimization). Only one MEMORY_OPTIMIZED_DATA filegroup is allowed per
database. For creating memory optimized tables, the filegroup cannot be empty. There must be at least one file.
filegroup_name refers to a path. The path up to the last folder must exist, and the last folder must not exist.
The following example creates a filegroup that is added to a database named xtp_db, and adds a file to the
filegroup. The filegroup stores memory_optimized data.

ALTER DATABASE xtp_db ADD FILEGROUP xtp_fg CONTAINS MEMORY_OPTIMIZED_DATA;


GO
ALTER DATABASE xtp_db ADD FILE (NAME='xtp_mod', FILENAME='d:\data\xtp_mod') TO FILEGROUP xtp_fg;

REMOVE FILEGROUP filegroup_name


Removes a filegroup from the database. The filegroup cannot be removed unless it is empty. Remove all files from
the filegroup first. For more information, see "REMOVE FILE logical_file_name," earlier in this topic.

NOTE
Unless the FILESTREAM Garbage Collector has removed all the files from a FILESTREAM container, the ALTER DATABASE
REMOVE FILE operation to remove a FILESTREAM container will fail and return an error. See the "Remove FILESTREAM
Container" section in Remarks later in this topic.

MODIFY FILEGROUP filegroup_name { <filegroup_updatability_option> | DEFAULT | NAME


=new_filegroup_name } Modifies the filegroup by setting the status to READ_ONLY or READ_WRITE, making the
filegroup the default filegroup for the database, or changing the filegroup name.
<filegroup_updatability_option>
Sets the read-only or read/write property to the filegroup.
DEFAULT
Changes the default database filegroup to filegroup_name. Only one filegroup in the database can be the default
filegroup. For more information, see Database Files and Filegroups.
NAME = new_filegroup_name
Changes the filegroup name to the new_filegroup_name.
AUTOGROW_SINGLE_FILE
Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server 2017)
When a file in the filegroup meets the autogrow threshold, only that file grows. This is the default.
AUTOGROW_ALL_FILES
Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server 2017)
When a file in the filegroup meets the autogrow threshold, all files in the filegroup grow.
<filegroup_updatability_option>::=
Sets the read-only or read/write property to the filegroup.
READ_ONLY | READONLY
Specifies the filegroup is read-only. Updates to objects in it are not allowed. The primary filegroup cannot be made
read-only. To change this state, you must have exclusive access to the database. For more information, see the
SINGLE_USER clause.
Because a read-only database does not allow data modifications:
Automatic recovery is skipped at system startup.
Shrinking the database is not possible.
No locking occurs in read-only databases. This can cause faster query performance.

NOTE
The keyword READONLY will be removed in a future version of Microsoft SQL Server. Avoid using READONLY in new
development work, and plan to modify applications that currently use READONLY. Use READ_ONLY instead.

READ_WRITE | READWRITE
Specifies the group is READ_WRITE. Updates are enabled for the objects in the filegroup. To change this state, you
must have exclusive access to the database. For more information, see the SINGLE_USER clause.

NOTE
The keyword READWRITE will be removed in a future version of Microsoft SQL Server. Avoid using READWRITE in new
development work, and plan to modify applications that currently use READWRITE to use READ_WRITE instead.

The status of these options can be determined by examining the is_read_only column in the sys.databases
catalog view or the Updateability property of the DATABASEPROPERTYEX function.
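For example, assuming exclusive access to the database, the Test1FG1 filegroup from example B could be toggled between read-only and read/write, and its state verified in sys.filegroups:

-- Requires exclusive access to the database.
ALTER DATABASE AdventureWorks2012 MODIFY FILEGROUP Test1FG1 READ_ONLY;
GO
ALTER DATABASE AdventureWorks2012 MODIFY FILEGROUP Test1FG1 READ_WRITE;
GO
-- Check the read-only state of each filegroup.
SELECT name, is_read_only FROM AdventureWorks2012.sys.filegroups;
GO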

Remarks
To decrease the size of a database, use DBCC SHRINKDATABASE.
You cannot add or remove a file while a BACKUP statement is running.
A maximum of 32,767 files and 32,767 filegroups can be specified for each database.
Starting with SQL Server 2005, the state of a database file (for example, online or offline), is maintained
independently from the state of the database. For more information, see File States.
The state of the files within a filegroup determines the availability of the whole filegroup. For a filegroup to be
available, all files within the filegroup must be online.
If a filegroup is offline, any try to access the filegroup by an SQL statement will fail with an error. When you
build query plans for SELECT statements, the query optimizer avoids nonclustered indexes and indexed views
that reside in offline filegroups. This enables these statements to succeed. However, if the offline filegroup
contains the heap or clustered index of the target table, the SELECT statements fail. Additionally, any INSERT ,
UPDATE , or DELETE statement that modifies a table with any index in an offline filegroup will fail.

Moving Files
You can move system or user-defined data and log files by specifying the new location in FILENAME. This may be
useful in the following scenarios:
Failure recovery. For example, the database is in suspect mode or shutdown caused by hardware failure.
Planned relocation.
Relocation for scheduled disk maintenance.
For more information, see Move Database Files.

Initializing Files
By default, data and log files are initialized by filling the files with zeros when you perform one of the following
operations:
Create a database.
Add files to an existing database.
Increase the size of an existing file.
Restore a database or filegroup.
Data files can be initialized instantaneously. This enables for fast execution of these file operations. For more
information, see Database File Initialization.

Removing a FILESTREAM Container


Even though a FILESTREAM container may have been emptied using the DBCC SHRINKFILE operation, the
database may still need to maintain references to the deleted files for various system maintenance reasons.
sp_filestream_force_garbage_collection (Transact-SQL) will run the FILESTREAM Garbage Collector to remove
these files when it is safe to do so. Unless the FILESTREAM Garbage Collector has removed all the files from a
FILESTREAM container, the ALTER DATABASE REMOVE FILE operation will fail to remove a FILESTREAM
container and will return an error. The following process is recommended to remove a FILESTREAM container; a
code sketch follows the steps.
1. Run DBCC SHRINKFILE (Transact-SQL ) with the EMPTYFILE option to move the active contents of this
container to other containers.
2. Ensure that Log backups have been taken, in the FULL or BULK_LOGGED recovery model.
3. Ensure that the replication log reader job has been run, if relevant.
4. Run sp_filestream_force_garbage_collection (Transact-SQL ) to force the garbage collector to delete any files
that are no longer needed in this container.
5. Execute ALTER DATABASE with the REMOVE FILE option to remove this container.
6. Repeat steps 2 through 4 once more to complete the garbage collection.
7. Use ALTER DATABASE ... REMOVE FILE to remove this container.
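A minimal sketch of this sequence, assuming a database named fsdb with a FILESTREAM file whose logical name is fs_container (both names hypothetical):

USE fsdb;
GO
-- Step 1: Move the active contents of this container to other containers.
DBCC SHRINKFILE (fs_container, EMPTYFILE);
GO
-- Steps 2-3: Take a log backup and run the replication log reader job, if relevant (not shown).
-- Step 4: Force the garbage collector to delete files no longer needed in this container.
EXEC sp_filestream_force_garbage_collection @dbname = N'fsdb';
GO
-- Steps 5-7: Remove the container; if this fails, repeat steps 2 through 4 and retry.
ALTER DATABASE fsdb REMOVE FILE fs_container;
GO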

Examples
A. Adding a file to a database
The following example adds a 5-MB data file to the AdventureWorks2012 database.
USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = Test1dat2,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat2.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
);
GO

B. Adding a filegroup with two files to a database


The following example creates the filegroup Test1FG1 in the AdventureWorks2012 database and adds two 5-MB
files to the filegroup.

USE master
GO
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = test1dat3,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat3.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1dat4,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat4.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO

C. Adding two log files to a database


The following example adds two 5-MB log files to the AdventureWorks2012 database.
USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD LOG FILE
(
NAME = test1log2,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\test2log.ldf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1log3,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\test3log.ldf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
);
GO

D. Removing a file from a database


The following example removes one of the files added in example B.

USE master;
GO
ALTER DATABASE AdventureWorks2012
REMOVE FILE test1dat4;
GO

E. Modifying a file
The following example increases the size of one of the files added in example B.
The ALTER DATABASE MODIFY FILE command can only increase the size of a file; if you need to make a file
smaller, use DBCC SHRINKFILE.

USE master;
GO

ALTER DATABASE AdventureWorks2012


MODIFY FILE
(NAME = test1dat3,
SIZE = 200MB);
GO

This example uses DBCC SHRINKFILE to shrink the size of a data file to 100 MB.

USE AdventureWorks2012;
GO

DBCC SHRINKFILE (AdventureWorks2012_data, 100);


GO

F. Moving a file to a new location
The following example moves the Test1dat2 file created in example A to a new directory.

NOTE
You must physically move the file to the new directory before running this example. Afterward, stop and start the instance of
SQL Server or take the AdventureWorks2012 database OFFLINE and then ONLINE to implement the change.

USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILE
(
NAME = Test1dat2,
FILENAME = N'c:\t1dat2.ndf'
);
GO

G. Moving tempdb to a new location


The following example moves tempdb from its current location on the disk to another disk location. Because
tempdb is re-created each time the MSSQLSERVER service is started, you do not have to physically move the
data and log files. The files are created when the service is restarted in step 3. Until the service is restarted, tempdb
continues to function in its existing location.
1. Determine the logical file names of the tempdb database and their current location on disk.

SELECT name, physical_name


FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
GO

2. Change the location of each file by using ALTER DATABASE .

USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');
GO

3. Stop and restart the instance of SQL Server.


4. Verify the file change.

SELECT name, physical_name


FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

5. Delete the tempdb.mdf and templog.ldf files from their original location.
H. Making a filegroup the default
The following example makes the Test1FG1 filegroup created in example B the default filegroup. Then, the default
filegroup is reset to the PRIMARY filegroup. Note that PRIMARY must be delimited by brackets or quotation marks.
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP Test1FG1 DEFAULT;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP [PRIMARY] DEFAULT;
GO

I. Adding a Filegroup Using ALTER DATABASE


The following example adds a FILEGROUP that contains the FILESTREAM clause to the FileStreamPhotoDB database.

--Create and add a FILEGROUP that CONTAINS the FILESTREAM clause to


--the FileStreamPhotoDB database.
ALTER DATABASE FileStreamPhotoDB
ADD FILEGROUP TodaysPhotoShoot
CONTAINS FILESTREAM;
GO

--Add a file for storing database photos to FILEGROUP


ALTER DATABASE FileStreamPhotoDB
ADD FILE
(
NAME= 'PhotoShoot1',
FILENAME = 'C:\Users\Administrator\Pictures\TodaysPhotoShoot.ndf'
)
TO FILEGROUP TodaysPhotoShoot;
GO

J. Growing all files in a filegroup when one file meets the autogrow threshold
The following example generates the required ALTER DATABASE statements to modify read-write filegroups with
the AUTOGROW_ALL_FILES setting.
--Generate ALTER DATABASE ... MODIFY FILEGROUP statements
--so that all read-write filegroups grow at the same time.
SET NOCOUNT ON;

DROP TABLE IF EXISTS #tmpdbs


CREATE TABLE #tmpdbs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, isdone bit);

DROP TABLE IF EXISTS #tmpfgs


CREATE TABLE #tmpfgs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, fgname sysname, isdone bit);

INSERT INTO #tmpdbs ([dbid], [dbname], [isdone])


SELECT database_id, name, 0 FROM master.sys.databases (NOLOCK) WHERE is_read_only = 0 AND state = 0;

DECLARE @dbid int, @query VARCHAR(1000), @dbname sysname, @fgname sysname

WHILE (SELECT COUNT(id) FROM #tmpdbs WHERE isdone = 0) > 0


BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid] FROM #tmpdbs WHERE isdone = 0

SET @query = 'SELECT ' + CAST(@dbid AS NVARCHAR) + ', ''' + @dbname + ''', [name], 0 FROM [' + @dbname +
'].sys.filegroups WHERE [type] = ''FG'' AND is_read_only = 0;'
INSERT INTO #tmpfgs
EXEC (@query)

UPDATE #tmpdbs
SET isdone = 1
WHERE [dbid] = @dbid
END;

IF (SELECT COUNT(ID) FROM #tmpfgs) > 0


BEGIN
WHILE (SELECT COUNT(id) FROM #tmpfgs WHERE isdone = 0) > 0
BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid], @fgname = fgname FROM #tmpfgs WHERE isdone = 0

SET @query = 'ALTER DATABASE [' + @dbname + '] MODIFY FILEGROUP [' + @fgname + '] AUTOGROW_ALL_FILES;'

PRINT @query

UPDATE #tmpfgs
SET isdone = 1
WHERE [dbid] = @dbid AND fgname = @fgname
END
END;
GO

See Also
CREATE DATABASE (SQL Server Transact-SQL)
DATABASEPROPERTYEX (Transact-SQL)
DROP DATABASE (Transact-SQL)
sp_spaceused (Transact-SQL)
sys.databases (Transact-SQL)
sys.database_files (Transact-SQL)
sys.data_spaces (Transact-SQL)
sys.filegroups (Transact-SQL)
sys.master_files (Transact-SQL)
Binary Large Object (Blob) Data (SQL Server)
DBCC SHRINKFILE (Transact-SQL)
sp_filestream_force_garbage_collection (Transact-SQL)
Database File Initialization
ALTER DATABASE (Transact-SQL) SET HADR

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This topic contains the ALTER DATABASE syntax for setting Always On availability groups options on a
secondary database. Only one SET HADR option is permitted per ALTER DATABASE statement. These options
are supported only on secondary replicas.
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE database_name
SET HADR
{
{ AVAILABILITY GROUP = group_name | OFF }
| { SUSPEND | RESUME }
}
[;]

Arguments
database_name
Is the name of the secondary database to be modified.
SET HADR
Executes the specified Always On availability groups command on the specified database.
{ AVAILABILITY GROUP = group_name | OFF }
Joins or removes the availability database from the specified availability group, as follows:
group_name
Joins the specified database on the secondary replica that is hosted by the server instance on which you execute
the command to the availability group specified by group_name.
The prerequisites for this operation are as follows:
The database must already have been added to the availability group on the primary replica.
The primary replica must be active. For information about how to troubleshoot an inactive primary replica,
see Troubleshooting Always On Availability Groups Configuration (SQL Server).
The primary replica must be online, and the secondary replica must be connected to the primary replica.
The secondary database must have been restored using WITH NORECOVERY from recent database and
log backups of the primary database, ending with a log backup that is recent enough to permit the
secondary database to catch up to the primary database.
NOTE
To add a database to the availability group, connect to the server instance that hosts the primary replica, and use
the ALTER AVAILABILITY GROUP group_name ADD DATABASE database_name statement.

For more information, see Join a Secondary Database to an Availability Group (SQL Server).
OFF
Removes the specified secondary database from the availability group.
Removing a secondary database can be useful if it has fallen far behind the primary database, and you do
not want to wait for the secondary database to catch up. After removing the secondary database, you can
update it by restoring a sequence of backups ending with a recent log backup (using RESTORE … WITH
NORECOVERY).

IMPORTANT
To completely remove an availability database from an availability group, connect to the server instance that hosts the
primary replica, and use the ALTER AVAILABILITY GROUP group_name REMOVE DATABASE availability_database_name
statement. For more information, see Remove a Primary Database from an Availability Group (SQL Server).

SUSPEND
Suspends data movement on a secondary database. A SUSPEND command returns as soon as it has been
accepted by the replica that hosts the target database, but actually suspending the database occurs
asynchronously.
The scope of the impact depends on where you execute the ALTER DATABASE statement:
If you suspend a secondary database on a secondary replica, only the local secondary database is
suspended. Existing connections on the readable secondary remain usable. New connections to the
suspended database on the readable secondary are not allowed until data movement is resumed.
If you suspend a database on the primary replica, data movement is suspended to the corresponding
secondary databases on every secondary replica. Existing connections on a readable secondary remain
usable, and new read-intent connections will not connect to readable secondary replicas.
When data movement is suspended due to a forced manual failover, connections to the new secondary
replica are not allowed while data movement is suspended.
When a database on a secondary replica is suspended, both the database and replica become
unsynchronized and are marked as NOT SYNCHRONIZED.

IMPORTANT
While a secondary database is suspended, the send queue of the corresponding primary database will accumulate unsent
transaction log records. Connections to the secondary replica return data that was available at the time the data movement
was suspended.
NOTE
Suspending and resuming an Always On secondary database does not directly affect the availability of the primary
database, though suspending a secondary database can impact redundancy and failover capabilities for the primary
database, until the suspended secondary database is resumed. This is in contrast to database mirroring, where the mirroring
state is suspended on both the mirror database and the principal database until mirroring is resumed. Suspending an
Always On primary database suspends data movement on all the corresponding secondary databases, and redundancy and
failover capabilities cease for that database until the primary database is resumed.

For more information, see Suspend an Availability Database (SQL Server).
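For illustration, a minimal sketch that suspends data movement on a secondary database (the database name is reused from the example later in this topic):

ALTER DATABASE AccountsDb1 SET HADR SUSPEND;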


RESUME
Resumes suspended data movement on the specified secondary database. A RESUME command returns as soon
as it has been accepted by the replica that hosts the target database, but actually resuming the database occurs
asynchronously.
The scope of the impact depends on where you execute the ALTER DATABASE statement:
If you resume a secondary database on a secondary replica, only the local secondary database is resumed.
Data movement is resumed unless the database has also been suspended on the primary replica.
If you resume a database on the primary replica, data movement is resumed to every secondary replica on
which the corresponding secondary database has not also been suspended locally. To resume a secondary
database that was individually suspended on a secondary replica, connect to the server instance that hosts
the secondary replica and resume the database there.
Under synchronous-commit mode, the database state changes to SYNCHRONIZING. If no other database
is currently suspended, the replica state also changes to SYNCHRONIZING.
For more information, see Resume an Availability Database (SQL Server).
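A matching sketch that resumes data movement on the same secondary database:

ALTER DATABASE AccountsDb1 SET HADR RESUME;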

Database States
When a secondary database is joined to an availability group, the local secondary replica changes the state of that
secondary database from RESTORING to ONLINE. If a secondary database is removed from the availability
group, it is set back to the RESTORING state by the local secondary replica. This allows you to apply subsequent
log backups from the primary database to that secondary database.

Restrictions
Execute ALTER DATABASE statements outside of both transactions and batches.

Security
Permissions
Requires ALTER permission on the database. Joining a database to an availability group requires membership in
the db_owner fixed database role.

Examples
The following example joins the secondary database, AccountsDb1 , to the local secondary replica of the
AccountsAG availability group.

ALTER DATABASE AccountsDb1 SET HADR AVAILABILITY GROUP = AccountsAG;


NOTE
To see this Transact-SQL statement used in context, see Create an Availability Group (Transact-SQL).

See Also
ALTER DATABASE (Transact-SQL)
ALTER AVAILABILITY GROUP (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
Overview of Always On Availability Groups (SQL Server)
Troubleshoot Always On Availability Groups Configuration (SQL Server)
ALTER DATABASE SCOPED CREDENTIAL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Changes the properties of a database scoped credential.
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE SCOPED CREDENTIAL credential_name WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]

Arguments
credential_name
Specifies the name of the database scoped credential that is being altered.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server. To import a file from Azure Blob
storage, the identity name must be SHARED ACCESS SIGNATURE. For more information about shared access
signatures, see Using Shared Access Signatures (SAS).
SECRET ='secret'
Specifies the secret required for outgoing authentication. secret is required to import a file from Azure Blob
storage. secret may be optional for other purposes.

WARNING
The SAS key value might begin with a '?' (question mark). When you use the SAS key, you must remove the leading '?'.
Otherwise your efforts might be blocked.
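As a hedged illustration (the credential name and token are hypothetical), a SAS secret stored without the leading '?':

ALTER DATABASE SCOPED CREDENTIAL MyAzureBlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2017-04-17&ss=b&srt=co&sp=rl&sig=<signature>'; -- note: no leading '?'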

Remarks
When a database scoped credential is changed, the values of both identity_name and secret are reset. If the
optional SECRET argument is not specified, the value of the stored secret will be set to NULL.
The secret is encrypted by using the service master key. If the service master key is regenerated, the secret is
reencrypted by using the new service master key.
Information about database scoped credentials is visible in the sys.database_scoped_credentials catalog view.

Permissions
Requires ALTER permission on the credential.
Examples
A. Changing the password of a database scoped credential
The following example changes the secret stored in a database scoped credential called AppCred. The database
scoped credential contains the Windows login RettigB and its password. The new password is added to the
database scoped credential using the SECRET clause.

ALTER DATABASE SCOPED CREDENTIAL AppCred WITH IDENTITY = 'RettigB',
    SECRET = 'sdrlk8$40-dksli87nNN8';
GO

B. Removing the password from a credential
The following example removes the password from a database scoped credential named Frames . The database
scoped credential contains Windows login Aboulrus8 and a password. After the statement is executed, the
database scoped credential will have a NULL password because the SECRET option is not specified.

ALTER DATABASE SCOPED CREDENTIAL Frames WITH IDENTITY = 'Aboulrus8';
GO

See Also
Credentials (Database Engine)
CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL)
DROP DATABASE SCOPED CREDENTIAL (Transact-SQL)
sys.database_scoped_credentials
CREATE CREDENTIAL (Transact-SQL)
sys.credentials (Transact-SQL)
ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This statement enables several database configuration settings at the individual database level. This statement is
available in Azure SQL Database and in SQL Server beginning with SQL Server 2016 (13.x). Those settings are:
Clear procedure cache.
Set the MAXDOP parameter to an arbitrary value (1, 2, ...) for the primary database, based on what works best
for that particular database, and set a different value (for example, 0) for all secondary databases (such as those
used for reporting queries).
Set the query optimizer cardinality estimation model independent of the database compatibility level.
Enable or disable parameter sniffing at the database level.
Enable or disable query optimization hotfixes at the database level.
Enable or disable the identity cache at the database level.
Enable or disable a compiled plan stub to be stored in cache when a batch is compiled for the first time.
Enable or disable collection of execution statistics for natively compiled T-SQL modules.
Enable or disable online by default options for DDL statements that support the ONLINE= syntax.
Enable or disable resumable by default options for DDL statements that support the RESUMABLE= syntax.
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE SCOPED CONFIGURATION
{
      { [ FOR SECONDARY ] SET <set_options> }
    | CLEAR PROCEDURE_CACHE
    | SET <set_options>
}
[;]

<set_options> ::=
{
MAXDOP = { <value> | PRIMARY}
| LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY}
| PARAMETER_SNIFFING = { ON | OFF | PRIMARY}
| QUERY_OPTIMIZER_HOTFIXES = { ON | OFF | PRIMARY}
| IDENTITY_CACHE = { ON | OFF }
| OPTIMIZE_FOR_AD_HOC_WORKLOADS = { ON | OFF }
| XTP_PROCEDURE_EXECUTION_STATISTICS = { ON | OFF }
| XTP_QUERY_EXECUTION_STATISTICS = { ON | OFF }
| ELEVATE_ONLINE = { OFF | WHEN_SUPPORTED | FAIL_UNSUPPORTED }
| ELEVATE_RESUMABLE = { OFF | WHEN_SUPPORTED | FAIL_UNSUPPORTED }
}

Arguments
FOR SECONDARY
Specifies the settings for secondary databases (all secondary databases must have identical values).
MAXDOP = {<value> | PRIMARY }
<value>
Specifies the default MAXDOP setting that should be used for statements. 0 is the default value and indicates that
the server configuration will be used instead. The MAXDOP at the database scope overrides (unless it is set to 0)
the max degree of parallelism set at the server level by sp_configure. Query hints can still override the database
scoped MAXDOP in order to tune specific queries that need a different setting. All these settings are limited by the
MAXDOP set for the workload group.
You can use the max degree of parallelism option to limit the number of processors to use in parallel plan
execution. SQL Server considers parallel execution plans for queries, index data definition language (DDL )
operations, parallel insert, online alter column, parallel stats collection, and static and keyset-driven cursor
population.
To set this option at the instance level, see Configure the max degree of parallelism Server Configuration Option.

TIP
To accomplish this at the query level, add the MAXDOP query hint.

PRIMARY
Can only be set for the secondaries, while the database is on the primary, and indicates that the configuration will
be the one set for the primary. If the configuration for the primary changes, the value on the secondaries will
change accordingly without the need to set the secondaries value explicitly. PRIMARY is the default setting for the
secondaries.
LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY }
Enables you to set the query optimizer cardinality estimation model to the SQL Server 2012 and earlier version
independent of the compatibility level of the database. The default is OFF, which sets the query optimizer
cardinality estimation model based on the compatibility level of the database. Setting this to ON is equivalent to
enabling Trace Flag 9481.

TIP
To accomplish this at the query level, add the QUERYTRACEON query hint. Starting with SQL Server 2016 (13.x) SP1, to
accomplish this at the query level, add the USE HINT query hint instead of using the trace flag.

PRIMARY
This value is only valid on secondaries while the database is on the primary, and specifies that the query optimizer
cardinality estimation model setting on all secondaries will be the value set for the primary. If the configuration on
the primary for the query optimizer cardinality estimation model changes, the value on the secondaries will change
accordingly. PRIMARY is the default setting for the secondaries.
PARAMETER_SNIFFING = { ON | OFF | PRIMARY }
Enables or disables parameter sniffing. The default is ON. Setting this to OFF is equivalent to enabling Trace Flag 4136.

TIP
To accomplish this at the query level, see the OPTIMIZE FOR UNKNOWN query hint. Starting with SQL Server 2016 (13.x)
SP1, to accomplish this at the query level, the USE HINT query hint is also available.
PRIMARY
This value is only valid on secondaries while the database is on the primary, and specifies that the value for this
setting on all secondaries will be the value set for the primary. If the configuration on the primary for using
parameter sniffing changes, the value on the secondaries will change accordingly without the need to set the
secondaries value explicitly. This is the default setting for the secondaries.
QUERY_OPTIMIZER_HOTFIXES = { ON | OFF | PRIMARY }
Enables or disables query optimization hotfixes regardless of the compatibility level of the database. The default is
OFF, which disables query optimization hotfixes that were released after the highest available compatibility level
was introduced for a specific version (post-RTM ). Setting this to ON is equivalent to enabling Trace Flag 4199.

TIP
To accomplish this at the query level, add the QUERYTRACEON query hint. Starting with SQL Server 2016 (13.x) SP1, to
accomplish this at the query level, add the USE HINT query hint instead of using the trace flag.

PRIMARY
This value is only valid on secondaries while the database is on the primary, and specifies that the value for this
setting on all secondaries is the value set for the primary. If the configuration for the primary changes, the value on
the secondaries changes accordingly without the need to set the secondaries value explicitly. This is the default
setting for the secondaries.
CLEAR PROCEDURE_CACHE
Clears the procedure (plan) cache for the database. This can be executed both on the primary and the secondaries.
IDENTITY_CACHE = { ON | OFF }
Applies to: SQL Server 2017 (14.x) and Azure SQL Database
Enables or disables identity cache at the database level. The default is ON. Identity caching is used to improve
INSERT performance on tables with identity columns. To avoid gaps in the values of an identity column in cases
where the server restarts unexpectedly or fails over to a secondary server, disable the IDENTITY_CACHE option.
This option is similar to the existing Trace Flag 272, except that it can be set at the database level rather than only at
the server level.

NOTE
This option can only be set for the PRIMARY. For more information, see identity columns.

OPTIMIZE_FOR_AD_HOC_WORKLOADS = { ON | OFF }
Applies to: Azure SQL Database
Enables or disables a compiled plan stub to be stored in cache when a batch is compiled for the first time. The
default is OFF. Once the database scoped configuration OPTIMIZE_FOR_AD_HOC_WORKLOADS is enabled for
a database, a compiled plan stub will be stored in cache when a batch is compiled for the first time. Plan stubs have
a smaller memory footprint compared to the size of the full compiled plan. If a batch is compiled or executed again,
the compiled plan stub will be removed and replaced with a full compiled plan.
XTP_PROCEDURE_EXECUTION_STATISTICS = { ON | OFF }
Applies to: Azure SQL Database
Enables or disables collection of execution statistics at the module-level for natively compiled T-SQL modules in
the current database. The default is OFF. The execution statistics are reflected in sys.dm_exec_procedure_stats.
Module-level execution statistics for natively compiled T-SQL modules are collected if either this option is ON, or
if statistics collection is enabled through sp_xtp_control_proc_exec_stats.
XTP_QUERY_EXECUTION_STATISTICS = { ON | OFF }
Applies to: Azure SQL Database
Enables or disables collection of execution statistics at the statement-level for natively compiled T-SQL modules in
the current database. The default is OFF. The execution statistics are reflected in sys.dm_exec_query_stats and in
Query Store.
Statement-level execution statistics for natively compiled T-SQL modules are collected if either this option is ON,
or if statistics collection is enabled through sp_xtp_control_query_exec_stats.
For more details about performance monitoring of natively-compiled T-SQL modules see Monitoring
Performance of Natively Compiled Stored Procedures.
ELEVATE_ONLINE = { OFF | WHEN_SUPPORTED | FAIL_UNSUPPORTED }
Applies to: Azure SQL Database (feature is in public preview )
Allows you to select options to cause the engine to automatically elevate supported operations to online. The
default is OFF, which means operations will not be elevated to online unless specified in the statement.
sys.database_scoped_configurations reflects the current value of ELEVATE_ONLINE. These options will only apply
to operations that are generally supported for online.
FAIL_UNSUPPORTED
This value elevates all supported DDL operations to ONLINE. Operations that do not support online execution will
fail and throw a warning.
WHEN_SUPPORTED
This value elevates operations that support ONLINE. Operations that do not support online will be run offline.

NOTE
You can override the default setting by submitting a statement with the ONLINE option specified.

ELEVATE_RESUMABLE= { OFF | WHEN_SUPPORTED | FAIL_UNSUPPORTED }


Applies to: Azure SQL Database (feature is in public preview )
Allows you to select options to cause the engine to automatically elevate supported operations to resumable. The
default is OFF, which means operations are not elevated to resumable unless specified in the statement.
sys.database_scoped_configurations reflects the current value of ELEVATE_RESUMABLE. These options only apply
to operations that are generally supported for resumable.
FAIL_UNSUPPORTED
This value elevates all supported DDL operations to RESUMABLE. Operations that do not support resumable
execution fail and throw a warning.
WHEN_SUPPORTED
This value elevates operations that support RESUMABLE. Operations that do not support resumable are run non-
resumably.
NOTE
You can override the default setting by submitting a statement with the RESUMABLE option specified.

Permissions
Requires ALTER ANY DATABASE SCOPED CONFIGURATION permission on the database. This permission can be
granted by a user with CONTROL permission on a database.

General Remarks
While you can configure secondary databases to have different scoped configuration settings from their primary,
all secondary databases use the same configuration. Different settings cannot be configured for individual
secondaries.
Executing this statement clears the procedure cache in the current database, which means that all queries have to
recompile.
For queries that use three-part names, the settings of the current database connection are honored, except for
SQL modules (such as procedures, functions, and triggers) that are compiled in the current database context, which
use the options of the database in which they reside.
The ALTER_DATABASE_SCOPED_CONFIGURATION event is added as a DDL event that can be used to fire a
DDL trigger. This is a child of the ALTER_DATABASE_EVENTS trigger group.
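As a hedged sketch (the trigger name is illustrative, and this assumes the event is available at database scope), a DDL trigger that prints the statement text of each scoped-configuration change:

CREATE TRIGGER trg_AuditScopedConfig
ON DATABASE
FOR ALTER_DATABASE_SCOPED_CONFIGURATION
AS
BEGIN
    -- EVENTDATA() returns XML describing the DDL event that fired this trigger.
    PRINT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
END;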
Database scoped configuration settings will be carried over with the database. This means that when a given
database is restored or attached, the existing configuration settings remain.

Limitations and Restrictions


MAXDOP
The granular settings can override the global ones, and Resource Governor can cap all other MAXDOP settings.
The logic for the MAXDOP setting is the following:
Query hint overrides both the sp_configure and the database scoped setting. If the resource group
MAXDOP is set for the workload group:
If the query hint is set to 0, it is overridden by the resource governor setting.
If the query hint is not 0, it is capped by the resource governor setting.
The DB scoped setting (unless it’s 0) overrides the sp_configure setting unless there is a query hint and is
capped by the resource governor setting.
The sp_configure setting is overridden by the resource governor setting.
QUERY_OPTIMIZER_HOTFIXES
When the QUERYTRACEON hint is used to enable the legacy cardinality estimation model or query optimizer
hotfixes, the query hint and the database scoped configuration setting combine as an OR condition: if either is
enabled, the options apply.
GeoDR
Readable secondary databases, such as in Always On availability groups and geo-replication, use the secondary value
by checking the state of the database. Even though recompilation does not occur on failover, and technically the new
primary runs queries that use the secondary settings, the intent is that the settings between primary and secondary
vary only when the workloads differ; cached queries therefore keep using the optimal settings, whereas new queries
pick up the new settings that are appropriate for them.
DacFx
Because ALTER DATABASE SCOPED CONFIGURATION is a new feature in Azure SQL Database and in SQL Server
beginning with SQL Server 2016 that affects the database schema, exports of the schema (with or without data)
cannot be imported into an older version of SQL Server, such as SQL Server 2012 (11.x) or SQL Server 2014 (12.x).
For example, an export to a DACPAC or a BACPAC from a SQL Database or SQL Server 2016 (13.x) database that
used this new feature cannot be imported into a down-level server.
ELEVATE_ONLINE
This option only applies to DDL statements that support the WITH (ONLINE = ...) syntax. XML indexes are not
affected.
ELEVATE_RESUMABLE
This option only applies to DDL statements that support the WITH (RESUMABLE = ...) syntax. XML indexes are not
affected.

Metadata

The sys.database_scoped_configurations (Transact-SQL) system view provides information about scoped
configurations within a database. Database scoped configuration options only show up in
sys.database_scoped_configurations as they are overrides to server-wide default settings. The sys.configurations
(Transact-SQL) system view only shows server-wide settings.

Examples
These examples demonstrate the use of ALTER DATABASE SCOPED CONFIGURATION.
A. Grant Permission
This example grants the permission required to execute ALTER DATABASE SCOPED CONFIGURATION
to user [Joe].

GRANT ALTER ANY DATABASE SCOPED CONFIGURATION to [Joe] ;

B. Set MAXDOP
This example sets MAXDOP = 1 for a primary database and MAXDOP = 4 for a secondary database in a geo-
replication scenario.

ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 1 ;


ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP=4 ;

This example sets MAXDOP for a secondary database to be the same as it is set for its primary database in a geo-
replication scenario.

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP=PRIMARY ;

C. Set LEGACY_CARDINALITY_ESTIMATION
This example sets LEGACY_CARDINALITY_ESTIMATION to ON for a secondary database in a geo-replication
scenario.

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET LEGACY_CARDINALITY_ESTIMATION=ON ;

This example sets LEGACY_CARDINALITY_ESTIMATION for a secondary database to match the setting for its
primary database in a geo-replication scenario.

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET LEGACY_CARDINALITY_ESTIMATION=PRIMARY ;

D. Set PARAMETER_SNIFFING
This example sets PARAMETER_SNIFFING to OFF for a primary database in a geo-replication scenario.

ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING =OFF ;

This example sets PARAMETER_SNIFFING to OFF for a secondary database in a geo-replication scenario.

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET PARAMETER_SNIFFING=OFF ;

This example sets PARAMETER_SNIFFING for a secondary database to match the setting on its primary database
in a geo-replication scenario.

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET PARAMETER_SNIFFING=PRIMARY ;

E. Set QUERY_OPTIMIZER_HOTFIXES
Set QUERY_OPTIMIZER_HOTFIXES to ON for a primary database in a geo-replication scenario.

ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES=ON ;

F. Clear Procedure Cache


This example clears the procedure cache (possible only for a primary database).

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE ;

G. Set IDENTITY_CACHE
Applies to: SQL Server 2017 (14.x) and SQL Database (feature is in public preview )
This example disables the identity cache.

ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE=OFF ;

H. Set OPTIMIZE_FOR_AD_HOC_WORKLOADS
Applies to: SQL Database
This example enables a compiled plan stub to be stored in cache when a batch is compiled for the first time.

ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZE_FOR_AD_HOC_WORKLOADS = ON;

I. Set ELEVATE_ONLINE
Applies to: Azure SQL Database (feature is in public preview )
This example sets ELEVATE_ONLINE to FAIL_UNSUPPORTED.

ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_ONLINE=FAIL_UNSUPPORTED ;

J. Set ELEVATE_RESUMABLE
Applies to: Azure SQL Database (feature is in public preview )
This example sets ELEVATE_RESUMABLE to WHEN_SUPPORTED.

ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_RESUMABLE=WHEN_SUPPORTED ;

Additional Resources
MAXDOP Resources
Degree of Parallelism
Recommendations and guidelines for the "max degree of parallelism" configuration option in SQL Server
LEGACY_CARDINALITY_ESTIMATION Resources
Cardinality Estimation (SQL Server)
Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator
PARAMETER_SNIFFING Resources
Parameter Sniffing
"I smell a parameter!"
QUERY_OPTIMIZER_HOTFIXES Resources
Trace Flags
SQL Server query optimizer hotfix trace flag 4199 servicing model
ELEVATE_ONLINE Resources
Guidelines for Online Index Operations
ELEVATE_RESUMABLE Resources
Guidelines for Online Index Operations

More information
sys.database_scoped_configurations
sys.configurations
Databases and Files Catalog Views
Server Configuration Options
ALTER DATABASE SET Options (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This topic contains the ALTER DATABASE syntax that is related to setting database options in SQL Server. For
other ALTER DATABASE syntax, see the following topics.
ALTER DATABASE (Transact-SQL )
ALTER DATABASE (Azure SQL Database)
ALTER DATABASE (Azure SQL Data Warehouse)
ALTER DATABASE (Parallel Data Warehouse)
Database mirroring, Always On availability groups, and compatibility levels are SET options but are described in
separate topics because of their length. For more information, see ALTER DATABASE Database Mirroring
(Transact-SQL ), ALTER DATABASE SET HADR (Transact-SQL ), and ALTER DATABASE Compatibility Level
(Transact-SQL ).

NOTE
Many database set options can be configured for the current session by using SET Statements (Transact-SQL) and are often
configured by applications when they connect. Session level set options override the ALTER DATABASE SET values. The
database options described below are values that can be set for sessions that do not explicitly provide other set option
values.

Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE { database_name | CURRENT }
SET
{
<optionspec> [ ,...n ] [ WITH <termination> ]
}

<optionspec> ::=
{
<auto_option>
| <automatic_tuning_option>
| <change_tracking_option>
| <containment_option>
| <cursor_option>
| <database_mirroring_option>
| <date_correlation_optimization_option>
| <db_encryption_option>
| <db_state_option>
| <db_update_option>
| <db_user_access_option>
| <delayed_durability_option>
| <external_access_option>
| FILESTREAM ( <FILESTREAM_option> )
| <HADR_options>
| <mixed_page_allocation_option>
| <parameterization_option>
| <query_store_options>
| <recovery_option>
| <remote_data_archive_option>
| <service_broker_option>
| <snapshot_option>
| <sql_option>
| <target_recovery_time_option>
| <termination>
}

<auto_option> ::=
{
AUTO_CLOSE { ON | OFF }
| AUTO_CREATE_STATISTICS { OFF | ON [ ( INCREMENTAL = { ON | OFF } ) ] }
| AUTO_SHRINK { ON | OFF }
| AUTO_UPDATE_STATISTICS { ON | OFF }
| AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}

<automatic_tuning_option> ::=
{
AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = { ON | OFF } )
}

<change_tracking_option> ::=
{
CHANGE_TRACKING
{
= OFF
| = ON [ ( <change_tracking_option_list > [,...n] ) ]
| ( <change_tracking_option_list> [,...n] )
}
}

<change_tracking_option_list> ::=
{
AUTO_CLEANUP = { ON | OFF }
| CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
}

<containment_option> ::=
CONTAINMENT = { NONE | PARTIAL }

<cursor_option> ::=
{
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
| CURSOR_DEFAULT { LOCAL | GLOBAL }
}

<database_mirroring_option>
ALTER DATABASE Database Mirroring

<date_correlation_optimization_option> ::=
DATE_CORRELATION_OPTIMIZATION { ON | OFF }

<db_encryption_option> ::=
ENCRYPTION { ON | OFF }

<db_state_option> ::=
{ ONLINE | OFFLINE | EMERGENCY }

<db_update_option> ::=
{ READ_ONLY | READ_WRITE }

<db_user_access_option> ::=
{ SINGLE_USER | RESTRICTED_USER | MULTI_USER }

<delayed_durability_option> ::=
DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }

<external_access_option> ::=
{
DB_CHAINING { ON | OFF }
| TRUSTWORTHY { ON | OFF }
| DEFAULT_FULLTEXT_LANGUAGE = { <lcid> | <language name> | <language alias> }
| DEFAULT_LANGUAGE = { <lcid> | <language name> | <language alias> }
| NESTED_TRIGGERS = { OFF | ON }
| TRANSFORM_NOISE_WORDS = { OFF | ON }
| TWO_DIGIT_YEAR_CUTOFF = { 1753, ..., 2049, ..., 9999 }
}
<FILESTREAM_option> ::=
{
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
| DIRECTORY_NAME = <directory_name>
}
<HADR_options> ::=
ALTER DATABASE SET HADR

<mixed_page_allocation_option> ::=
MIXED_PAGE_ALLOCATION { OFF | ON }

<parameterization_option> ::=
PARAMETERIZATION { SIMPLE | FORCED }

<query_store_options> ::=
{
QUERY_STORE
{
= OFF
| = ON [ ( <query_store_option_list> [,...n] ) ]
| ( < query_store_option_list> [,...n] )
| CLEAR [ ALL ]
}
}

<query_store_option_list> ::=
{
OPERATION_MODE = { READ_WRITE | READ_ONLY }
| CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = number )
| DATA_FLUSH_INTERVAL_SECONDS = number
| MAX_STORAGE_SIZE_MB = number
| INTERVAL_LENGTH_MINUTES = number
| SIZE_BASED_CLEANUP_MODE = [ AUTO | OFF ]
| QUERY_CAPTURE_MODE = [ ALL | AUTO | NONE ]
| MAX_PLANS_PER_QUERY = number
| WAIT_STATS_CAPTURE_MODE = [ ON | OFF ]
}

<recovery_option> ::=
{
RECOVERY { FULL | BULK_LOGGED | SIMPLE }
| TORN_PAGE_DETECTION { ON | OFF }
| PAGE_VERIFY { CHECKSUM | TORN_PAGE_DETECTION | NONE }
}

<remote_data_archive_option> ::=
{
REMOTE_DATA_ARCHIVE =
{
ON ( SERVER = <server_name> ,
{ CREDENTIAL = <db_scoped_credential_name>
| FEDERATED_SERVICE_ACCOUNT = ON | OFF
}
)
| OFF
}
}
}

<service_broker_option> ::=
{
ENABLE_BROKER
| DISABLE_BROKER
| NEW_BROKER
| ERROR_BROKER_CONVERSATIONS
| HONOR_BROKER_PRIORITY { ON | OFF}
}

<snapshot_option> ::=
{
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
| READ_COMMITTED_SNAPSHOT {ON | OFF }
| MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = {ON | OFF }
}
<sql_option> ::=
{
ANSI_NULL_DEFAULT { ON | OFF }
| ANSI_NULLS { ON | OFF }
| ANSI_PADDING { ON | OFF }
| ANSI_WARNINGS { ON | OFF }
| ARITHABORT { ON | OFF }
| COMPATIBILITY_LEVEL = { 90 | 100 | 110 | 120 | 130 | 140 }
| CONCAT_NULL_YIELDS_NULL { ON | OFF }
| NUMERIC_ROUNDABORT { ON | OFF }
| QUOTED_IDENTIFIER { ON | OFF }
| RECURSIVE_TRIGGERS { ON | OFF }
}

<target_recovery_time_option> ::=
TARGET_RECOVERY_TIME = target_recovery_time { SECONDS | MINUTES }

<termination> ::=
{
ROLLBACK AFTER integer [ SECONDS ]
| ROLLBACK IMMEDIATE
| NO_WAIT
}

Arguments
database_name
Is the name of the database to be modified.
CURRENT
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
CURRENT performs the action in the current database. CURRENT is not supported for all options in all contexts. If
CURRENT fails, provide the database name.
<auto_option> ::=
Controls automatic options.
AUTO_CLOSE { ON | OFF }
ON
The database is shut down cleanly and its resources are freed after the last user exits.
The database automatically reopens when a user tries to use the database again. For example, by issuing a
USE database_name statement. If the database is shut down cleanly while AUTO_CLOSE is set to ON, the
database is not reopened until a user tries to use the database the next time the Database Engine is restarted.
OFF
The database remains open after the last user exits.
The AUTO_CLOSE option is useful for desktop databases because it allows for database files to be managed as
regular files. They can be moved, copied to make backups, or even e-mailed to other users. The AUTO_CLOSE
process is asynchronous; repeatedly opening and closing the database does not reduce performance.

NOTE
The AUTO_CLOSE option is not available in a Contained Database or on SQL Database.

The status of this option can be determined by examining the is_auto_close_on column in the sys.databases
catalog view or the IsAutoClose property of the DATABASEPROPERTYEX function.

NOTE
When AUTO_CLOSE is ON, some columns in the sys.databases catalog view and DATABASEPROPERTYEX function will
return NULL because the database is unavailable to retrieve the data. To resolve this, execute a USE statement to open the
database.

NOTE
Database mirroring requires AUTO_CLOSE OFF.

When the database is set to AUTO_CLOSE = ON, an operation that initiates an automatic database shutdown
clears the plan cache for the instance of SQL Server. Clearing the plan cache causes a recompilation of all
subsequent execution plans and can cause a sudden, temporary decrease in query performance. In SQL Server
2005 Service Pack 2 and higher, for each cleared cachestore in the plan cache, the SQL Server error log contains
the following informational message: " SQL Server has encountered %d occurrence(s) of cachestore flush for the
'%s' cachestore (part of plan cache) due to some database maintenance or reconfigure operations". This
message is logged every five minutes as long as the cache is flushed within that time interval.
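Given these caveats, a minimal sketch that disables automatic close (the database name is hypothetical):

ALTER DATABASE MyDatabase SET AUTO_CLOSE OFF;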
AUTO_CREATE_STATISTICS { ON | OFF }
ON
The query optimizer creates statistics on single columns in query predicates, as necessary, to improve query
plans and query performance. These single-column statistics are created when the query optimizer compiles
queries. The single-column statistics are created only on columns that are not already the first column of an
existing statistics object.
The default is ON. We recommend that you use the default setting for most databases.
OFF
The query optimizer does not create statistics on single columns in query predicates when it is compiling
queries. Setting this option to OFF can cause suboptimal query plans and degraded query performance.
The status of this option can be determined by examining the is_auto_create_stats_on column in the
sys.databases catalog view or the IsAutoCreateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
INCREMENTAL = ON | OFF
When AUTO_CREATE_STATISTICS is ON, and INCREMENTAL is set to ON, automatically created stats are
created as incremental whenever incremental stats is supported. The default value is OFF. For more information,
see CREATE STATISTICS (Transact-SQL ).
Applies to: SQL Server 2014 (12.x) through SQL Server 2017, SQL Database.
AUTO_SHRINK { ON | OFF }
ON
The database files are candidates for periodic shrinking.
Both data file and log files can be automatically shrunk. AUTO_SHRINK reduces the size of the transaction log
only if the database is set to SIMPLE recovery model or if the log is backed up. When set to OFF, the database
files are not automatically shrunk during periodic checks for unused space.
The AUTO_SHRINK option causes files to be shrunk when more than 25 percent of the file contains unused
space. The file is shrunk to a size where 25 percent of the file is unused space, or to the size of the file when it
was created, whichever is larger.
You cannot shrink a read-only database.
OFF
The database files are not automatically shrunk during periodic checks for unused space.
The status of this option can be determined by examining the is_auto_shrink_on column in the sys.databases
catalog view or the IsAutoShrink property of the DATABASEPROPERTYEX function.

NOTE
The AUTO_SHRINK option is not available in a Contained Database.

AUTO_UPDATE_STATISTICS { ON | OFF }
ON
Specifies that the query optimizer updates statistics when they are used by a query and when they might be out-
of-date. Statistics become out-of-date after insert, update, delete, or merge operations change the data
distribution in the table or indexed view. The query optimizer determines when statistics might be out-of-date by
counting the number of data modifications since the last statistics update and comparing the number of
modifications to a threshold. The threshold is based on the number of rows in the table or indexed view.
The query optimizer checks for out-of-date statistics before compiling a query and before executing a cached
query plan. Before compiling a query, the query optimizer uses the columns, tables, and indexed views in the
query predicate to determine which statistics might be out-of-date. Before executing a cached query plan, the
Database Engine verifies that the query plan references up-to-date statistics.
The AUTO_UPDATE_STATISTICS option applies to statistics created for indexes, single-columns in query
predicates, and statistics that are created by using the CREATE STATISTICS statement. This option also applies
to filtered statistics.
The default is ON. We recommend that you use the default setting for most databases.
Use the AUTO_UPDATE_STATISTICS_ASYNC option to specify whether the statistics are updated
synchronously or asynchronously.
OFF
Specifies that the query optimizer does not update statistics when they are used by a query and when they
might be out-of-date. Setting this option to OFF can cause suboptimal query plans and degraded query
performance.
The status of this option can be determined by examining the is_auto_update_stats_on column in the
sys.databases catalog view or the IsAutoUpdateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
ON
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are asynchronous. The query
optimizer does not wait for statistics updates to complete before it compiles queries.
Setting this option to ON has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
By default, the AUTO_UPDATE_STATISTICS_ASYNC option is set to OFF, and the query optimizer updates
statistics synchronously.
OFF
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are synchronous. The query
optimizer waits for statistics updates to complete before it compiles queries.
Setting this option to OFF has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
The status of this option can be determined by examining the is_auto_update_stats_async_on column in the
sys.databases catalog view.
For more information that describes when to use synchronous or asynchronous statistics updates, see the
section "Using the Database-Wide Statistics Options" in Statistics.
<automatic_tuning_option> ::=
Applies to: SQL Server 2017 (14.x).
Enables or disables FORCE_LAST_GOOD_PLAN automatic tuning option.
FORCE_LAST_GOOD_PLAN = { ON | OFF }
ON
The Database Engine automatically forces the last known good plan on Transact-SQL queries where a new
SQL plan causes performance regressions. The Database Engine continuously monitors the query performance of
the Transact-SQL query with the forced plan. If there are performance gains, the Database Engine keeps using the
last known good plan. If performance gains are not detected, the Database Engine produces a new SQL plan.
The statement will fail if Query Store is not enabled or if it is not in read-write mode.
OFF
The Database Engine reports potential query performance regressions caused by SQL plan changes in the
sys.dm_db_tuning_recommendations view. However, these recommendations are not automatically applied. Users
can monitor active recommendations and fix identified problems by applying the Transact-SQL scripts that are
shown in the view. This is the default value.
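For example, a minimal sketch that enables this option for the current database:

ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);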
<change_tracking_option> ::=
Applies to: SQL Server and SQL Database.
Controls change tracking options. You can enable change tracking, set options, change options, and disable
change tracking. For examples, see the Examples section later in this topic.
ON
Enables change tracking for the database. When you enable change tracking, you can also set the
AUTO_CLEANUP and CHANGE_RETENTION options.
AUTO_CLEANUP = { ON | OFF }
ON
Change tracking information is automatically removed after the specified retention period.
OFF
Change tracking data is not removed from the database.
CHANGE_RETENTION =retention_period { DAYS | HOURS | MINUTES }
Specifies the minimum period for keeping change tracking information in the database. Data is removed only
when the AUTO_CLEANUP value is ON.
retention_period is an integer that specifies the numerical component of the retention period.
The default retention period is 2 days. The minimum retention period is 1 minute. The default retention type is
DAYS.
OFF
Disables change tracking for the database. You must disable change tracking on all tables before you can disable
change tracking for the database.
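As an illustration (the database name is hypothetical), enabling change tracking with explicit retention options:

ALTER DATABASE MyDatabase
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);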
<containment_option> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017. Not available in SQL Database.
Controls database containment options.
CONTAINMENT = { NONE | PARTIAL }
NONE
The database is not a contained database.
PARTIAL
The database is a contained database. Setting database containment to partial will fail if the database has
replication, change data capture, or change tracking enabled. Error checking stops after one failure. For more
information about contained databases, see Contained Databases.

NOTE
Containment cannot be configured in SQL Database. Containment is not explicitly designated, but SQL Database can use
contained features such as contained database users.

<cursor_option> ::=
Controls cursor options.
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
ON
Any cursors open when a transaction is committed or rolled back are closed.
OFF
Cursors remain open when a transaction is committed; rolling back a transaction closes any cursors except those
defined as INSENSITIVE or STATIC.
Connection-level settings that are set by using the SET statement override the default database setting for
CURSOR_CLOSE_ON_COMMIT. By default, ODBC and OLE DB clients issue a connection-level SET statement
setting CURSOR_CLOSE_ON_COMMIT to OFF for the session when connecting to an instance of SQL Server.
For more information, see SET CURSOR_CLOSE_ON_COMMIT (Transact-SQL ).
The status of this option can be determined by examining the is_cursor_close_on_commit_on column in the
sys.databases catalog view or the IsCloseCursorsOnCommitEnabled property of the DATABASEPROPERTYEX
function.
CURSOR_DEFAULT { LOCAL | GLOBAL }
Applies to: SQL Server. Not available in SQL Database.
Controls whether cursor scope uses LOCAL or GLOBAL.
LOCAL
When LOCAL is specified and a cursor is not defined as GLOBAL when created, the scope of the cursor is local
to the batch, stored procedure, or trigger in which the cursor was created. The cursor name is valid only within
this scope. The cursor can be referenced by local cursor variables in the batch, stored procedure, or trigger, or a
stored procedure OUTPUT parameter. The cursor is implicitly deallocated when the batch, stored procedure, or
trigger ends, unless it was passed back in an OUTPUT parameter. If the cursor is passed back in an OUTPUT
parameter, the cursor is deallocated when the last variable that references it is deallocated or goes out of scope.
GLOBAL
When GLOBAL is specified, and a cursor is not defined as LOCAL when created, the scope of the cursor is global
to the connection. The cursor name can be referenced in any stored procedure or batch executed by the
connection.
The cursor is implicitly deallocated only at disconnect. For more information, see DECLARE CURSOR
(Transact-SQL).
The status of this option can be determined by examining the is_local_cursor_default column in the sys.databases
catalog view or the IsLocalCursorsDefault property of the DATABASEPROPERTYEX function.
<database_mirroring>
Applies to: SQL Server. Not available in SQL Database.
For the argument descriptions, see ALTER DATABASE Database Mirroring (Transact-SQL ).
<date_correlation_optimization_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls the date_correlation_optimization option.
DATE_CORRELATION_OPTIMIZATION { ON | OFF }
ON
SQL Server maintains correlation statistics between any two tables in the database that are linked by a
FOREIGN KEY constraint and have datetime columns.
OFF
Correlation statistics are not maintained.
To set DATE_CORRELATION_OPTIMIZATION to ON, there must be no active connections to the database
except for the connection that is executing the ALTER DATABASE statement. Afterwards, multiple connections
are supported.
The current setting of this option can be determined by examining the is_date_correlation_on column in the
sys.databases catalog view.
<db_encryption_option> ::=
Controls the database encryption state.
ENCRYPTION {ON | OFF }
Sets the database to be encrypted (ON ) or not encrypted (OFF ). For more information about database
encryption, see Transparent Data Encryption (TDE ), and Transparent Data Encryption with Azure SQL Database.
When encryption is enabled at the database level all filegroups will be encrypted. Any new filegroups will inherit
the encrypted property. If any filegroups in the database are set to READ ONLY, the database encryption
operation will fail.
You can see the encryption state of the database by using the sys.dm_database_encryption_keys dynamic
management view.
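A minimal sketch, assuming a database encryption key already exists for the database (the database name is hypothetical):

ALTER DATABASE MyDatabase SET ENCRYPTION ON;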
<db_state_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls the state of the database.
OFFLINE
The database is closed, shut down cleanly, and marked offline. The database cannot be modified while it is
offline.
ONLINE
The database is open and available for use.
EMERGENCY
The database is marked READ_ONLY, logging is disabled, and access is limited to members of the sysadmin
fixed server role. EMERGENCY is primarily used for troubleshooting purposes. For example, a database marked
as suspect due to a corrupted log file can be set to the EMERGENCY state. This can give the system
administrator read-only access to the database. Only members of the sysadmin fixed server role can set a
database to the EMERGENCY state.

NOTE
Permissions: ALTER DATABASE permission for the subject database is required to change a database to the offline or
emergency state. The server level ALTER ANY DATABASE permission is required to move a database from offline to online.

The status of this option can be determined by examining the state and state_desc columns in the sys.databases
catalog view or the Status property of the DATABASEPROPERTYEX function. For more information, see
Database States.
A database marked as RESTORING cannot be set to OFFLINE, ONLINE, or EMERGENCY. A database may be
in the RESTORING state during an active restore operation or when a restore operation of a database or log file
fails because of a corrupted backup file.
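For example, a hedged sketch (the database name is hypothetical) that takes a database offline, rolling back open transactions immediately:

ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;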
<db_update_option> ::=
Controls whether updates are allowed on the database.
READ_ONLY
Users can read data from the database but not modify it.

NOTE
To improve query performance, update statistics before setting a database to READ_ONLY. If additional statistics are
needed after a database is set to READ_ONLY, the Database Engine will create statistics in tempdb. For more information
about statistics for a read-only database, see Statistics.

READ_WRITE
The database is available for read and write operations.
To change this state, you must have exclusive access to the database. For more information, see the
SINGLE_USER clause.

NOTE
On SQL Database federated databases, SET { READ_ONLY | READ_WRITE } is disabled.
<db_user_access_option> ::=
Controls user access to the database.
SINGLE_USER
Applies to: SQL Server. Not available in SQL Database.
Specifies that only one user at a time can access the database. If SINGLE_USER is specified and there are other
users connected to the database the ALTER DATABASE statement will be blocked until all users disconnect from
the specified database. To override this behavior, see the WITH <termination> clause.
The database remains in SINGLE_USER mode even if the user that set the option logs off. At that point, a
different user, but only one, can connect to the database.
Before you set the database to SINGLE_USER, verify the AUTO_UPDATE_STATISTICS_ASYNC option is set to
OFF. When set to ON, the background thread used to update statistics takes a connection against the database,
and you will be unable to access the database in single-user mode. To view the status of this option, query the
is_auto_update_stats_async_on column in the sys.databases catalog view. If the option is set to ON, perform the
following tasks:
1. Set AUTO_UPDATE_STATISTICS_ASYNC to OFF.
2. Check for active asynchronous statistics jobs by querying the sys.dm_exec_background_job_queue
dynamic management view.
If there are active jobs, either allow the jobs to complete or manually terminate them by using KILL
STATS JOB.
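With those checks done, a minimal sketch (the database name is hypothetical) that switches to single-user mode and rolls back other connections immediately, using the WITH <termination> clause described later in this topic:

ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;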
RESTRICTED_USER
RESTRICTED_USER allows only members of the db_owner fixed database role and the dbcreator and sysadmin
fixed server roles to connect to the database, but does not limit their number. All connections to the database are
disconnected in the timeframe specified by the termination clause of the ALTER DATABASE statement. After the
database has transitioned to the RESTRICTED_USER state, connection attempts by unqualified users are
refused.
MULTI_USER
All users that have the appropriate permissions to connect to the database are allowed.
The status of this option can be determined by examining the user_access column in the sys.databases catalog
view or the UserAccess property of the DATABASEPROPERTYEX function.
<delayed_durability_option> ::=
Applies to: SQL Server 2014 (12.x) through SQL Server 2017, SQL Database.
Controls whether transactions commit fully durable or delayed durable.
DISABLED
All transactions following SET DISABLED are fully durable. Any durability options set in an atomic block or
commit statement are ignored.
ALLOWED
All transactions following SET ALLOWED are either fully durable or delayed durable, depending upon the
durability option set in the atomic block or commit statement.
FORCED
All transactions following SET FORCED are delayed durable. Any durability options set in an atomic block or
commit statement are ignored.
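For example (the database name is hypothetical), allowing individual transactions to opt in to delayed durability:

ALTER DATABASE MyDatabase SET DELAYED_DURABILITY = ALLOWED;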
<external_access_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls whether the database can be accessed by external resources, such as objects from another database.
DB_CHAINING { ON | OFF }
ON
Database can be the source or target of a cross-database ownership chain.
OFF
Database cannot participate in cross-database ownership chaining.

IMPORTANT
The instance of SQL Server will recognize this setting when the cross db ownership chaining server option is 0 (OFF). When
cross db ownership chaining is 1 (ON), all user databases can participate in cross-database ownership chains, regardless of
the value of this option. This option is set by using sp_configure.

To set this option, requires CONTROL SERVER permission on the database.


The DB_CHAINING option cannot be set on these system databases: master, model, and tempdb.
The status of this option can be determined by examining the is_db_chaining_on column in the sys.databases
catalog view.
TRUSTWORTHY { ON | OFF }
ON
Database modules (for example, user-defined functions or stored procedures) that use an impersonation context
can access resources outside the database.
OFF
Database modules in an impersonation context cannot access resources outside the database.
TRUSTWORTHY is set to OFF whenever the database is attached.
By default, all system databases except the msdb database have TRUSTWORTHY set to OFF. The value cannot
be changed for the model and tempdb databases. We recommend that you never set the TRUSTWORTHY
option to ON for the master database.
To set this option, requires CONTROL SERVER permission on the database.
The status of this option can be determined by examining the is_trustworthy_on column in the sys.databases
catalog view.
DEFAULT_FULLTEXT_LANGUAGE
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the default language value for full-text indexed columns.

IMPORTANT
This option is allowable only when CONTAINMENT has been set to PARTIAL. If CONTAINMENT is set to NONE, errors will
occur.

DEFAULT_LANGUAGE
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the default language for all newly created logins. Language can be specified by providing the locale id
(lcid), the language name, or the language alias. For a list of acceptable language names and aliases, see
sys.syslanguages (Transact-SQL ). This option is allowable only when CONTAINMENT has been set to PARTIAL.
If CONTAINMENT is set to NONE, errors will occur.
NESTED_TRIGGERS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies whether an AFTER trigger can cascade; that is, perform an action that initiates another trigger, which
initiates another trigger, and so on. This option is allowable only when CONTAINMENT has been set to
PARTIAL. If CONTAINMENT is set to NONE, errors will occur.
TRANSFORM_NOISE_WORDS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Used to suppress an error message if noise words, or stopwords, cause a Boolean operation on a full-text query
to fail. This option is allowable only when CONTAINMENT has been set to PARTIAL. If CONTAINMENT is set to
NONE, errors will occur.
TWO_DIGIT_YEAR_CUTOFF
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies an integer from 1753 to 9999 that represents the cutoff year for interpreting two-digit years as four-
digit years. This option is allowable only when CONTAINMENT has been set to PARTIAL. If CONTAINMENT is
set to NONE, errors will occur.
<FILESTREAM_option> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Controls the settings for FileTables.
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
OFF
Non-transactional access to FileTable data is disabled.
READ_ONLY
FILESTREAM data in FileTables in this database can be read by non-transactional processes.
FULL
Full non-transactional access to FILESTREAM data in FileTables is enabled.
DIRECTORY_NAME = <directory_name>
A Windows-compatible directory name. This name should be unique among all the database-level directory
names in the SQL Server instance. Uniqueness comparison is case-insensitive, regardless of collation settings.
This option must be set before creating a FileTable in this database.
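A hedged sketch combining both options (the database and directory names are hypothetical):

ALTER DATABASE MyDatabase
SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'MyFileTableDir');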
<HADR_options> ::=
Applies to: SQL Server. Not available in SQL Database.
See ALTER DATABASE SET HADR (Transact-SQL ).
<mixed_page_allocation_option> ::=
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version). Not available in SQL Database.
MIXED_PAGE_ALLOCATION { OFF | ON } controls whether the database can create initial pages using a mixed
extent for the first eight pages of a table or index.
OFF
The database always creates initial pages using uniform extents. This is the default value.
ON
The database can create initial pages using mixed extents.
This setting is ON for all system databases. tempdb is the only system database that supports OFF.
<PARAMETERIZATION_option> ::=
Controls the parameterization option.
PARAMETERIZATION { SIMPLE | FORCED }
SIMPLE
Queries are parameterized based on the default behavior of the database.
FORCED
SQL Server parameterizes all queries in the database.
The current setting of this option can be determined by examining the is_parameterization_forced column in the
sys.databases catalog view.
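For example, to force parameterization and then verify the setting:

ALTER DATABASE AdventureWorks2012 SET PARAMETERIZATION FORCED;
GO
SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO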
<query_store_options> ::=
Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server 2017), SQL Database.
ON | OFF | CLEAR [ ALL ]
Controls if the query store is enabled in this database, and also controls removing the contents of the query
store.
ON
Enables the query store.
OFF
Disables the query store. This is the default value.
CLEAR
Remove the contents of the query store.
OPERATION_MODE
Describes the operation mode of the query store. Valid values are READ_ONLY and READ_WRITE. In
READ_WRITE mode, the query store collects and persists query plan and runtime execution statistics
information. In READ_ONLY mode, information can be read from the query store, but new information is not
added. If the maximum allocated space of the query store has been exhausted, the query store will change its
operation mode to READ_ONLY.
CLEANUP_POLICY
Describes the data retention policy of the query store. STALE_QUERY_THRESHOLD_DAYS determines the
number of days for which the information for a query is retained in the query store.
STALE_QUERY_THRESHOLD_DAYS is type bigint.
DATA_FLUSH_INTERVAL_SECONDS
Determines the frequency at which data written to the query store is persisted to disk. To optimize for
performance, data collected by the query store is asynchronously written to the disk. The frequency at which this
asynchronous transfer occurs is configured by using the DATA_FLUSH_INTERVAL_SECONDS argument.
DATA_FLUSH_INTERVAL_SECONDS is type bigint.
MAX_STORAGE_SIZE_MB
Determines the space allocated to the query store. MAX_STORAGE_SIZE_MB is type bigint.
INTERVAL_LENGTH_MINUTES
Determines the time interval at which runtime execution statistics data is aggregated into the query store. To
optimize for space usage, the runtime execution statistics in the runtime stats store are aggregated over a fixed
time window. This fixed time window is configured by using the INTERVAL_LENGTH_MINUTES argument.
INTERVAL_LENGTH_MINUTES is type bigint.
SIZE_BASED_CLEANUP_MODE
Controls whether cleanup will be automatically activated when total amount of data gets close to maximum size:
OFF
Size based cleanup won’t be automatically activated.
AUTO
Size based cleanup will be automatically activated when size on disk reaches 90% of max_storage_size_mb.
Size based cleanup removes the least expensive and oldest queries first. It stops at approximately 80% of
max_storage_size_mb. This is the default configuration value.
SIZE_BASED_CLEANUP_MODE is type nvarchar.
QUERY_CAPTURE_MODE
Designates the currently active query capture mode:
ALL - All queries are captured. This is the default configuration value for SQL Server 2016 (13.x).
AUTO - Capture relevant queries based on execution count and resource consumption. This is the default
configuration value for SQL Database.
NONE - Stop capturing new queries. Query Store continues to collect compile and runtime statistics for queries
that were captured already. Use this configuration with caution, because you may miss capturing important
queries.
QUERY_CAPTURE_MODE is type nvarchar.
MAX_PLANS_PER_QUERY
An integer representing the maximum number of plans maintained for each query. Default is 200.
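The capture and cleanup arguments can be combined in one statement; the values below are illustrative, not
recommendations:

ALTER DATABASE AdventureWorks2012
SET QUERY_STORE ( QUERY_CAPTURE_MODE = AUTO,
                  SIZE_BASED_CLEANUP_MODE = AUTO,
                  MAX_PLANS_PER_QUERY = 200 );
GO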
<recovery_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls database recovery options and disk I/O error checking.
FULL
Provides full recovery after media failure by using transaction log backups. If a data file is damaged, media
recovery can restore all committed transactions. For more information, see Recovery Models (SQL Server).
BULK_LOGGED
Provides recovery after media failure by combining the best performance and least amount of log-space use for
certain large-scale or bulk operations. For information about what operations can be minimally logged, see The
Transaction Log (SQL Server). Under the BULK_LOGGED recovery model, logging for these operations is
minimal. For more information, see Recovery Models (SQL Server).
SIMPLE
A simple backup strategy that uses minimal log space is provided. Log space can be automatically reused when
it is no longer required for server failure recovery. For more information, see Recovery Models (SQL Server).
IMPORTANT
The simple recovery model is easier to manage than the other two models but at the expense of greater data loss
exposure if a data file is damaged. All changes since the most recent database or differential database backup are lost and
must be manually reentered.

The default recovery model is determined by the recovery model of the model database. For more information
about selecting the appropriate recovery model, see Recovery Models (SQL Server).
The status of this option can be determined by examining the recovery_model and recovery_model_desc
columns in the sys.databases catalog view or the Recovery property of the DATABASEPROPERTYEX function.
TORN_PAGE_DETECTION { ON | OFF }
ON
Incomplete pages can be detected by the Database Engine.
OFF
Incomplete pages cannot be detected by the Database Engine.

IMPORTANT
The syntax structure TORN_PAGE_DETECTION ON | OFF will be removed in a future version of SQL Server. Avoid using this
syntax structure in new development work, and plan to modify applications that currently use the syntax structure. Use
the PAGE_VERIFY option instead.

PAGE_VERIFY { CHECKSUM | TORN_PAGE_DETECTION | NONE }


Discovers damaged database pages caused by disk I/O path errors. Disk I/O path errors can be the cause of
database corruption problems and are generally caused by power failures or disk hardware failures that occur at
the time the page is being written to disk.
CHECKSUM
Calculates a checksum over the contents of the whole page and stores the value in the page header when a page
is written to disk. When the page is read from disk, the checksum is recomputed and compared to the checksum
value stored in the page header. If the values do not match, error message 824 (indicating a checksum failure) is
reported to both the SQL Server error log and the Windows event log. A checksum failure indicates an I/O path
problem. To determine the root cause requires investigation of the hardware, firmware drivers, BIOS, filter
drivers (such as virus software), and other I/O path components.
TORN_PAGE_DETECTION
Saves a specific 2-bit pattern for each 512-byte sector in the 8-kilobyte (KB) database page and stores it in the
database page header when the page is written to disk. When the page is read from disk, the torn bits stored in
the page header are compared to the actual page sector information. Unmatched values indicate that only part
of the page was written to disk. In this situation, error message 824 (indicating a torn page error) is reported to
both the SQL Server error log and the Windows event log. Torn pages are typically detected by database
recovery if it is truly an incomplete write of a page. However, other I/O path failures can cause a torn page at any
time.
NONE
Database page writes will not generate a CHECKSUM or TORN_PAGE_DETECTION value. SQL Server will not
verify a checksum or torn page during a read even if a CHECKSUM or TORN_PAGE_DETECTION value is
present in the page header.
Consider the following important points when you use the PAGE_VERIFY option:
The default is CHECKSUM.
When a user or system database is upgraded to SQL Server 2005 or a later version, the PAGE_VERIFY
value (NONE or TORN_PAGE_DETECTION ) is retained. We recommend that you use CHECKSUM.

NOTE
In earlier versions of SQL Server, the PAGE_VERIFY database option is set to NONE for the tempdb database and
cannot be modified. In SQL Server 2008 and later versions, the default value for the tempdb database is
CHECKSUM for new installations of SQL Server. When upgrading an installation of SQL Server, the default value
remains NONE. The option can be modified. We recommend that you use CHECKSUM for the tempdb database.

TORN_PAGE_DETECTION may use fewer resources but provides a minimal subset of the CHECKSUM
protection.
PAGE_VERIFY can be set without taking the database offline, locking the database, or otherwise impeding
concurrency on that database.
CHECKSUM is mutually exclusive to TORN_PAGE_DETECTION. Both options cannot be enabled at the
same time.
When a torn page or checksum failure is detected, you can recover by restoring the data or potentially
rebuilding the index if the failure is limited only to index pages. If you encounter a checksum failure, to
determine the type of database page or pages affected, run DBCC CHECKDB. For more information
about restore options, see RESTORE Arguments (Transact-SQL ). Although restoring the data will resolve
the data corruption problem, the root cause, for example, disk hardware failure, should be diagnosed and
corrected as soon as possible to prevent continuing errors.
SQL Server will retry any read that fails with a checksum, torn page, or other I/O error four times. If the
read is successful in any one of the retry attempts, a message will be written to the error log and the
command that triggered the read will continue. If the retry attempts fail, the command will fail with error
message 824.
For more information about error messages 823, 824 and 825, see How to troubleshoot a Msg 823 error
in SQL Server, How to troubleshoot Msg 824 in SQL Server and How to troubleshoot Msg 825 (read
retry) in SQL Server.
The current setting of this option can be determined by examining the page_verify_option column in the
sys.databases catalog view or the IsTornPageDetectionEnabled property of the DATABASEPROPERTYEX
function.
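For example, to move a database to the recommended CHECKSUM setting and confirm the result:

ALTER DATABASE AdventureWorks2012 SET PAGE_VERIFY CHECKSUM;
GO
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO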
<remote_data_archive_option> ::=
Applies to: SQL Server 2016 (13.x) through SQL Server 2017. Not available in SQL Database.
Enables or disables Stretch Database for the database. For more info, see Stretch Database.
REMOTE_DATA_ARCHIVE = { ON ( SERVER = <server_name> , { CREDENTIAL = <db_scoped_credential_name> | FEDERATED_SERVICE_ACCOUNT = ON | OFF } ) | OFF }
ON
Enables Stretch Database for the database. For more info, including additional prerequisites, see Enable Stretch
Database for a database.
Permissions. Enabling Stretch Database for a database or a table requires db_owner permissions. Enabling
Stretch Database for a database also requires CONTROL DATABASE permissions.
SERVER = <server_name>
Specifies the address of the Azure server. Include the .database.windows.net portion of the name. For example,
MyStretchDatabaseServer.database.windows.net .

CREDENTIAL = <db_scoped_credential_name>
Specifies the database scoped credential that the instance of SQL Server uses to connect to the Azure server.
Make sure the credential exists before you run this command. For more info, see CREATE DATABASE SCOPED
CREDENTIAL (Transact-SQL ).
FEDERATED_SERVICE_ACCOUNT = ON | OFF
You can use a federated service account for the on-premises SQL Server to communicate with the remote Azure
server when the following conditions are all true.
The service account under which the instance of SQL Server is running is a domain account.
The domain account belongs to a domain whose Active Directory is federated with Azure Active
Directory.
The remote Azure server is configured to support Azure Active Directory authentication.
The service account under which the instance of SQL Server is running must be configured as a
dbmanager or sysadmin account on the remote Azure server.
If you specify ON, you can't also specify the CREDENTIAL argument. If you specify OFF, you have to
provide the CREDENTIAL argument.
OFF
Disables Stretch Database for the database. For more info, see Disable Stretch Database and bring back
remote data.
You can only disable Stretch Database for a database after the database no longer contains any tables that
are enabled for Stretch Database. After you disable Stretch Database, data migration stops and query
results no longer include results from remote tables.
Disabling Stretch does not remove the remote database. If you want to delete the remote database, you
have to drop it by using the Azure management portal.
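A hedged sketch of enabling Stretch Database; the server name and database scoped credential below are
placeholders, and the credential must already exist:

ALTER DATABASE AdventureWorks2012
SET REMOTE_DATA_ARCHIVE = ON
    ( SERVER = 'MyStretchDatabaseServer.database.windows.net',
      CREDENTIAL = MyStretchCredential );
GO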
<service_broker_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls the following Service Broker options: enables or disables message delivery, sets a new Service Broker
identifier, or sets conversation priorities to ON or OFF.
ENABLE_BROKER
Specifies that Service Broker is enabled for the specified database. Message delivery is started, and the
is_broker_enabled flag is set to true in the sys.databases catalog view. The database retains the existing Service
Broker identifier. Service Broker cannot be enabled while the database is the principal in a database mirroring
configuration.

NOTE
ENABLE_BROKER requires an exclusive database lock. If other sessions have locked resources in the database,
ENABLE_BROKER will wait until the other sessions release their locks. To enable Service Broker in a user database, ensure
that no other sessions are using the database before you run the ALTER DATABASE SET ENABLE_BROKER statement, such
as by putting the database in single user mode. To enable Service Broker in the msdb database, first stop SQL Server
Agent so that Service Broker can obtain the necessary lock.

DISABLE_BROKER
Specifies that Service Broker is disabled for the specified database. Message delivery is stopped, and the
is_broker_enabled flag is set to false in the sys.databases catalog view. The database retains the existing Service
Broker identifier.
NEW_BROKER
Specifies that the database should receive a new broker identifier. Because the database is considered to be a
new service broker, all existing conversations in the database are immediately removed without producing end
dialog messages. Any route that references the old Service Broker identifier must be re-created with the new
identifier.
ERROR_BROKER_CONVERSATIONS
Specifies that Service Broker message delivery is enabled. This preserves the existing Service Broker identifier
for the database. Service Broker ends all conversations in the database with an error. This enables applications to
perform regular cleanup for existing conversations.
HONOR_BROKER_PRIORITY {ON | OFF }
ON
Send operations take into consideration the priority levels that are assigned to conversations. Messages from
conversations that have high priority levels are sent before messages from conversations that are assigned low
priority levels.
OFF
Send operations run as if all conversations have the default priority level.
Changes to the HONOR_BROKER_PRIORITY option take effect immediately for new dialogs or dialogs that
have no messages waiting to be sent. Dialogs that have messages waiting to be sent when ALTER DATABASE is
run will not pick up the new setting until some of the messages for the dialog have been sent. The amount of
time before all dialogs start using the new setting can vary considerably.
The current setting of this property is reported in the is_broker_priority_honored column in the sys.databases
catalog view.
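Following the note above about the exclusive database lock, one common pattern (shown here as a sketch) is to
enable Service Broker while the database is in single-user mode:

USE master;
GO
ALTER DATABASE AdventureWorks2012 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE AdventureWorks2012 SET ENABLE_BROKER;
ALTER DATABASE AdventureWorks2012 SET MULTI_USER;
GO
SELECT name, is_broker_enabled, service_broker_guid
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO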
<snapshot_option> ::=
Determines the transaction isolation level.
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
ON
Enables Snapshot option at the database level. When it is enabled, DML statements start generating row
versions even when no transaction uses Snapshot Isolation. Once this option is enabled, transactions can specify
the SNAPSHOT transaction isolation level. When a transaction runs at the SNAPSHOT isolation level, all
statements see a snapshot of data as it exists at the start of the transaction. If a transaction running at the
SNAPSHOT isolation level accesses data in multiple databases, either ALLOW_SNAPSHOT_ISOLATION must
be set to ON in all the databases, or each statement in the transaction must use locking hints on any reference in
a FROM clause to a table in a database where ALLOW_SNAPSHOT_ISOLATION is OFF.
OFF
Turns off the Snapshot option at the database level. Transactions cannot specify the SNAPSHOT transaction
isolation level.
When you set ALLOW_SNAPSHOT_ISOLATION to a new state (from ON to OFF, or from OFF to ON), ALTER
DATABASE does not return control to the caller until all existing transactions in the database are committed. If
the database is already in the state specified in the ALTER DATABASE statement, control is returned to the caller
immediately. If the ALTER DATABASE statement does not return quickly, use
sys.dm_tran_active_snapshot_database_transactions to determine whether there are long-running transactions.
If the ALTER DATABASE statement is canceled, the database remains in the state it was in when ALTER
DATABASE was started. The sys.databases catalog view indicates the state of snapshot-isolation transactions in
the database. If snapshot_isolation_state_desc = IN_TRANSITION_TO_ON, ALTER DATABASE
ALLOW_SNAPSHOT_ISOLATION OFF will pause six seconds and retry the operation.
You cannot change the state of ALLOW_SNAPSHOT_ISOLATION if the database is OFFLINE.
If you set ALLOW_SNAPSHOT_ISOLATION in a READ_ONLY database, the setting will be retained if the
database is later set to READ_WRITE.
You can change the ALLOW_SNAPSHOT_ISOLATION settings for the master, model, msdb, and tempdb
databases. If you change the setting for tempdb, the setting is retained every time the instance of the Database
Engine is stopped and restarted. If you change the setting for model, that setting becomes the default for any
new databases that are created, except for tempdb.
The option is ON, by default, for the master and msdb databases.
The current setting of this option can be determined by examining the snapshot_isolation_state column in the
sys.databases catalog view.
READ_COMMITTED_SNAPSHOT { ON | OFF }
ON
Enables Read-Committed Snapshot option at the database level. When it is enabled, DML statements start
generating row versions even when no transaction uses Snapshot Isolation. Once this option is enabled, the
transactions specifying the read committed isolation level use row versioning instead of locking. When a
transaction runs at the read committed isolation level, all statements see a snapshot of data as it exists at the
start of the statement.
OFF
Turns off Read-Committed Snapshot option at the database level. Transactions specifying the READ
COMMITTED isolation level use locking.
To set READ_COMMITTED_SNAPSHOT ON or OFF, there must be no active connections to the database
except for the connection executing the ALTER DATABASE command. However, the database does not have to
be in single-user mode. You cannot change the state of this option when the database is OFFLINE.
If you set READ_COMMITTED_SNAPSHOT in a READ_ONLY database, the setting will be retained when the
database is later set to READ_WRITE.
READ_COMMITTED_SNAPSHOT cannot be turned ON for the master, tempdb, or msdb system databases. If
you change the setting for model, that setting becomes the default for any new databases created, except for
tempdb.
The current setting of this option can be determined by examining the is_read_committed_snapshot_on column
in the sys.databases catalog view.

WARNING
When a table is created with DURABILITY = SCHEMA_ONLY, and READ_COMMITTED_SNAPSHOT is subsequently
changed using ALTER DATABASE, data in the table will be lost.
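For example, READ_COMMITTED_SNAPSHOT can be enabled while forcibly disconnecting other sessions by
using the termination clause, and the change can then be verified:

ALTER DATABASE AdventureWorks2012
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
GO
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO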

MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT { ON | OFF }
Applies to: SQL Server 2014 (12.x) through SQL Server 2017, SQL Database.
ON
When the transaction isolation level is set to any isolation level lower than SNAPSHOT (for example, READ
COMMITTED or READ UNCOMMITTED ), all interpreted Transact-SQL operations on memory-optimized
tables are performed under SNAPSHOT isolation. This is done regardless of whether the transaction isolation
level is set explicitly at the session level, or the default is used implicitly.
OFF
Does not elevate the transaction isolation level for interpreted Transact-SQL operations on memory-optimized
tables.
You cannot change the state of MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT if the database is OFFLINE.
The option is OFF, by default.
The current setting of this option can be determined by examining the
is_memory_optimized_elevate_to_snapshot_on column in the sys.databases (Transact-SQL ) catalog view.
<sql_option> ::=
Controls the ANSI compliance options at the database level.
ANSI_NULL_DEFAULT { ON | OFF }
Determines the default value, NULL or NOT NULL, of a column or CLR user-defined type for which the
nullability is not explicitly defined in CREATE TABLE or ALTER TABLE statements. Columns that are defined with
constraints follow constraint rules regardless of this setting.
ON
The default value is NULL.
OFF
The default value is NOT NULL.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_NULL_DEFAULT. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULL_DEFAULT to ON for the session when connecting to an instance of SQL Server. For more
information, see SET ANSI_NULL_DFLT_ON (Transact-SQL ).
For ANSI compatibility, setting the database option ANSI_NULL_DEFAULT to ON changes the database default
to NULL.
The status of this option can be determined by examining the is_ansi_null_default_on column in the
sys.databases catalog view or the IsAnsiNullDefault property of the DATABASEPROPERTYEX function.
ANSI_NULLS { ON | OFF }
ON
All comparisons to a null value evaluate to UNKNOWN.
OFF
Comparisons of non-UNICODE values to a null value evaluate to TRUE if both values are NULL.

IMPORTANT
In a future version of SQL Server, ANSI_NULLS will always be ON and any applications that explicitly set the option to OFF
will produce an error. Avoid using this feature in new development work, and plan to modify applications that currently use
this feature.

Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_NULLS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULLS to ON for the session when connecting to an instance of SQL Server. For more information, see
SET ANSI_NULLS (Transact-SQL ).
SET ANSI_NULLS also must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
The status of this option can be determined by examining the is_ansi_nulls_on column in the sys.databases
catalog view or the IsAnsiNullsEnabled property of the DATABASEPROPERTYEX function.
ANSI_PADDING { ON | OFF }
ON
Strings are padded to the same length before conversion or inserting to a varchar or nvarchar data type.
Trailing blanks in character values inserted into varchar or nvarchar columns and trailing zeros in binary values
inserted into varbinary columns are not trimmed. Values are not padded to the length of the column.
OFF
Trailing blanks for varchar or nvarchar and zeros for varbinary are trimmed.
When OFF is specified, this setting affects only the definition of new columns.

IMPORTANT
In a future version of SQL Server, ANSI_PADDING will always be ON and any applications that explicitly set the option to
OFF will produce an error. Avoid using this feature in new development work, and plan to modify applications that
currently use this feature. We recommend that you always set ANSI_PADDING to ON. ANSI_PADDING must be ON when
you create or manipulate indexes on computed columns or indexed views.

char(n) and binary(n) columns that allow for nulls are padded to the length of the column when
ANSI_PADDING is set to ON, but trailing blanks and zeros are trimmed when ANSI_PADDING is OFF. char(n)
and binary(n) columns that do not allow nulls are always padded to the length of the column.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_PADDING. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_PADDING to ON for the session when connecting to an instance of SQL Server. For more information,
see SET ANSI_PADDING (Transact-SQL ).
The status of this option can be determined by examining the is_ansi_padding_on column in the sys.databases
catalog view or the IsAnsiPaddingEnabled property of the DATABASEPROPERTYEX function.
ANSI_WARNINGS { ON | OFF }
ON
Errors or warnings are issued when conditions such as divide-by-zero occur or null values appear in aggregate
functions.
OFF
No warnings are raised and null values are returned when conditions such as divide-by-zero occur.
SET ANSI_WARNINGS must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_WARNINGS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_WARNINGS to ON for the session when connecting to an instance of SQL Server. For more information,
see SET ANSI_WARNINGS (Transact-SQL ).
The status of this option can be determined by examining the is_ansi_warnings_on column in the sys.databases
catalog view or the IsAnsiWarningsEnabled property of the DATABASEPROPERTYEX function.
ARITHABORT { ON | OFF }
ON
A query is ended when an overflow or divide-by-zero error occurs during query execution.
OFF
A warning message is displayed when one of these errors occurs, but the query, batch, or transaction continues
to process as if no error occurred.
SET ARITHABORT must be set to ON when you create or make changes to indexes on computed columns or
indexed views.
The status of this option can be determined by examining the is_arithabort_on column in the sys.databases
catalog view or the IsArithmeticAbortEnabled property of the DATABASEPROPERTYEX function.
COMPATIBILITY_LEVEL = { 90 | 100 | 110 | 120 | 130 | 140 }
For more information, see ALTER DATABASE Compatibility Level (Transact-SQL ).
CONCAT_NULL_YIELDS_NULL { ON | OFF }
ON
The result of a concatenation operation is NULL when either operand is NULL. For example, concatenating the
character string "This is" and NULL causes the value NULL, instead of the value "This is".
OFF
The null value is treated as an empty character string.
CONCAT_NULL_YIELDS_NULL must be set to ON when you create or make changes to indexes on computed
columns or indexed views.

IMPORTANT
In a future version of SQL Server, CONCAT_NULL_YIELDS_NULL will always be ON and any applications that explicitly set
the option to OFF will produce an error. Avoid using this feature in new development work, and plan to modify
applications that currently use this feature.

Connection-level settings that are set by using the SET statement override the default database setting for
CONCAT_NULL_YIELDS_NULL. By default, ODBC and OLE DB clients issue a connection-level SET statement
setting CONCAT_NULL_YIELDS_NULL to ON for the session when connecting to an instance of SQL Server.
For more information, see SET CONCAT_NULL_YIELDS_NULL (Transact-SQL ).
The status of this option can be determined by examining the is_concat_null_yields_null_on column in the
sys.databases catalog view or the IsNullConcat property of the DATABASEPROPERTYEX function.
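The behavior is easy to observe with the session-level option of the same name, which overrides the database
setting:

SET CONCAT_NULL_YIELDS_NULL ON;
SELECT 'This is ' + NULL;   -- returns NULL
SET CONCAT_NULL_YIELDS_NULL OFF;
SELECT 'This is ' + NULL;   -- returns 'This is '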
QUOTED_IDENTIFIER { ON | OFF }
ON
Double quotation marks can be used to enclose delimited identifiers.
All strings delimited by double quotation marks are interpreted as object identifiers. Quoted identifiers do not
have to follow the Transact-SQL rules for identifiers. They can be keywords and can include characters not
generally allowed in Transact-SQL identifiers. If a single quotation mark (') is part of the literal string, it can be
represented by two single quotation marks ('').
OFF
Identifiers cannot be in quotation marks and must follow all Transact-SQL rules for identifiers. Literals can be
delimited by either single or double quotation marks.
SQL Server also allows for identifiers to be delimited by square brackets ([ ]). Bracketed identifiers can always be
used, regardless of the setting of QUOTED_IDENTIFIER. For more information, see Database Identifiers.
When a table is created, the QUOTED IDENTIFIER option is always stored as ON in the metadata of the table,
even if the option is set to OFF when the table is created.
Connection-level settings that are set by using the SET statement override the default database setting for
QUOTED_IDENTIFIER. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
QUOTED_IDENTIFIER to ON when connecting to an instance of SQL Server. For more information, see SET
QUOTED_IDENTIFIER (Transact-SQL ).
The status of this option can be determined by examining the is_quoted_identifier_on column in the
sys.databases catalog view or the IsQuotedIdentifiersEnabled property of the DATABASEPROPERTYEX
function.
NUMERIC_ROUNDABORT { ON | OFF }
ON
An error is generated when loss of precision occurs in an expression.
OFF
Losses of precision do not generate error messages and the result is rounded to the precision of the column or
variable storing the result.
NUMERIC_ROUNDABORT must be set to OFF when you create or make changes to indexes on computed
columns or indexed views.
The status of this option can be determined by examining the is_numeric_roundabort_on column in the
sys.databases catalog view or the IsNumericRoundAbortEnabled property of the DATABASEPROPERTYEX
function.
RECURSIVE_TRIGGERS { ON | OFF }
ON
Recursive firing of AFTER triggers is allowed.
OFF
Direct recursive firing of AFTER triggers is not allowed. To also disable indirect recursion of AFTER triggers,
set the nested triggers server option to 0 by using sp_configure.

NOTE
Only direct recursion is prevented when RECURSIVE_TRIGGERS is set to OFF. To disable indirect recursion, you must also
set the nested triggers server option to 0.

The status of this option can be determined by examining the is_recursive_triggers_on column in the
sys.databases catalog view or the IsRecursiveTriggersEnabled property of the DATABASEPROPERTYEX
function.
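To disable both direct and indirect recursion of AFTER triggers, combine the database option with the
server-level nested triggers setting, as the note describes:

ALTER DATABASE AdventureWorks2012 SET RECURSIVE_TRIGGERS OFF;
GO
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
GO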
<target_recovery_time_option> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017. Not available in SQL Database.
Specifies the frequency of indirect checkpoints on a per-database basis. Beginning with SQL Server 2016 (13.x),
the default value for new databases is 1 minute, which indicates that the database will use indirect checkpoints. For older
versions the default is 0, which indicates that the database will use automatic checkpoints, whose frequency
depends on the recovery interval setting of the server instance. Microsoft recommends 1 minute for most
systems.
TARGET_RECOVERY_TIME = target_recovery_time { SECONDS | MINUTES }
target_recovery_time
Specifies the maximum bound on the time to recover the specified database in the event of a crash.
SECONDS
Indicates that target_recovery_time is expressed as the number of seconds.
MINUTES
Indicates that target_recovery_time is expressed as the number of minutes.
For more information about indirect checkpoints, see Database Checkpoints (SQL Server).
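For example, to set the recommended 1-minute target for indirect checkpoints:

ALTER DATABASE AdventureWorks2012 SET TARGET_RECOVERY_TIME = 60 SECONDS;
GO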
WITH <termination> ::=
Specifies when to roll back incomplete transactions when the database is transitioned from one state to another.
If the termination clause is omitted, the ALTER DATABASE statement waits indefinitely if there is any lock on the
database. Only one termination clause can be specified, and it follows the SET clauses.

NOTE
Not all database options use the WITH <termination> clause. For more information, see the table under "Setting
Options" in the "Remarks" section of this topic.

ROLLBACK AFTER integer [ SECONDS ] | ROLLBACK IMMEDIATE
Specifies whether to roll back after the specified number of seconds or immediately.
NO_WAIT
Specifies that if the requested database state or option change cannot complete immediately without waiting for
transactions to commit or roll back on their own, the request will fail.

Setting Options
To retrieve current settings for database options, use the sys.databases catalog view or the DATABASEPROPERTYEX function.
After you set a database option, the modification takes effect immediately.
To change the default values for any one of the database options for all newly created databases, change the
appropriate database option in the model database.
Not all database options use the WITH <termination> clause or can be specified in combination with other
options. The following table lists these options and their option and termination status.

OPTIONS CATEGORY                       | CAN BE SPECIFIED WITH OTHER OPTIONS | CAN USE THE WITH <TERMINATION> CLAUSE
<db_state_option>                      | Yes                                 | Yes
<db_user_access_option>                | Yes                                 | Yes
<db_update_option>                     | Yes                                 | Yes
<delayed_durability_option>            | Yes                                 | Yes
<external_access_option>               | Yes                                 | No
<cursor_option>                        | Yes                                 | No
<auto_option>                          | Yes                                 | No
<sql_option>                           | Yes                                 | No
<recovery_option>                      | Yes                                 | No
<target_recovery_time_option>          | No                                  | Yes
<database_mirroring_option>            | No                                  | No
ALLOW_SNAPSHOT_ISOLATION               | No                                  | No
READ_COMMITTED_SNAPSHOT                | No                                  | Yes
MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT   | Yes                                 | Yes
<service_broker_option>                | Yes                                 | No
DATE_CORRELATION_OPTIMIZATION          | Yes                                 | Yes
<parameterization_option>              | Yes                                 | Yes
<change_tracking_option>               | Yes                                 | Yes
<db_encryption>                        | Yes                                 | No

The plan cache for the instance of SQL Server is cleared by setting one of the following options:

OFFLINE
ONLINE
MODIFY_NAME
COLLATE
READ_ONLY
READ_WRITE
MODIFY FILEGROUP DEFAULT
MODIFY FILEGROUP READ_WRITE
MODIFY FILEGROUP READ_ONLY

The procedure cache is also flushed in the following scenarios.


A database has the AUTO_CLOSE database option set to ON. When no user connection references or
uses the database, the background task tries to close and shut down the database automatically.
You run several queries against a database that has default options. Then, the database is dropped.
A database snapshot for a source database is dropped.
You successfully rebuild the transaction log for a database.
You restore a database backup.
You detach a database.
Clearing the plan cache causes a recompilation of all subsequent execution plans and can cause a sudden,
temporary decrease in query performance. For each cleared cachestore in the plan cache, the SQL Server
error log contains the following informational message: " SQL Server has encountered %d occurrence(s)
of cachestore flush for the '%s' cachestore (part of plan cache) due to some database maintenance or
reconfigure operations". This message is logged every five minutes as long as the cache is flushed within
that time interval.

Examples
A. Setting options on a database
The following example sets the recovery model and data page verification options for the
AdventureWorks2012 sample database.

USE master;
GO
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL, PAGE_VERIFY CHECKSUM;
GO

B. Setting the database to READ_ONLY


Changing the state of a database or filegroup to READ_ONLY or READ_WRITE requires exclusive access to the
database. The following example sets the database to SINGLE_USER mode to obtain exclusive access. The example
then sets the state of the AdventureWorks2012 database to READ_ONLY and returns access to the database to
all users.

NOTE
This example uses the termination option WITH ROLLBACK IMMEDIATE in the first ALTER DATABASE statement. All
incomplete transactions will be rolled back and any other connections to the AdventureWorks2012 database will be
immediately disconnected.

USE master;
GO
ALTER DATABASE AdventureWorks2012
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE AdventureWorks2012
SET READ_ONLY
GO
ALTER DATABASE AdventureWorks2012
SET MULTI_USER;
GO

C. Enabling snapshot isolation on a database


The following example enables the snapshot isolation framework option for the AdventureWorks2012
database.

USE master;
GO
GO
ALTER DATABASE AdventureWorks2012
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Check the state of the snapshot_isolation_framework
-- in the database.
SELECT name, snapshot_isolation_state,
snapshot_isolation_state_desc AS description
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO

The result set shows that the snapshot isolation framework is enabled.
NAME                 SNAPSHOT_ISOLATION_STATE   DESCRIPTION
------------------   ------------------------   -----------
AdventureWorks2012   1                          ON

D. Enabling, modifying, and disabling change tracking


The following example enables change tracking for the AdventureWorks2012 database and sets the retention
period to 2 days.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = ON
(AUTO_CLEANUP = ON, CHANGE_RETENTION = 2 DAYS);

The following example shows how to change the retention period to 3 days.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING (CHANGE_RETENTION = 3 DAYS);

The following example shows how to disable change tracking for the AdventureWorks2012 database.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = OFF;

E. Enabling the query store


Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server 2017), SQL Database.
The following example enables the query store and configures query store parameters.

ALTER DATABASE AdventureWorks2012
SET QUERY_STORE = ON
(
OPERATION_MODE = READ_WRITE
, CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 90 )
, DATA_FLUSH_INTERVAL_SECONDS = 900
, MAX_STORAGE_SIZE_MB = 1024
, INTERVAL_LENGTH_MINUTES = 60
);

See Also
ALTER DATABASE Compatibility Level (Transact-SQL )
ALTER DATABASE Database Mirroring (Transact-SQL )
ALTER DATABASE SET HADR (Transact-SQL )
Statistics
CREATE DATABASE (SQL Server Transact-SQL )
Enable and Disable Change Tracking (SQL Server)
DATABASEPROPERTYEX (Transact-SQL )
DROP DATABASE (Transact-SQL )
SET TRANSACTION ISOLATION LEVEL (Transact-SQL )
sp_configure (Transact-SQL )
sys.databases (Transact-SQL )
sys.data_spaces (Transact-SQL )
Best Practice with the Query Store
ALTER ENDPOINT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Enables modifying an existing endpoint in the following ways:
By adding a new method to an existing endpoint.
By modifying or dropping an existing method from the endpoint.
By changing the properties of an endpoint.

NOTE
This topic describes the syntax and arguments that are specific to ALTER ENDPOINT. For descriptions of the arguments that
are common to both CREATE ENDPOINT and ALTER ENDPOINT, see CREATE ENDPOINT (Transact-SQL).

Native XML Web Services (SOAP/HTTP endpoints) is removed beginning in SQL Server 2012 (11.x).
Transact-SQL Syntax Conventions

Syntax
ALTER ENDPOINT endPointName [ AUTHORIZATION login ]
[ STATE = { STARTED | STOPPED | DISABLED } ]
[ AS { TCP } ( <protocol_specific_items> ) ]
[ FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING } (
<language_specific_items>
) ]

<AS TCP_protocol_specific_arguments> ::=


AS TCP (
LISTENER_PORT = listenerPort
[ [ , ] LISTENER_IP = ALL | ( 4-part-ip ) | ( "ip_address_v6" ) ]
)
<FOR SERVICE_BROKER_language_specific_arguments> ::=
FOR SERVICE_BROKER (
[ AUTHENTICATION = {
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
| CERTIFICATE certificate_name
| WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name
| CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
} ]
[ , ENCRYPTION = { DISABLED
    | { { SUPPORTED | REQUIRED }
        [ ALGORITHM { RC4 | AES | AES RC4 | RC4 AES } ] }
  } ]

[ , MESSAGE_FORWARDING = { ENABLED | DISABLED } ]
[ , MESSAGE_FORWARD_SIZE = forwardSize ]
)

<FOR DATABASE_MIRRORING_language_specific_arguments> ::=


FOR DATABASE_MIRRORING (
[ AUTHENTICATION = {
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
| CERTIFICATE certificate_name
| WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name
| CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
} ]
[ , ENCRYPTION = { DISABLED
    | { { SUPPORTED | REQUIRED }
        [ ALGORITHM { RC4 | AES | AES RC4 | RC4 AES } ] }
  } ]
[ , ] ROLE = { WITNESS | PARTNER | ALL }
)

Arguments
NOTE
The following arguments are specific to ALTER ENDPOINT. For descriptions of the remaining arguments, see CREATE
ENDPOINT (Transact-SQL).

AS { TCP }
You cannot change the transport protocol with ALTER ENDPOINT.
AUTHORIZATION login
The AUTHORIZATION option is not available in ALTER ENDPOINT. Ownership can only be assigned when
the endpoint is created.
FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING }
You cannot change the payload type with ALTER ENDPOINT.

Remarks
When you use ALTER ENDPOINT, specify only those parameters that you want to update. All properties of an
existing endpoint remain the same unless you explicitly change them.
The ENDPOINT DDL statements cannot be executed inside a user transaction.
For information on choosing an encryption algorithm for use with an endpoint, see Choose an Encryption
Algorithm.

NOTE
The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128
when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of the AES
algorithms instead. In SQL Server 2012 (11.x) and later versions, material encrypted using RC4 or RC4_128 can be decrypted
in any compatibility level.
RC4 is a relatively weak algorithm, and AES is a relatively strong algorithm. But AES is considerably slower than RC4. If
security is a higher priority for you than speed, we recommend you use AES.
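As a hedged sketch (the endpoint name and port are hypothetical), an existing database mirroring endpoint can be
stopped, reconfigured, and restarted; any property not specified keeps its current value:

ALTER ENDPOINT Mirroring_Endpoint
    STATE = STOPPED;
GO
ALTER ENDPOINT Mirroring_Endpoint
    AS TCP ( LISTENER_PORT = 7024 )
    FOR DATABASE_MIRRORING ( ROLE = ALL );
GO
ALTER ENDPOINT Mirroring_Endpoint
    STATE = STARTED;
GO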

Permissions
User must be a member of the sysadmin fixed server role, the owner of the endpoint, or have been granted
ALTER ANY ENDPOINT permission.
To change ownership of an existing endpoint, you must use the ALTER AUTHORIZATION statement. For more
information, see ALTER AUTHORIZATION (Transact-SQL ).
For more information, see GRANT Endpoint Permissions (Transact-SQL ).

See Also
DROP ENDPOINT (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER EVENT SESSION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Starts or stops an event session or changes an event session configuration.
Transact-SQL Syntax Conventions

Syntax
ALTER EVENT SESSION event_session_name
ON SERVER
{
[ [ { <add_drop_event> [ ,...n] }
| { <add_drop_event_target> [ ,...n ] } ]
[ WITH ( <event_session_options> [ ,...n ] ) ]
]
| [ STATE = { START | STOP } ]
}

<add_drop_event>::=
{
[ ADD EVENT <event_specifier>
[ ( {
[ SET { event_customizable_attribute = <value> [ ,...n ] } ]
[ ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n ] } ) ]
[ WHERE <predicate_expression> ]
} ) ]
]
| DROP EVENT <event_specifier> }

<event_specifier> ::=
{
[event_module_guid].event_package_name.event_name
}

<predicate_expression> ::=
{
[ NOT ] <predicate_factor> | {( <predicate_expression> ) }
[ { AND | OR } [ NOT ] { <predicate_factor> | ( <predicate_expression> ) } ]
[ ,...n ]
}

<predicate_factor>::=
{
<predicate_leaf> | ( <predicate_expression> )
}

<predicate_leaf>::=
{
<predicate_source_declaration> { = | <> | != | > | >= | < | <= } <value>
| [event_module_guid].event_package_name.predicate_compare_name ( <predicate_source_declaration>, <value>
)
}

<predicate_source_declaration>::=
{
event_field_name | ( [event_module_guid].event_package_name.predicate_source_name )
}

<value>::=
{
number | 'string'
}

<add_drop_event_target>::=
{
ADD TARGET <event_target_specifier>
[ ( SET { target_parameter_name = <value> [ ,...n] } ) ]
| DROP TARGET <event_target_specifier>
}

<event_target_specifier>::=
{
[event_module_guid].event_package_name.target_name
}

<event_session_options>::=
{
[ MAX_MEMORY = size [ KB | MB] ]
[ [,] EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS | ALLOW_MULTIPLE_EVENT_LOSS | NO_EVENT_LOSS } ]
[ [,] MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE } ]
[ [,] MAX_EVENT_SIZE = size [ KB | MB ] ]
[ [,] MEMORY_PARTITION_MODE = { NONE | PER_NODE | PER_CPU } ]
[ [,] TRACK_CAUSALITY = { ON | OFF } ]
[ [,] STARTUP_STATE = { ON | OFF } ]
}

Arguments

event_session_name
The name of an existing event session.

STATE = START | STOP
Starts or stops the event session. This argument is only valid when ALTER EVENT SESSION is applied to an
event session object.

ADD EVENT <event_specifier>
Associates the event identified by <event_specifier> with the event session.

[event_module_guid].event_package_name.event_name
The name of an event in an event package, where:
- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the action object.
- event_name is the event object.
Events appear in the sys.dm_xe_objects view as object_type 'event'.

SET { event_customizable_attribute = <value> [ ,...n ] }
Specifies customizable attributes for the event. Customizable attributes appear in the
sys.dm_xe_object_columns view as column_type 'customizable' and object_name = event_name.

ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n ] } )
The action to associate with the event session, where:
- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the action object.
- action_name is the action object.
Actions appear in the sys.dm_xe_objects view as object_type 'action'.

WHERE <predicate_expression>
Specifies the predicate expression used to determine whether an event should be processed. If
<predicate_expression> is true, the event is processed further by the actions and targets for the session. If
<predicate_expression> is false, the event is dropped by the session before being processed by the actions and
targets for the session. Predicate expressions are limited to 3000 characters, which limits string arguments.

event_field_name
The name of the event field that identifies the predicate source.

[event_module_guid].event_package_name.predicate_source_name
The name of the global predicate source, where:
- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the predicate object.
- predicate_source_name is defined in the sys.dm_xe_objects view as object_type 'pred_source'.

[event_module_guid].event_package_name.predicate_compare_name
The name of the predicate object to associate with the event, where:
- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the predicate object.
- predicate_compare_name is a global source defined in the sys.dm_xe_objects view as object_type
'pred_compare'.

DROP EVENT <event_specifier>
Drops the event identified by <event_specifier>. <event_specifier> must be valid in the event session.

ADD TARGET <event_target_specifier>
Associates the target identified by <event_target_specifier> with the event session.

[event_module_guid].event_package_name.target_name
The name of a target in the event session, where:
- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the action object.
- target_name is the target object. Targets appear in the sys.dm_xe_objects view as object_type 'target'.

SET { target_parameter_name = <value> [ ,...n ] }
Sets a target parameter. Target parameters appear in the sys.dm_xe_object_columns view as column_type
'customizable' and object_name = target_name.

NOTE
If you are using the ring buffer target, we recommend that you set the max_memory target parameter to 2048
kilobytes (KB) to help avoid possible data truncation of the XML output. For more information about when to use
the different target types, see SQL Server Extended Events Targets.

DROP TARGET <event_target_specifier>
Drops the target identified by <event_target_specifier>. <event_target_specifier> must be valid in the event
session.

EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS | ALLOW_MULTIPLE_EVENT_LOSS | NO_EVENT_LOSS }
Specifies the event retention mode to use for handling event loss.
ALLOW_SINGLE_EVENT_LOSS
An event can be lost from the session. A single event is only dropped when all the event buffers are full. Losing
a single event when event buffers are full allows for acceptable SQL Server performance characteristics, while
minimizing the loss of data in the processed event stream.
ALLOW_MULTIPLE_EVENT_LOSS
Full event buffers containing multiple events can be lost from the session. The number of events lost depends
on the memory size allocated to the session, the partitioning of the memory, and the size of the events in the
buffer. This option minimizes performance impact on the server when event buffers are quickly filled, but large
numbers of events can be lost from the session.
NO_EVENT_LOSS
No event loss is allowed. This option ensures that all events raised will be retained. Using this option forces all
tasks that fire events to wait until space is available in an event buffer. This may cause detectable performance
issues while the event session is active. User connections may stall while waiting for events to be flushed from
the buffer.

MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE }
Specifies the amount of time that events are buffered in memory before being dispatched to event session
targets. The minimum latency value is 1 second. However, 0 can be used to specify INFINITE latency. By default,
this value is set to 30 seconds.
seconds SECONDS
The time, in seconds, to wait before starting to flush buffers to targets. seconds is a whole number.
INFINITE
Flush buffers to targets only when the buffers are full, or when the event session closes.
NOTE: MAX_DISPATCH_LATENCY = 0 SECONDS is equivalent to MAX_DISPATCH_LATENCY = INFINITE.

MAX_EVENT_SIZE = size [ KB | MB ]
Specifies the maximum allowable size for events. MAX_EVENT_SIZE should only be set to allow single events
larger than MAX_MEMORY; setting it to less than MAX_MEMORY raises an error. size is a whole number and
can be a kilobyte (KB) or a megabyte (MB) value. If size is specified in kilobytes, the minimum allowable size is
64 KB. When MAX_EVENT_SIZE is set, two buffers of size are created in addition to MAX_MEMORY. This means
that the total memory used for event buffering is MAX_MEMORY + 2 * MAX_EVENT_SIZE.

MEMORY_PARTITION_MODE = { NONE | PER_NODE | PER_CPU }
Specifies the location where event buffers are created.
NONE
A single set of buffers is created within the SQL Server instance.
PER_NODE
A set of buffers is created for each NUMA node.
PER_CPU
A set of buffers is created for each CPU.

TRACK_CAUSALITY = { ON | OFF }
Specifies whether causality is tracked. If enabled, causality allows related events on different server connections
to be correlated together.

STARTUP_STATE = { ON | OFF }
Specifies whether to start this event session automatically when SQL Server starts. If STARTUP_STATE = ON,
the event session starts only after SQL Server is stopped and then restarted.
ON
The event session is started at startup.
OFF
The event session is not started at startup.

Remarks
The ADD and DROP arguments cannot be used in the same statement.

Permissions
Requires the ALTER ANY EVENT SESSION permission.

Examples
The following example starts an event session, obtains some live session statistics, and then adds two events to the
existing session.
-- Start the event session
ALTER EVENT SESSION test_session
ON SERVER
STATE = start;
GO
-- Obtain live session statistics
SELECT * FROM sys.dm_xe_sessions;
SELECT * FROM sys.dm_xe_session_events;
GO

-- Add new events to the session
ALTER EVENT SESSION test_session ON SERVER
ADD EVENT sqlserver.database_transaction_begin,
ADD EVENT sqlserver.database_transaction_end;
GO
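
A further sketch (the target file path is hypothetical) adds a filtered event with an action, and then an event_file
target, to the same session:

ALTER EVENT SESSION test_session ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    ( ACTION ( sqlserver.sql_text )
      WHERE ( sqlserver.database_id > 4 ) );   -- skip system databases
GO
ALTER EVENT SESSION test_session ON SERVER
ADD TARGET package0.event_file
    ( SET filename = N'C:\Temp\test_session.xel' );
GO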

See Also
CREATE EVENT SESSION (Transact-SQL )
DROP EVENT SESSION (Transact-SQL )
SQL Server Extended Events Targets
sys.server_event_sessions (Transact-SQL )
sys.dm_xe_objects (Transact-SQL )
sys.dm_xe_object_columns (Transact-SQL )
ALTER EXTERNAL DATA SOURCE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies an external data source used to create an external table. The external data source can be Hadoop or
Azure blob storage (WASB).

Syntax
-- Modify an external data source
-- Applies to: SQL Server (2016 or later)
ALTER EXTERNAL DATA SOURCE data_source_name SET
{
LOCATION = 'server_name_or_IP' [,] |
RESOURCE_MANAGER_LOCATION = <'IP address;Port'> [,] |
CREDENTIAL = credential_name
}
[;]

-- Modify an external data source pointing to Azure Blob storage
-- Applies to: SQL Server (starting with 2017)
ALTER EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://storage_account_name.blob.core.windows.net'
[, CREDENTIAL = credential_name ]
)

Arguments
data_source_name
Specifies the user-defined name for the data source. The name must be unique.
LOCATION = 'server_name_or_IP'
Specifies the name of the server or an IP address.
RESOURCE_MANAGER_LOCATION = '<IP address;Port>'
Specifies the Hadoop Resource Manager location. When specified, the query optimizer might choose to
pre-process data for a PolyBase query by using Hadoop's computation capabilities. This is a cost-based decision.
Called predicate pushdown, this can significantly reduce the volume of data transferred between Hadoop and
SQL, and therefore improve query performance.
CREDENTIAL = credential_name
Specifies the named credential. See CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL ).
TYPE = BLOB_STORAGE
Applies to: SQL Server 2017 (14.x). For bulk operations only, LOCATION must be a valid URL to Azure Blob
storage. Do not put /, file name, or shared access signature parameters at the end of the LOCATION URL. The
credential used must be created using SHARED ACCESS SIGNATURE as the identity. For more information on
shared access signatures, see Using Shared Access Signatures (SAS ).

Remarks
Only one source can be modified at a time. Concurrent requests to modify the same source cause one statement
to wait. However, different sources can be modified at the same time. This statement can run concurrently with
other statements.

Permissions
Requires ALTER ANY EXTERNAL DATA SOURCE permission.

IMPORTANT
The ALTER ANY EXTERNAL DATA SOURCE permission grants any principal the ability to create and modify any external data
source object, and therefore, it also grants the ability to access all database scoped credentials on the database. This
permission must be considered as highly privileged, and therefore must be granted only to trusted principals in the system.

Examples
The following example alters the location and resource manager location of an existing data source.

ALTER EXTERNAL DATA SOURCE hadoop_eds SET
LOCATION = 'hdfs://10.10.10.10:8020',
RESOURCE_MANAGER_LOCATION = '10.10.10.10:8032'
;

The following example alters the credential to connect to an existing data source.

ALTER EXTERNAL DATA SOURCE hadoop_eds SET
CREDENTIAL = new_hadoop_user
;
ALTER EXTERNAL LIBRARY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the content of an existing external package library.

Syntax
ALTER EXTERNAL LIBRARY library_name
[ AUTHORIZATION owner_name ]
SET <file_spec>
WITH ( LANGUAGE = 'R' )
[ ; ]

<file_spec> ::=
{
( CONTENT = { <client_library_specifier> | <library_bits> | NONE }
  [ , PLATFORM = WINDOWS ] )
}

<client_library_specifier> :: =
'[\\computer_name\]share_name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'
| '<relative_path_in_external_data_source>'

<library_bits> :: =
{ varbinary_literal | varbinary_expression }

Arguments
library_name
Specifies the name of an existing package library. Libraries are scoped to the user. Library names must be
unique within the context of a specific user or owner.
The library name cannot be arbitrarily assigned. That is, you must use the name that the calling runtime expects
when it loads the package.
owner_name
Specifies the name of the user or role that owns the external library.
file_spec
Specifies the content of the package for a specific platform. Only one file artifact per platform is supported.
The file can be specified in the form of a local path or network path. If the data source option is specified, the file
name can be a relative path with respect to the container referenced in the EXTERNAL DATA SOURCE.
Optionally, an OS platform for the file can be specified. Only one file artifact or content is permitted for each OS
platform for a specific language or runtime.
library_bits
Specifies the content of the package as a hex literal, similar to assemblies.
This option is useful if you have the required permission to alter a library, but file access on the server is restricted
and you cannot save the contents to a path the server can access.
Instead, you can pass the package contents as a variable in binary format.
PLATFORM = WINDOWS
Specifies the platform for the content of the library. This value is required when modifying an existing library to
add a different platform. Windows is the only supported platform.

Remarks
For the R language, packages must be prepared in the form of zipped archive files with the .ZIP extension for
Windows. Currently, only the Windows platform is supported.
The ALTER EXTERNAL LIBRARY statement only uploads the library bits to the database. The modified library is
installed when a user runs code in sp_execute_external_script (Transact-SQL ) that calls the library.

Permissions
By default, the dbo user or any member of the role db_owner has permission to run ALTER EXTERNAL
LIBRARY. Additionally, the user who created the external library can alter that external library.

Examples
The following examples change an external library called customPackage .
A. Replace the contents of a library using a file
The following example modifies an external library called customPackage , using a zipped file containing the
updated bits.

ALTER EXTERNAL LIBRARY customPackage
SET (CONTENT = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\customPackage.zip')
WITH (LANGUAGE = 'R');

To install the updated library, execute the stored procedure sp_execute_external_script .

EXEC sp_execute_external_script
@language =N'R',
@script=N'library(customPackage)'
;

B. Alter an existing library using a byte stream


The following example alters the existing library by passing the new bits as a hexadecimal literal.

ALTER EXTERNAL LIBRARY customLibrary
SET (CONTENT = 0xabc123) WITH (LANGUAGE = 'R');

NOTE
This code sample only demonstrates the syntax; the binary value in CONTENT = has been truncated for readability and does
not create a working library. The actual contents of the binary variable would be much longer.
See also
CREATE EXTERNAL LIBRARY (Transact-SQL ) DROP EXTERNAL LIBRARY (Transact-SQL )
sys.external_library_files
sys.external_libraries
ALTER EXTERNAL RESOURCE POOL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Applies to: SQL Server 2016 (13.x) R Services (In-Database) and SQL Server 2017 (14.x) Machine Learning
Services (In-Database)
Changes a Resource Governor external pool that specifies resources that can be used by external processes.
For R Services (In-Database) in SQL Server 2016 (13.x), the external pool governs rterm.exe ,
BxlServer.exe , and other processes spawned by them.

For Machine Learning Services (In-Database) in SQL Server 2017, the external pool governs the R
processes listed for the previous version, as well as python.exe , BxlServer.exe , and other processes
spawned by them.
Transact-SQL Syntax Conventions.

Syntax
ALTER EXTERNAL RESOURCE POOL { pool_name | "default" }
[ WITH (
[ MAX_CPU_PERCENT = value ]
[ [ , ] AFFINITY CPU =
{
AUTO
| ( <cpu_range_spec> )
| NUMANODE = ( <NUMA_node_range_spec> )
} ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MAX_PROCESSES = value ]
)
]
[ ; ]

<CPU_range_spec> ::=
{ CPU_ID | CPU_ID TO CPU_ID } [ ,...n ]

Arguments
{ pool_name | "default" }
Is the name of an existing user-defined external resource pool or the default external resource pool that is created
when SQL Server is installed. "default" must be enclosed by quotation marks ("") or brackets ([]) when used with
ALTER EXTERNAL RESOURCE POOL to avoid conflict with DEFAULT , which is a system reserved word.

MAX_CPU_PERCENT =value
Specifies the maximum average CPU bandwidth that all requests in the external resource pool can receive when
there is CPU contention. value is an integer with a default setting of 100. The allowed range for value is from 1
through 100.
AFFINITY {CPU = AUTO | ( <CPU_range_spec> ) | NUMANODE = (<NUMA_node_range_spec>)}
Attach the external resource pool to specific CPUs. The default value is AUTO.
AFFINITY CPU = ( <CPU_range_spec> ) maps the external resource pool to the SQL Server CPUs identified by
the given CPU_IDs. When you use AFFINITY NUMANODE = ( <NUMA_node_range_spec> ), the external
resource pool is affinitized to the SQL Server physical CPUs that correspond to the given NUMA node or range of
nodes.
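
As a sketch only (reusing the pool name from the example later in this topic, and assuming the server exposes NUMA node 0), affinitizing a pool to a NUMA node and applying the change could look like this:

ALTER EXTERNAL RESOURCE POOL ep_1
WITH ( AFFINITY NUMANODE = (0) );
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO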
MAX_MEMORY_PERCENT =value
Specifies the total server memory that can be used by requests in this external resource pool. value is an integer
with a default setting of 100. The allowed range for value is from 1 through 100.
MAX_PROCESSES =value
Specifies the maximum number of processes allowed for the external resource pool. Specify 0 to set an unlimited
threshold for the pool, which is thereafter bound only by computer resources. The default is 0.

Remarks
The Database Engine implements the resource pool when you execute the ALTER RESOURCE GOVERNOR
RECONFIGURE statement.
For general information about resource pools, see Resource Governor Resource Pool,
sys.resource_governor_external_resource_pools (Transact-SQL ), and
sys.dm_resource_governor_external_resource_pool_affinity (Transact-SQL ).
For information specific to the use of external resource pools to govern machine learning jobs, see Resource
governance for machine learning in SQL Server.

Permissions
Requires CONTROL SERVER permission.

Examples
The following statement changes an external pool, restricting the CPU usage to 50 percent and the maximum
memory to 25 percent of the available memory on the computer.

ALTER EXTERNAL RESOURCE POOL ep_1
WITH (
    MAX_CPU_PERCENT = 50
    , AFFINITY CPU = AUTO
    , MAX_MEMORY_PERCENT = 25
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

See also
Resource governance for machine learning in SQL Server
external scripts enabled Server Configuration Option
CREATE EXTERNAL RESOURCE POOL (Transact-SQL )
DROP EXTERNAL RESOURCE POOL (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL )
CREATE WORKLOAD GROUP (Transact-SQL )
Resource Governor Resource Pool
ALTER RESOURCE GOVERNOR (Transact-SQL )
ALTER FULLTEXT CATALOG (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a full-text catalog.
Transact-SQL Syntax Conventions

Syntax
ALTER FULLTEXT CATALOG catalog_name
{ REBUILD [ WITH ACCENT_SENSITIVITY = { ON | OFF } ]
| REORGANIZE
| AS DEFAULT
}

Arguments
catalog_name
Specifies the name of the catalog to be modified. If a catalog with the specified name does not exist, Microsoft SQL
Server returns an error and does not perform the ALTER operation.
REBUILD
Tells SQL Server to rebuild the entire catalog. When a catalog is rebuilt, the existing catalog is deleted and a new
catalog is created in its place. All the tables that have full-text indexing references are associated with the new
catalog. Rebuilding resets the full-text metadata in the database system tables.
WITH ACCENT_SENSITIVITY = {ON|OFF }
Specifies if the catalog to be altered is accent-sensitive or accent-insensitive for full-text indexing and querying.
To determine the current accent-sensitivity property setting of a full-text catalog, use the
FULLTEXTCATALOGPROPERTY function with the accentsensitivity property value against catalog_name. If the
function returns '1', the full-text catalog is accent sensitive; if the function returns '0', the catalog is not accent
sensitive.
The catalog and database default accent sensitivity are the same.
REORGANIZE
Tells SQL Server to perform a master merge, which involves merging the smaller indexes created in the process of
indexing into one large index. Merging the full-text index fragments can improve performance and free up disk and
memory resources. If there are frequent changes to the full-text catalog, use this command periodically to
reorganize the full-text catalog.
REORGANIZE also optimizes internal index and catalog structures.
Keep in mind that, depending on the amount of indexed data, a master merge may take some time to complete.
Master merging a large amount of data can create a long running transaction, delaying truncation of the
transaction log during checkpoint. In this case, the transaction log might grow significantly under the full recovery
model. As a best practice, ensure that your transaction log contains sufficient space for a long-running transaction
before reorganizing a large full-text index in a database that uses the full recovery model. For more information,
see Manage the Size of the Transaction Log File.
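
For example, a periodic master merge is a single statement (using the catalog name from the example later in this topic):

ALTER FULLTEXT CATALOG ftCatalog REORGANIZE;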
AS DEFAULT
Specifies that this catalog is the default catalog. When full-text indexes are created with no specified catalogs, the
default catalog is used. If there is an existing default full-text catalog, setting this catalog AS DEFAULT will override
the existing default.
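
A minimal sketch, assuming the catalog ftCatalog from the example later in this topic already exists:

ALTER FULLTEXT CATALOG ftCatalog AS DEFAULT;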

Permissions
User must have ALTER permission on the full-text catalog, or be a member of the db_owner, db_ddladmin fixed
database roles, or sysadmin fixed server role.

NOTE
To use ALTER FULLTEXT CATALOG AS DEFAULT, the user must have ALTER permission on the full-text catalog and CREATE
FULLTEXT CATALOG permission on the database.

Examples
The following example changes the accentsensitivity property of the default full-text catalog ftCatalog , which is
accent sensitive.

--Change to accent insensitive
USE AdventureWorks2012;
GO
ALTER FULLTEXT CATALOG ftCatalog
REBUILD WITH ACCENT_SENSITIVITY=OFF;
GO
-- Check Accentsensitivity
SELECT FULLTEXTCATALOGPROPERTY('ftCatalog', 'accentsensitivity');
GO
--Returned 0, which means the catalog is not accent sensitive.

See Also
sys.fulltext_catalogs (Transact-SQL )
CREATE FULLTEXT CATALOG (Transact-SQL )
DROP FULLTEXT CATALOG (Transact-SQL )
Full-Text Search
ALTER FULLTEXT INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a full-text index in SQL Server.
Transact-SQL Syntax Conventions

Syntax
ALTER FULLTEXT INDEX ON table_name
{ ENABLE
| DISABLE
| SET CHANGE_TRACKING [ = ] { MANUAL | AUTO | OFF }
| ADD ( column_name
[ TYPE COLUMN type_column_name ]
[ LANGUAGE language_term ]
[ STATISTICAL_SEMANTICS ]
[,...n]
)
[ WITH NO POPULATION ]
| ALTER COLUMN column_name
{ ADD | DROP } STATISTICAL_SEMANTICS
[ WITH NO POPULATION ]
| DROP ( column_name [,...n] )
[ WITH NO POPULATION ]
| START { FULL | INCREMENTAL | UPDATE } POPULATION
| {STOP | PAUSE | RESUME } POPULATION
| SET STOPLIST [ = ] { OFF | SYSTEM | stoplist_name }
[ WITH NO POPULATION ]
| SET SEARCH PROPERTY LIST [ = ] { OFF | property_list_name }
[ WITH NO POPULATION ]
}
[;]

Arguments
table_name
Is the name of the table or indexed view that contains the column or columns included in the full-text index.
Specifying database and table owner names is optional.
ENABLE | DISABLE
Tells SQL Server whether to gather full-text index data for table_name. ENABLE activates the full-text index;
DISABLE turns off the full-text index. The table will not support full-text queries while the index is disabled.
Disabling a full-text index allows you to turn off change tracking but keep the full-text index, which you can
reactivate at any time using ENABLE. When the full-text index is disabled, the full-text index metadata remains in
the system tables. If CHANGE_TRACKING is in the enabled state (automatic or manual update) when the full-text
index is disabled, the state of the index freezes, any ongoing crawl stops, and new changes to the table data are
not tracked or propagated to the index.
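
As a sketch, using the table from the examples later in this topic, an index can be turned off and later reactivated without losing its metadata:

ALTER FULLTEXT INDEX ON HumanResources.JobCandidate DISABLE;
GO
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate ENABLE;
GO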
SET CHANGE_TRACKING {MANUAL | AUTO | OFF }
Specifies whether changes (updates, deletes, or inserts) made to table columns that are covered by the full-text
index will be propagated by SQL Server to the full-text index. Data changes through WRITETEXT and
UPDATETEXT are not reflected in the full-text index, and are not picked up with change tracking.

NOTE
For information about the interaction of change tracking and WITH NO POPULATION, see "Remarks," later in this topic.

MANUAL
Specifies that the tracked changes will be propagated manually by calling the ALTER FULLTEXT INDEX ... START
UPDATE POPULATION Transact-SQL statement (manual population). You can use SQL Server Agent to call this
Transact-SQL statement periodically.
AUTO
Specifies that the tracked changes will be propagated automatically as data is modified in the base table
(automatic population). Although changes are propagated automatically, these changes might not be reflected
immediately in the full-text index. AUTO is the default.
OFF
Specifies that SQL Server will not keep a list of changes to the indexed data.
ADD | DROP column_name
Specifies the columns to be added or deleted from a full-text index. The column or columns must be of type char,
varchar, nchar, nvarchar, text, ntext, image, xml, varbinary, or varbinary(max).
Use the DROP clause only on columns that have been enabled previously for full-text indexing.
Use TYPE COLUMN and LANGUAGE with the ADD clause to set these properties on the column_name. When a
column is added, the full-text index on the table must be repopulated in order for full-text queries against this
column to work.

NOTE
Whether the full-text index is populated after a column is added or dropped from a full-text index depends on whether
change-tracking is enabled and whether WITH NO POPULATION is specified. For more information, see "Remarks," later in
this topic.
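
For illustration, the following sketch (the column choice is hypothetical) adds a column with an explicit language while deferring population, and then populates the index later:

ALTER FULLTEXT INDEX ON Production.Document
ADD (Title LANGUAGE 1033)
WITH NO POPULATION;
GO
-- Later, at a convenient time:
ALTER FULLTEXT INDEX ON Production.Document
START FULL POPULATION;
GO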

TYPE COLUMN type_column_name


Specifies the name of a table column, type_column_name, that is used to hold the document type for a varbinary,
varbinary(max), or image document. This column, known as the type column, contains a user-supplied file
extension (.doc, .pdf, .xls, and so forth). The type column must be of type char, nchar, varchar, or nvarchar.
Specify TYPE COLUMN type_column_name only if column_name specifies a varbinary, varbinary(max) or
image column, in which data is stored as binary data; otherwise, SQL Server returns an error.

NOTE
At indexing time, the Full-Text Engine uses the abbreviation in the type column of each table row to identify which full-text
search filter to use for the document in column_name. The filter loads the document as a binary stream, removes the
formatting information, and sends the text from the document to the word-breaker component. For more information, see
Configure and Manage Filters for Search.

LANGUAGE language_term
Is the language of the data stored in column_name.
language_term is optional and can be specified as a string, integer, or hexadecimal value corresponding to the
locale identifier (LCID ) of a language. If language_term is specified, the language it represents will be applied to
all elements of the search condition. If no value is specified, the default full-text language of the SQL Server
instance is used.
Use the sp_configure stored procedure to access information about the default full-text language of the SQL
Server instance.
When specified as a string, language_term corresponds to the alias column value in the syslanguages system
table. The string must be enclosed in single quotation marks, as in 'language_term'. When specified as an integer,
language_term is the actual LCID that identifies the language. When specified as a hexadecimal value,
language_term is 0x followed by the hex value of the LCID. The hex value must not exceed eight digits, including
leading zeros.
If the value is in double-byte character set (DBCS ) format, SQL Server will convert it to Unicode.
Resources, such as word breakers and stemmers, must be enabled for the language specified as language_term. If
such resources do not support the specified language, SQL Server returns an error.
For non-BLOB and non-XML columns containing text data in multiple languages, or for cases when the language
of the text stored in the column is unknown, use the neutral (0x0) language resource. For documents stored in
XML - or BLOB -type columns, the language encoding within the document will be used at indexing time. For
example, in XML columns, the xml:lang attribute in XML documents will identify the language. At query time, the
value previously specified in language_term becomes the default language used for full-text queries unless
language_term is specified as part of a full-text query.
STATISTICAL_SEMANTICS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Creates the additional key phrase and document similarity indexes that are part of statistical semantic indexing.
For more information, see Semantic Search (SQL Server).
[ ,...n]
Indicates that multiple columns may be specified for the ADD, ALTER, or DROP clauses. When multiple columns
are specified, separate these columns with commas.
WITH NO POPULATION
Specifies that the full-text index will not be populated after an ADD or DROP column operation or a SET
STOPLIST operation. The index will only be populated if the user executes a START...POPULATION command.
When NO POPULATION is specified, SQL Server does not populate an index. The index is populated only after
the user gives an ALTER FULLTEXT INDEX...START POPULATION command. When NO POPULATION is not
specified, SQL Server populates the index.
If CHANGE_TRACKING is enabled and WITH NO POPULATION is specified, SQL Server returns an error. If
CHANGE_TRACKING is enabled and WITH NO POPULATION is not specified, SQL Server performs a full
population on the index.

NOTE
For more information about the interaction of change tracking and WITH NO POPULATION, see "Remarks," later in this
topic.

{ADD | DROP } STATISTICAL_SEMANTICS


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Enables or disables statistical semantic indexing for the specified columns. For more information, see Semantic
Search (SQL Server).
START { FULL | INCREMENTAL | UPDATE } POPULATION
Tells SQL Server to begin population of the full-text index of table_name. If a full-text index population is already
in progress, SQL Server returns a warning and does not start a new population.
FULL
Specifies that every row of the table be retrieved for full-text indexing even if the rows have already been indexed.
INCREMENTAL
Specifies that only the modified rows since the last population be retrieved for full-text indexing. INCREMENTAL
can be applied only if the table has a column of the type timestamp. If a table in the full-text catalog does not
contain a column of the type timestamp, the table undergoes a FULL population.
UPDATE
Specifies the processing of all insertions, updates, or deletions since the last time the change-tracking index was
updated. Change-tracking population must be enabled on a table, but the background update index or the auto
change tracking should not be turned on.
{STOP | PAUSE | RESUME } POPULATION
Stops or pauses any population in progress, or stops or resumes any paused population.
STOP POPULATION does not stop auto change tracking or background update index. To stop change tracking,
use SET CHANGE_TRACKING OFF.
PAUSE POPULATION and RESUME POPULATION can only be used for full populations. They are not relevant
to other population types because the other populations resume crawls from where the crawl stopped.
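
As a sketch, using the table from the examples later in this topic, a long-running full population can be paused and resumed:

ALTER FULLTEXT INDEX ON HumanResources.JobCandidate PAUSE POPULATION;
GO
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate RESUME POPULATION;
GO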
SET STOPLIST { OFF | SYSTEM | stoplist_name }
Changes the full-text stoplist that is associated with the index, if any.
OFF
Specifies that no stoplist be associated with the full-text index.
SYSTEM
Specifies that the default full-text system STOPLIST should be used for this full-text index.
stoplist_name
Specifies the name of the stoplist to be associated with the full-text index.
For more information, see Configure and Manage Stopwords and Stoplists for Full-Text Search.
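
For example, the following sketch (table name reused from the examples later in this topic) switches the index to the system stoplist and defers the repopulation:

ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
SET STOPLIST SYSTEM WITH NO POPULATION;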
SET SEARCH PROPERTY LIST { OFF | property_list_name } [ WITH NO POPULATION ]
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Changes the search property list that is associated with the index, if any.
OFF
Specifies that no property list be associated with the full-text index. When you turn off the search property list of a
full-text index (ALTER FULLTEXT INDEX … SET SEARCH PROPERTY LIST OFF ), property searching on the base
table is no longer possible.
By default, when you turn off an existing search property list, the full-text index automatically repopulates. If you
specify WITH NO POPULATION when you turn off the search property list, automatic repopulation does not
occur. However, we recommend that you eventually run a full population on this full-text index at your
convenience. Repopulating the full-text index removes the property-specific metadata of each dropped search
property, making the full-text index smaller and more efficient.
property_list_name
Specifies the name of the search property list to be associated with the full-text index.
Adding a search property list to a full-text index requires repopulating the index to index the search properties
that are registered for the associated search property list. If you specify WITH NO POPULATION when adding
the search property list, you will need to run a population on the index, at an appropriate time.

IMPORTANT
If the full-text index was previously associated with a different search property list, it must be rebuilt in order to bring the
index into a consistent state. The index is truncated immediately and is empty until the full population runs. For more
information about when changing the search property list causes rebuilding, see "Remarks," later in this topic.

NOTE
You can associate a given search property list with more than one full-text index in the same database.

To find the search property lists on the current database, use sys.registered_search_property_lists.
For more information about search property lists, see Search Document Properties with Search Property
Lists.

Interactions of Change Tracking and NO POPULATION Parameter


Whether the full-text index is populated depends on whether change-tracking is enabled and whether WITH NO
POPULATION is specified in the ALTER FULLTEXT INDEX statement. The following table summarizes the result
of their interaction.

CHANGE TRACKING    WITH NO POPULATION    RESULT

Not Enabled        Not specified         A full population is performed on the index.

Not Enabled        Specified             No population of the index occurs until an
                                         ALTER FULLTEXT INDEX...START POPULATION statement is issued.

Enabled            Specified             An error is raised, and the index is not altered.

Enabled            Not specified         A full population is performed on the index.

For more information about populating full-text indexes, see Populate Full-Text Indexes.

Changing the Search Property List Causes Rebuilding the Index


The first time that a full-text index is associated with a search property list, the index must be repopulated to index
property-specific search terms. The existing index data is not truncated.
However, if you associate the full-text index with a different property list, the index is rebuilt. Rebuilding
immediately truncates the full-text index, removing all existing data, and the index must be repopulated. While the
population progresses, full-text queries on the base table search only on the table rows that have already been
indexed by the population. The repopulated index data will include metadata from the registered properties of the
newly added search property list.
Scenarios that cause rebuilding include:
Switching directly to a different search property list (see "Scenario A," later in this section).
Turning off the search property list and later associating the index with any search property list (see
"Scenario B," later in this section)

NOTE
For more information about how full-text search works with search property lists, see Search Document Properties with
Search Property Lists. For information about full populations, see Populate Full-Text Indexes.

Scenario A: Switching Directly to a Different Search Property List


1. A full-text index is created on table_1 with a search property list spl_1 :

CREATE FULLTEXT INDEX ON table_1 (column_name) KEY INDEX unique_key_index
WITH SEARCH PROPERTY LIST=spl_1,
CHANGE_TRACKING OFF, NO POPULATION;

2. A full population is run on the full-text index:

ALTER FULLTEXT INDEX ON table_1 START FULL POPULATION;

3. The full-text index is later associated with a different search property list, spl_2 , using the following statement:

ALTER FULLTEXT INDEX ON table_1 SET SEARCH PROPERTY LIST spl_2;

This statement causes a full population, the default behavior. However, before beginning this population,
the Full-Text Engine automatically truncates the index.
Scenario B: Turning Off the Search Property List and Later Associating the Index with Any Search Property List
1. A full-text index is created on table_1 with a search property list spl_1 , followed by an automatic full
population (the default behavior):

CREATE FULLTEXT INDEX ON table_1 (column_name) KEY INDEX unique_key_index
WITH SEARCH PROPERTY LIST=spl_1;

2. The search property list is turned off, as follows:

ALTER FULLTEXT INDEX ON table_1
SET SEARCH PROPERTY LIST OFF WITH NO POPULATION;

3. The full-text index is once more associated with either the same search property list or a different one.
For example, the following statement re-associates the full-text index with the original search property list,
spl_1 :

ALTER FULLTEXT INDEX ON table_1 SET SEARCH PROPERTY LIST spl_1;

This statement starts a full population, the default behavior.


NOTE
The rebuild would also be required for a different search property list, such as spl_2 .

Permissions
The user must have ALTER permission on the table or indexed view, or be a member of the sysadmin fixed
server role, or the db_ddladmin or db_owner fixed database roles.
If SET STOPLIST is specified, the user must have REFERENCES permission on the stoplist. If SET SEARCH
PROPERTY LIST is specified, the user must have REFERENCES permission on the search property list. The
owner of the specified stoplist or search property list can grant REFERENCES permission, if the owner has
ALTER FULLTEXT CATALOG permissions.

NOTE
The public is granted REFERENCES permission to the default stoplist that is shipped with SQL Server.

Examples
A. Setting manual change tracking
The following example sets manual change tracking on the full-text index on the JobCandidate table.

USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
SET CHANGE_TRACKING MANUAL;
GO

B. Associating a property list with a full-text index


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
The following example associates the DocumentPropertyList property list with the full-text index on the
Production.Document table. This ALTER FULLTEXT INDEX statement starts a full population, which is the default
behavior of the SET SEARCH PROPERTY LIST clause.

NOTE
For an example that creates the DocumentPropertyList property list, see CREATE SEARCH PROPERTY LIST (Transact-SQL).

USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON Production.Document
SET SEARCH PROPERTY LIST DocumentPropertyList;
GO

C. Removing a search property list


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
The following example removes the DocumentPropertyList property list from the full-text index on
Production.Document . In this example, there is no urgency to remove the properties from the index, so the WITH
NO POPULATION option is specified. However, property-level searching is no longer allowed against this full-text
index.

USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON Production.Document
SET SEARCH PROPERTY LIST OFF WITH NO POPULATION;
GO

D. Starting a full population


The following example starts a full population on the full-text index on the JobCandidate table.

USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
START FULL POPULATION;
GO

See Also
sys.fulltext_indexes (Transact-SQL )
CREATE FULLTEXT INDEX (Transact-SQL )
DROP FULLTEXT INDEX (Transact-SQL )
Full-Text Search
Populate Full-Text Indexes
ALTER FULLTEXT STOPLIST (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Inserts or deletes a stop word in the default full-text stoplist of the current database.
Transact-SQL Syntax Conventions

Syntax
ALTER FULLTEXT STOPLIST stoplist_name
{
ADD [N] 'stopword' LANGUAGE language_term
| DROP
{
'stopword' LANGUAGE language_term
| ALL LANGUAGE language_term
| ALL
}
}
;

Arguments
stoplist_name
Is the name of the stoplist being altered. stoplist_name can be a maximum of 128 characters.
' stopword '
Is a string that could be a word with linguistic meaning in the specified language or a token that does not have a
linguistic meaning. stopword is limited to the maximum token length (64 characters). A stopword can be specified
as a Unicode string.
LANGUAGE language_term
Specifies the language to associate with the stopword being added or dropped.
language_term can be specified as a string, integer, or hexadecimal value corresponding to the locale identifier
(LCID ) of the language, as follows:

FORMAT         DESCRIPTION

String         language_term corresponds to the alias column value in the sys.syslanguages (Transact-SQL)
               compatibility view. The string must be enclosed in single quotation marks, as in 'language_term'.

Integer        language_term is the LCID of the language.

Hexadecimal    language_term is 0x followed by the hexadecimal value of the LCID. The hexadecimal value must
               not exceed eight digits, including leading zeros. If the value is in double-byte character set
               (DBCS) format, SQL Server converts it to Unicode.
ADD 'stopword' LANGUAGE language_term
Adds a stop word to the stoplist for the language specified by LANGUAGE language_term.
If the specified combination of keyword and the LCID value of the language is not unique in the STOPLIST, an
error is returned. If the LCID value does not correspond to a registered language, an error is generated.
DROP { 'stopword' LANGUAGE language_term | ALL LANGUAGE language_term | ALL }
Drops a stop word from the stop list.
'stopword' LANGUAGE language_term
Drops the specified stop word for the language specified by language_term.
ALL LANGUAGE language_term
Drops all of the stop words for the language specified by language_term.
ALL
Drops all of the stop words in the stoplist.
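
As a sketch against the stoplist used in the example later in this topic, the three DROP forms look like this:

ALTER FULLTEXT STOPLIST CombinedFunctionWordList DROP 'en' LANGUAGE 'Spanish';
ALTER FULLTEXT STOPLIST CombinedFunctionWordList DROP ALL LANGUAGE 'French';
ALTER FULLTEXT STOPLIST CombinedFunctionWordList DROP ALL;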

Remarks
CREATE FULLTEXT STOPLIST is supported only for compatibility level 100 and higher. For compatibility levels 80
and 90, the system stoplist is always assigned to the database.

Permissions
To designate a stoplist as the default stoplist of the database requires ALTER DATABASE permission. To otherwise
alter a stoplist requires being the stoplist owner or membership in the db_owner or db_ddladmin fixed database
roles.

Examples
The following example alters a stoplist named CombinedFunctionWordList , adding the word 'en', first for Spanish
and then for French.

ALTER FULLTEXT STOPLIST CombinedFunctionWordList ADD 'en' LANGUAGE 'Spanish';
ALTER FULLTEXT STOPLIST CombinedFunctionWordList ADD 'en' LANGUAGE 'French';

See Also
CREATE FULLTEXT STOPLIST (Transact-SQL )
DROP FULLTEXT STOPLIST (Transact-SQL )
Configure and Manage Stopwords and Stoplists for Full-Text Search
sys.fulltext_stoplists (Transact-SQL )
sys.fulltext_stopwords (Transact-SQL )
ALTER FUNCTION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters an existing Transact-SQL or CLR function that was previously created by executing the CREATE FUNCTION
statement, without changing permissions and without affecting any dependent functions, stored procedures, or
triggers.
Transact-SQL Syntax Conventions

Syntax
-- Transact-SQL Scalar Function Syntax
ALTER FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
]
)
RETURNS return_data_type
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN scalar_expression
END
[ ; ]

-- Transact-SQL Inline Table-Valued Function Syntax


ALTER FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
]
)
RETURNS TABLE
[ WITH <function_option> [ ,...n ] ]
[ AS ]
RETURN [ ( ] select_stmt [ ) ]
[ ; ]
-- Transact-SQL Multistatement Table-valued Function Syntax
ALTER FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
]
)
RETURNS @return_variable TABLE <table_type_definition>
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN
END
[ ; ]
-- Transact-SQL Function Clauses
<function_option>::=
{
[ ENCRYPTION ]
| [ SCHEMABINDING ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
| [ EXECUTE_AS_Clause ]
}

<table_type_definition> ::=
( { <column_definition> <column_constraint>
| <computed_column_definition> }
[ <table_constraint> ] [ ,...n ]
)
<column_definition>::=
{
{ column_name data_type }
[ [ DEFAULT constant_expression ]
[ COLLATE collation_name ] | [ ROWGUIDCOL ]
]
| [ IDENTITY [ (seed , increment ) ] ]
[ <column_constraint> [ ...n ] ]
}

<column_constraint>::=
{
[ NULL | NOT NULL ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[ WITH FILLFACTOR = fillfactor
| WITH ( < index_option > [ , ...n ] )
[ ON { filegroup | "default" } ]
| [ CHECK ( logical_expression ) ] [ ,...n ]
}

<computed_column_definition>::=
column_name AS computed_column_expression

<table_constraint>::=
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
( column_name [ ASC | DESC ] [ ,...n ] )
[ WITH FILLFACTOR = fillfactor
| WITH ( <index_option> [ , ...n ] )
| [ CHECK ( logical_expression ) ] [ ,...n ]
}

<index_option>::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS ={ ON | OFF }
}
-- CLR Scalar and Table-Valued Function Syntax
ALTER FUNCTION [ schema_name. ] function_name
( { @parameter_name [AS] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
)
RETURNS { return_data_type | TABLE <clr_table_type_definition> }
[ WITH <clr_function_option> [ ,...n ] ]
[ AS ] EXTERNAL NAME <method_specifier>
[ ; ]

-- CLR Function Clauses


<method_specifier>::=
assembly_name.class_name.method_name

<clr_function_option>::=
{
[ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
| [ EXECUTE_AS_Clause ]
}

<clr_table_type_definition>::=
( { column_name data_type } [ ,...n ] )

-- Syntax for In-Memory OLTP: Natively compiled, scalar user-defined function


ALTER FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
[ NULL | NOT NULL ] [ = default ] }
[ ,...n ]
]
)
RETURNS return_data_type
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN ATOMIC WITH (set_option [ ,... n ])
function_body
RETURN scalar_expression
END

<function_option>::=
{ | NATIVE_COMPILATION
| SCHEMABINDING
| [ EXECUTE_AS_Clause ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
}

Arguments
schema_name
Is the name of the schema to which the user-defined function belongs.
function_name
Is the user-defined function to be changed.

NOTE
Parentheses are required after the function name even if a parameter is not specified.
@ parameter_name
Is a parameter in the user-defined function. One or more parameters can be declared.
A function can have a maximum of 2,100 parameters. The value of each declared parameter must be supplied by
the user when the function is executed, unless a default for the parameter is defined.
Specify a parameter name by using an at sign (@) as the first character. The parameter name must comply with the
rules for identifiers. Parameters are local to the function; the same parameter names can be used in other
functions. Parameters can take the place only of constants; they cannot be used instead of table names, column
names, or the names of other database objects.

NOTE
ANSI_WARNINGS is not honored when passing parameters in a stored procedure, user-defined function, or when declaring
and setting variables in a batch statement. For example, if a variable is defined as char(3), and then set to a value larger than
three characters, the data is truncated to the defined size and the INSERT or UPDATE statement succeeds.

[ type_schema_name. ] parameter_data_type
Is the parameter data type and optionally, the schema to which it belongs. For Transact-SQL functions, all data
types, including CLR user-defined types, are allowed except the timestamp data type. For CLR functions, all data
types, including CLR user-defined types, are allowed except text, ntext, image, and timestamp data types. The
nonscalar types cursor and table cannot be specified as a parameter data type in either Transact-SQL or CLR
functions.
If type_schema_name is not specified, the SQL Server 2005 Database Engine looks for the parameter_data_type
in the following order:
The schema that contains the names of SQL Server system data types.
The default schema of the current user in the current database.
The dbo schema in the current database.
[ =default ]
Is a default value for the parameter. If a default value is defined, the function can be executed without
specifying a value for that parameter.

NOTE
Default parameter values can be specified for CLR functions except for varchar(max) and varbinary(max) data types.

When a parameter of the function has a default value, the keyword DEFAULT must be specified when calling the
function to retrieve the default value. This behavior is different from using parameters with default values in stored
procedures in which omitting the parameter also implies the default value.
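
A minimal sketch of this behavior, assuming a hypothetical scalar function dbo.ufn_ApplyRate that was previously created with CREATE FUNCTION:

ALTER FUNCTION dbo.ufn_ApplyRate (@amount money, @rate decimal(4,2) = 0.10)
RETURNS money
AS
BEGIN
    RETURN @amount * (1 + @rate);
END;
GO
-- DEFAULT must be passed explicitly to pick up @rate's default value.
SELECT dbo.ufn_ApplyRate(100.00, DEFAULT);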
return_data_type
Is the return value of a scalar user-defined function. For Transact-SQL functions, all data types, including CLR user-
defined types, are allowed except the timestamp data type. For CLR functions, all data types, including CLR user-
defined types, are allowed except text, ntext, image, and timestamp data types. The nonscalar types cursor and
table cannot be specified as a return data type in either Transact-SQL or CLR functions.
function_body
Specifies that a series of Transact-SQL statements, which together do not produce a side effect such as modifying a
table, define the value of the function. function_body is used only in scalar functions and multistatement table-
valued functions.
In scalar functions, function_body is a series of Transact-SQL statements that together evaluate to a scalar value.
In multistatement table-valued functions, function_body is a series of Transact-SQL statements that populate a
TABLE return variable.
scalar_expression
Specifies that the scalar function returns a scalar value.
TABLE
Specifies that the return value of the table-valued function is a table. Only constants and @local_variables can be
passed to table-valued functions.
In inline table-valued functions, the TABLE return value is defined through a single SELECT statement. Inline
functions do not have associated return variables.
In multistatement table-valued functions, @return_variable is a TABLE variable used to store and accumulate the
rows that should be returned as the value of the function. @return_variable can be specified only for Transact-SQL
functions and not for CLR functions.
select-stmt
Is the single SELECT statement that defines the return value of an inline table-valued function.
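
For illustration, altering a hypothetical inline table-valued function keeps this shape; the single SELECT statement after RETURN defines the result set:

ALTER FUNCTION dbo.ufn_OrdersByCustomer (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);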
EXTERNAL NAME <method_specifier>assembly_name.class_name.method_name
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the method of an assembly to bind with the function. assembly_name must match an existing assembly
in SQL Server in the current database with visibility on. class_name must be a valid SQL Server identifier and
must exist as a class in the assembly. If the class has a namespace-qualified name that uses a period (.) to separate
namespace parts, the class name must be delimited by using brackets ([]) or quotation marks (""). method_name
must be a valid SQL Server identifier and must exist as a static method in the specified class.

NOTE
By default, SQL Server cannot execute CLR code. You can create, modify, and drop database objects that reference common
language runtime modules; however, you cannot execute these references in SQL Server until you enable the clr enabled
option. To enable the option, use sp_configure.

NOTE
This option is not available in a contained database.

<table_type_definition>( { <column_definition> <column_constraint> | <computed_column_definition> } [


<table_constraint> ] [ ,...n ])
Defines the table data type for a Transact-SQL function. The table declaration includes column definitions and
column or table constraints.
< clr_table_type_definition > ( { column_name data_type } [ ,...n ] ) Applies to: SQL Server 2008 through SQL
Server 2017, SQL Database (Preview in some regions).
Defines the table data types for a CLR function. The table declaration includes only column names and data types.
NULL|NOT NULL
Supported only for natively compiled, scalar user-defined functions. For more information, see Scalar User-
Defined Functions for In-Memory OLTP.
NATIVE_COMPILATION
Indicates whether a user-defined function is natively compiled. This argument is required for natively compiled,
scalar user-defined functions.
The NATIVE_COMPILATION argument is required when you ALTER the function, and can only be used if the
function was created with the NATIVE_COMPILATION argument.
BEGIN ATOMIC WITH
Supported only for natively compiled, scalar user-defined functions, and is required. For more information, see
Atomic Blocks.
SCHEMABINDING
The SCHEMABINDING argument is required for natively compiled, scalar user-defined functions.
<function_option>::= and <clr_function_option>::=
Specifies the function will have one or more of the following options.
ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates that the Database Engine encrypts the catalog view columns that contain the text of the ALTER
FUNCTION statement. Using ENCRYPTION prevents the function from being published as part of SQL Server
replication. ENCRYPTION cannot be specified for CLR functions.
SCHEMABINDING
Specifies that the function is bound to the database objects that it references. This condition will prevent changes
to the function if other schema bound objects are referencing it.
The binding of the function to the objects it references is removed only when one of the following actions occurs:
The function is dropped.
The function is modified by using the ALTER statement with the SCHEMABINDING option not specified.
For a list of conditions that must be met before a function can be schema bound, see CREATE FUNCTION
(Transact-SQL ).
RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT
Specifies the OnNULLCall attribute of a scalar-valued function. If not specified, CALLED ON NULL INPUT is
implied by default. This means that the function body executes even if NULL is passed as an argument.
If RETURNS NULL ON NULL INPUT is specified in a CLR function, it indicates that SQL Server can return NULL
when any of the arguments it receives is NULL, without actually invoking the body of the function. If the method
specified in <method_specifier> already has a custom attribute that indicates RETURNS NULL ON NULL INPUT,
but the ALTER FUNCTION statement indicates CALLED ON NULL INPUT, the ALTER FUNCTION statement
takes precedence. The OnNULLCall attribute cannot be specified for CLR table-valued functions.
EXECUTE AS Clause
Specifies the security context under which the user-defined function is executed. Therefore, you can control which
user account SQL Server uses to validate permissions on any database objects referenced by the function.

NOTE
EXECUTE AS cannot be specified for inline user-defined functions.

For more information, see EXECUTE AS Clause (Transact-SQL ).


< column_definition >::=
Defines the table data type. The table declaration includes column definitions and constraints. For CLR functions,
only column_name and data_type can be specified.
column_name
Is the name of a column in the table. Column names must comply with the rules for identifiers and must be unique
in the table. column_name can consist of 1 through 128 characters.
data_type
Specifies the column data type. For Transact-SQL functions, all data types, including CLR user-defined types, are
allowed except timestamp. For CLR functions, all data types, including CLR user-defined types, are allowed except
text, ntext, image, char, varchar, varchar(max), and timestamp. The nonscalar type cursor cannot be specified
as a column data type in either Transact-SQL or CLR functions.
DEFAULT constant_expression
Specifies the value provided for the column when a value is not explicitly supplied during an insert.
constant_expression is a constant, NULL, or a system function value. DEFAULT definitions can be applied to any
column except those that have the IDENTITY property. DEFAULT cannot be specified for CLR table-valued
functions.
COLLATE collation_name
Specifies the collation for the column. If not specified, the column is assigned the default collation of the database.
Collation name can be either a Windows collation name or a SQL collation name. For a list of collation names and
more information, see Windows Collation Name (Transact-SQL) and SQL Server Collation Name (Transact-SQL).
The COLLATE clause can be used to change the collations only of columns of the char, varchar, nchar, and
nvarchar data types.
COLLATE cannot be specified for CLR table-valued functions.
ROWGUIDCOL
Indicates that the new column is a row global unique identifier column. Only one uniqueidentifier column per
table can be designated as the ROWGUIDCOL column. The ROWGUIDCOL property can be assigned only to a
uniqueidentifier column.
The ROWGUIDCOL property does not enforce uniqueness of the values stored in the column. It also does not
automatically generate values for new rows inserted into the table. To generate unique values for each column, use
the NEWID function on INSERT statements. A default value can be specified; however, NEWID cannot be specified
as the default.
IDENTITY
Indicates that the new column is an identity column. When a new row is added to the table, SQL Server provides a
unique, incremental value for the column. Identity columns are typically used together with PRIMARY KEY
constraints to serve as the unique row identifier for the table. The IDENTITY property can be assigned to tinyint,
smallint, int, bigint, decimal(p,0), or numeric(p,0) columns. Only one identity column can be created per table.
Bound defaults and DEFAULT constraints cannot be used with an identity column. You must specify both the seed
and increment or neither. If neither is specified, the default is (1,1).
IDENTITY cannot be specified for CLR table-valued functions.
seed
Is the integer value to be assigned to the first row in the table.
increment
Is the integer value to add to the seed value for successive rows in the table.
< column_constraint >::= and < table_constraint>::=
Defines the constraint for a specified column or table. For CLR functions, the only constraint type allowed is NULL.
Named constraints are not allowed.
NULL | NOT NULL
Determines whether null values are allowed in the column. NULL is not strictly a constraint but can be specified
just like NOT NULL. NOT NULL cannot be specified for CLR table-valued functions.
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column through a unique index. In table-valued user-
defined functions, the PRIMARY KEY constraint can be created on only one column per table. PRIMARY KEY
cannot be specified for CLR table-valued functions.
UNIQUE
Is a constraint that provides entity integrity for a specified column or columns through a unique index. A table can
have multiple UNIQUE constraints. UNIQUE cannot be specified for CLR table-valued functions.
CLUSTERED | NONCLUSTERED
Indicate that a clustered or a nonclustered index is created for the PRIMARY KEY or UNIQUE constraint.
PRIMARY KEY constraints use CLUSTERED, and UNIQUE constraints use NONCLUSTERED.
CLUSTERED can be specified for only one constraint. If CLUSTERED is specified for a UNIQUE constraint and a
PRIMARY KEY constraint is also specified, the PRIMARY KEY uses NONCLUSTERED.
CLUSTERED and NONCLUSTERED cannot be specified for CLR table-valued functions.
CHECK
Is a constraint that enforces domain integrity by limiting the possible values that can be entered into a column or
columns. CHECK constraints cannot be specified for CLR table-valued functions.
logical_expression
Is a logical expression that returns TRUE or FALSE.
<computed_column_definition>::=
Specifies a computed column. For more information about computed columns, see CREATE TABLE (Transact-
SQL ).
column_name
Is the name of the computed column.
computed_column_expression
Is an expression that defines the value of a computed column.
<index_option>::=
Specifies the index options for the PRIMARY KEY or UNIQUE index. For more information about index options,
see CREATE INDEX (Transact-SQL ).
PAD_INDEX = { ON | OFF }
Specifies index padding. The default is OFF.
FILLFACTOR = fillfactor
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or change. fillfactor must be an integer value from 1 to 100. The default is 0.
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique index.
The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The default is
OFF.
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether distribution statistics are recomputed. The default is OFF.
ALLOW_ROW_LOCKS = { ON | OFF }
Specifies whether row locks are allowed. The default is ON.
ALLOW_PAGE_LOCKS = { ON | OFF }
Specifies whether page locks are allowed. The default is ON.

Remarks
ALTER FUNCTION cannot be used to change a scalar-valued function to a table-valued function, or vice versa.
Also, ALTER FUNCTION cannot be used to change an inline function to a multistatement function, or vice versa.
ALTER FUNCTION cannot be used to change a Transact-SQL function to a CLR function or vice-versa.
The following Service Broker statements cannot be included in the definition of a Transact-SQL user-defined
function:
BEGIN DIALOG CONVERSATION
END CONVERSATION
GET CONVERSATION GROUP
MOVE CONVERSATION
RECEIVE
SEND

Permissions
Requires ALTER permission on the function or on the schema. If the function specifies a user-defined type,
requires EXECUTE permission on the type.

See Also
CREATE FUNCTION (Transact-SQL )
DROP FUNCTION (Transact-SQL )
Make Schema Changes on Publication Databases
EVENTDATA (Transact-SQL )
ALTER INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies an existing table or view index (relational or XML ) by disabling, rebuilding, or reorganizing the index; or
by setting options on the index.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

ALTER INDEX { index_name | ALL } ON <object>


{
REBUILD {
[ PARTITION = ALL ] [ WITH ( <rebuild_index_option> [ ,...n ] ) ]
| [ PARTITION = partition_number [ WITH ( <single_partition_rebuild_index_option> ) [ ,...n ] ]
}
| DISABLE
| REORGANIZE [ PARTITION = partition_number ] [ WITH ( <reorganize_option> ) ]
| SET ( <set_index_option> [ ,...n ] )
| RESUME [ WITH ( <resumable_index_option> [ ,...n ] ) ]
| PAUSE
| ABORT
}
[ ; ]

<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}

<rebuild_index_option > ::=


{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| STATISTICS_INCREMENTAL = { ON | OFF }
| ONLINE = {
ON [ ( <low_priority_lock_wait> ) ]
| OFF }
| RESUMABLE = { ON | OFF }
| MAX_DURATION = <time> [ MINUTES ]
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| COMPRESSION_DELAY = {0 | delay [Minutes]}
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( {<partition_number> [ TO <partition_number>] } [ , ...n ] ) ]
}

<single_partition_rebuild_index_option> ::=
{
SORT_IN_TEMPDB = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| RESUMABLE = { ON | OFF }
| MAX_DURATION = <time> [ MINUTES ]
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
| ONLINE = { ON [ ( <low_priority_lock_wait> ) ] | OFF }
}

<reorganize_option>::=
{
LOB_COMPACTION = { ON | OFF }
| COMPRESS_ALL_ROW_GROUPS = { ON | OFF}
}

<set_index_option>::=
{
ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| COMPRESSION_DELAY= {0 | delay [Minutes]}
}

<resumable_index_option> ::=
{
MAXDOP = max_degree_of_parallelism
| MAX_DURATION =<time> [MINUTES]
| <low_priority_lock_wait>
}

<low_priority_lock_wait>::=
{
WAIT_AT_LOW_PRIORITY ( MAX_DURATION = <time> [ MINUTES ] ,
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS } )
}

-- Syntax for SQL Data Warehouse and Parallel Data Warehouse

ALTER INDEX { index_name | ALL }


ON [ schema_name. ] table_name
{
REBUILD {
[ PARTITION = ALL [ WITH ( <rebuild_index_option> ) ] ]
| [ PARTITION = partition_number [ WITH ( <single_partition_rebuild_index_option> )] ]
}
| DISABLE
| REORGANIZE [ PARTITION = partition_number ]
}
[;]

<rebuild_index_option > ::=


{
DATA_COMPRESSION = { COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( {<partition_number> [ TO <partition_number>] } [ , ...n ] ) ]
}

<single_partition_rebuild_index_option > ::=


{
DATA_COMPRESSION = { COLUMNSTORE | COLUMNSTORE_ARCHIVE }
}

Arguments
index_name
Is the name of the index. Index names must be unique within a table or view but do not have to be unique within
a database. Index names must follow the rules of identifiers.
ALL
Specifies all indexes associated with the table or view regardless of the index type. Specifying ALL causes the
statement to fail if one or more indexes are in an offline or read-only filegroup or the specified operation is not
allowed on one or more index types. The following table lists the index operations and disallowed index types.

USING THE KEYWORD ALL WITH THIS OPERATION    FAILS IF THE TABLE HAS ONE OR MORE

REBUILD WITH ONLINE = ON                     XML index, spatial index, or columnstore index
                                             (Applies to: SQL Server (starting with SQL Server 2012 (11.x)) and SQL Database)

REBUILD PARTITION = partition_number         Nonpartitioned index, XML index, spatial index, or disabled index

REORGANIZE                                   Indexes with ALLOW_PAGE_LOCKS set to OFF

REORGANIZE PARTITION = partition_number      Nonpartitioned index, XML index, spatial index, or disabled index

IGNORE_DUP_KEY = ON                          XML index, spatial index, or columnstore index
                                             (Applies to: SQL Server (starting with SQL Server 2012 (11.x)) and SQL Database)

ONLINE = ON                                  XML index, spatial index, or columnstore index
                                             (Applies to: SQL Server (starting with SQL Server 2012 (11.x)) and SQL Database)

RESUMABLE = ON                               Resumable indexes are not supported with the ALL keyword
                                             (Applies to: SQL Server (starting with SQL Server 2017 (14.x)) and SQL Database)

WARNING
For more detailed information about index operations that can be performed online, see Guidelines for Online Index
Operations.

If ALL is specified with PARTITION = partition_number, all indexes must be aligned. This means that they are
partitioned based on equivalent partition functions. Using ALL with PARTITION causes all index partitions with
the same partition_number to be rebuilt or reorganized. For more information about partitioned indexes, see
Partitioned Tables and Indexes.
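For example, assuming a hypothetical partitioned table dbo.PartitionedTable whose indexes are all aligned, the following sketch rebuilds partition 3 of every index on the table in a single statement:

-- Rebuild partition 3 of all aligned indexes on a hypothetical table.
ALTER INDEX ALL ON dbo.PartitionedTable
REBUILD PARTITION = 3;
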
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table or view belongs.
table_or_view_name
Is the name of the table or view associated with the index. To display a report of the indexes on an object, use the
sys.indexes catalog view.
SQL Database supports the three-part name format database_name.[schema_name].table_or_view_name when
the database_name is the current database or the database_name is tempdb and the table_or_view_name starts
with #.
REBUILD [ WITH (<rebuild_index_option> [ ,... n]) ]
Specifies the index will be rebuilt using the same columns, index type, uniqueness attribute, and sort order. This
clause is equivalent to DBCC DBREINDEX. REBUILD enables a disabled index. Rebuilding a clustered index does
not rebuild associated nonclustered indexes unless the keyword ALL is specified. If index options are not
specified, the existing index option values stored in sys.indexes are applied. For any index option whose value is
not stored in sys.indexes, the default indicated in the argument definition of the option applies.
If ALL is specified and the underlying table is a heap, the rebuild operation has no effect on the table. Any
nonclustered indexes associated with the table are rebuilt.
The rebuild operation can be minimally logged if the database recovery model is set to either bulk-logged or
simple.

NOTE
When you rebuild a primary XML index, the underlying user table is unavailable for the duration of the index operation.

Applies to: SQL Server (Starting with SQL Server 2012 (11.x)) and SQL Database.
For columnstore indexes, the rebuild operation:
1. Does not use the sort order.
2. Acquires an exclusive lock on the table or partition while the rebuild occurs. The data is “offline” and
unavailable during the rebuild, even when using NOLOCK, RCSI, or SI.
3. Re-compresses all data into the columnstore. Two copies of the columnstore index exist while the rebuild
is taking place. When the rebuild is finished, SQL Server deletes the original columnstore index.
For more information about rebuilding columnstore indexes, see Columnstore indexes - defragmentation
PARTITION
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies that only one partition of an index will be rebuilt or reorganized. PARTITION cannot be specified if
index_name is not a partitioned index.
PARTITION = ALL rebuilds all partitions.

WARNING
Creating and rebuilding nonaligned indexes on a table with more than 1,000 partitions is possible, but is not supported.
Doing so may cause degraded performance or excessive memory consumption during these operations. We recommend
using only aligned indexes when the number of partitions exceeds 1,000.

partition_number
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Is the partition number of a partitioned index that is to be rebuilt or reorganized. partition_number is a constant
expression that can reference variables. These include user-defined type variables or functions and user-defined
functions, but cannot reference a Transact-SQL statement. partition_number must exist or the statement fails.
WITH (<single_partition_rebuild_index_option>)
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
SORT_IN_TEMPDB, MAXDOP, and DATA_COMPRESSION are the options that can be specified when you
rebuild a single partition (PARTITION = n). XML indexes cannot be specified in a single partition rebuild
operation.
DISABLE
Marks the index as disabled and unavailable for use by the Database Engine. Any index can be disabled. The
index definition of a disabled index remains in the system catalog with no underlying index data. Disabling a
clustered index prevents user access to the underlying table data. To enable an index, use ALTER INDEX
REBUILD or CREATE INDEX WITH DROP_EXISTING. For more information, see Disable Indexes and
Constraints and Enable Indexes and Constraints.
REORGANIZE a rowstore index
For rowstore indexes, REORGANIZE specifies to reorganize the index leaf level. The REORGANIZE operation is:
Always performed online. This means long-term blocking table locks are not held and queries or updates to
the underlying table can continue during the ALTER INDEX REORGANIZE transaction.
Not allowed for a disabled index
Not allowed when ALLOW_PAGE_LOCKS is set to OFF
Not rolled back when it is performed within a transaction and the transaction is rolled back.
REORGANIZE WITH ( LOB_COMPACTION = { ON | OFF } )
Applies to rowstore indexes.
LOB_COMPACTION = ON
Specifies to compact all pages that contain data of these large object (LOB) data types: image, text, ntext,
varchar(max), nvarchar(max), varbinary(max), and xml. Compacting this data can reduce the data size on
disk.
For a clustered index, this compacts all LOB columns that are contained in the table.
For a nonclustered index, this compacts all LOB columns that are nonkey (included) columns in the index.
REORGANIZE ALL performs LOB_COMPACTION on all indexes. For each index, this compacts all LOB
columns in the clustered index, underlying table, or included columns in a nonclustered index.
LOB_COMPACTION = OFF
Pages that contain large object data are not compacted.
OFF has no effect on a heap.
REORGANIZE a columnstore index
For columnstore indexes, REORGANIZE compresses each CLOSED delta rowgroup into the columnstore
as a compressed rowgroup. The REORGANIZE operation is always performed online. This means long-
term blocking table locks are not held and queries or updates to the underlying table can continue during
the ALTER INDEX REORGANIZE transaction.
REORGANIZE is not required in order to move CLOSED delta rowgroups into compressed rowgroups.
The background tuple-mover (TM) process wakes up periodically to compress CLOSED delta rowgroups.
We recommend using REORGANIZE when tuple-mover is falling behind. REORGANIZE can compress
rowgroups more aggressively.
To compress all OPEN and CLOSED rowgroups, see the REORGANIZE WITH
(COMPRESS_ALL_ROW_GROUPS ) option in this section.

For columnstore indexes in SQL Server (starting with SQL Server 2016 (13.x)) and SQL Database, REORGANIZE performs the
following additional defragmentation optimizations online:
Physically removes rows from a rowgroup when 10% or more of the rows have been logically deleted. The
deleted bytes are reclaimed on the physical media. For example, if a compressed row group of 1 million
rows has 100K rows deleted, SQL Server will remove the deleted rows and recompress the rowgroup
with 900K rows. This saves storage by removing deleted rows.
Combines one or more compressed rowgroups to increase rows per rowgroup up to the maximum of
1,048,576 rows. For example, if you bulk import 5 batches of 102,400 rows you will get 5 compressed
rowgroups. If you run REORGANIZE, these rowgroups will get merged into 1 compressed rowgroup of
size 512,000 rows. This assumes there were no dictionary size or memory limitations.
For rowgroups in which 10% or more of the rows have been logically deleted, SQL Server will try to
combine this rowgroup with one or more rowgroups. For example, rowgroup 1 is compressed with
500,000 rows and rowgroup 21 is compressed with the maximum of 1,048,576 rows. Rowgroup 21 has
60% of the rows deleted which leaves 409,830 rows. SQL Server favors combining these two rowgroups
to compress a new rowgroup that has 909,830 rows.
REORGANIZE WITH ( COMPRESS_ALL_ROW_GROUPS = { ON | OFF } )
Applies to: SQL Server (Starting with SQL Server 2016 (13.x)) and SQL Database
COMPRESS_ALL_ROW_GROUPS provides a way to force OPEN or CLOSED delta rowgroups into the
columnstore. With this option, it is not necessary to rebuild the columnstore index to empty the delta rowgroups.
This, combined with the other remove and merge defragmentation features makes it no longer necessary to
rebuild the index in most situations.
ON forces all rowgroups into the columnstore, regardless of size and state (CLOSED or OPEN ).
OFF forces all CLOSED rowgroups into the columnstore.
SET ( <set_index option> [ ,... n] )
Specifies index options without rebuilding or reorganizing the index. SET cannot be specified for a disabled index.
PAD_INDEX = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies index padding. The default is OFF.
ON
The percentage of free space that is specified by FILLFACTOR is applied to the intermediate-level pages of the
index. If FILLFACTOR is not specified at the same time PAD_INDEX is set to ON, the fill factor value stored in
sys.indexes is used.
OFF or fillfactor is not specified
The intermediate-level pages are filled to near capacity. This leaves sufficient space for at least one row of the
maximum size that the index can have, based on the set of keys on the intermediate pages.
For more information, see CREATE INDEX (Transact-SQL ).
FILLFACTOR = fillfactor
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0. Fill factor
values 0 and 100 are the same in all respects.
An explicit FILLFACTOR setting applies only when the index is first created or rebuilt. The Database Engine does
not dynamically keep the specified percentage of empty space in the pages. For more information, see CREATE
INDEX (Transact-SQL ).
To view the fill factor setting, use sys.indexes.
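For example, the following query returns the stored fill factor for each index on the Production.Product table used in the examples later in this article:

SELECT name, fill_factor
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'Production.Product');
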

IMPORTANT
Creating or altering a clustered index with a FILLFACTOR value affects the amount of storage space the data occupies,
because the Database Engine redistributes the data when it creates the clustered index.

SORT_IN_TEMPDB = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies whether to store the sort results in tempdb. The default is OFF.
ON
The intermediate sort results that are used to build the index are stored in tempdb. If tempdb is on a different
set of disks than the user database, this may reduce the time needed to create an index. However, this increases
the amount of disk space that is used during the index build.
OFF
The intermediate sort results are stored in the same database as the index.
If a sort operation is not required, or if the sort can be performed in memory, the SORT_IN_TEMPDB option is
ignored.
For more information, see SORT_IN_TEMPDB Option For Indexes.
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique index.
The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The default
is OFF.
ON
A warning message will occur when duplicate key values are inserted into a unique index. Only the rows violating
the uniqueness constraint will fail.
OFF
An error message will occur when duplicate key values are inserted into a unique index. The entire INSERT
operation will be rolled back.
IGNORE_DUP_KEY cannot be set to ON for indexes created on a view, non-unique indexes, XML indexes, spatial
indexes, and filtered indexes.
To view IGNORE_DUP_KEY, use sys.indexes.
In backward compatible syntax, WITH IGNORE_DUP_KEY is equivalent to WITH IGNORE_DUP_KEY = ON.
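As a minimal sketch, assuming an existing unique rowstore index AK_MyTable_Code on a hypothetical table dbo.MyTable, the following statement switches the index to the warning behavior so that only the duplicate rows are rejected:

-- Index and table names are hypothetical.
ALTER INDEX AK_MyTable_Code ON dbo.MyTable
SET (IGNORE_DUP_KEY = ON);
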
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether distribution statistics are recomputed. The default is OFF.
ON
Out-of-date statistics are not automatically recomputed.
OFF
Automatic statistics updating is enabled.
To restore automatic statistics updating, set the STATISTICS_NORECOMPUTE to OFF, or execute UPDATE
STATISTICS without the NORECOMPUTE clause.
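For example, either of the following statements restores automatic statistics updating for a hypothetical index IX_MyTable_Col1 on dbo.MyTable:

ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable
SET (STATISTICS_NORECOMPUTE = OFF);

UPDATE STATISTICS dbo.MyTable IX_MyTable_Col1;
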

IMPORTANT
Disabling automatic recomputation of distribution statistics may prevent the query optimizer from picking optimal
execution plans for queries that involve the table.

STATISTICS_INCREMENTAL = { ON | OFF }
When ON, the statistics created are per partition statistics. When OFF, the statistics tree is dropped and SQL
Server re-computes the statistics. The default is OFF.
If per partition statistics are not supported the option is ignored and a warning is generated. Incremental stats
are not supported for following statistics types:
Statistics created with indexes that are not partition-aligned with the base table.
Statistics created on Always On readable secondary databases.
Statistics created on read-only databases.
Statistics created on filtered indexes.
Statistics created on views.
Statistics created on internal tables.
Statistics created with spatial indexes or XML indexes.
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
ONLINE = { ON | OFF } <as applies to rebuild_index_option>
Specifies whether underlying tables and associated indexes are available for queries and data modification during
the index operation. The default is OFF.
For an XML index or spatial index, only ONLINE = OFF is supported, and if ONLINE is set to ON an error is
raised.

NOTE
Online index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016 (13.x) and Editions and Supported
Features for SQL Server 2017.

ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS ) lock is held on the source table. This allows queries or updates to the
underlying table and indexes to continue. At the start of the operation, a Shared (S ) lock is very briefly held on
the source object. At the end of the operation, an S lock is very briefly held on the source if a nonclustered index
is being created, or an SCH-M (Schema Modification) lock is acquired when a clustered index is created or
dropped online, or when a clustered or nonclustered index is being rebuilt. ONLINE cannot be set to ON when
an index is being created on a local temporary table.
OFF
Table locks are applied for the duration of the index operation. An offline index operation that creates, rebuilds, or
drops a clustered, spatial, or XML index, or rebuilds or drops a nonclustered index, acquires a Schema
modification (Sch-M ) lock on the table. This prevents all user access to the underlying table for the duration of
the operation. An offline index operation that creates a nonclustered index acquires a Shared (S ) lock on the table.
This prevents updates to the underlying table but allows read operations, such as SELECT statements.
For more information, see How Online Index Operations Work.
Indexes, including indexes on global temp tables, can be rebuilt online with the following exceptions:
XML indexes
Indexes on local temp tables
A subset of a partitioned index (An entire partitioned index can be rebuilt online.)
SQL Database prior to V12, and SQL Server prior to SQL Server 2012 (11.x), do not permit the ONLINE
option for clustered index build or rebuild operations when the base table contains varchar(max) or
varbinary(max) columns.
RESUMABLE = { ON | OFF}
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Specifies whether an online index operation is resumable.
ON Index operation is resumable.
OFF Index operation is not resumable.
MAX_DURATION = time [MINUTES] used with RESUMABLE = ON (requires ONLINE = ON).
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Indicates time (an integer value specified in minutes) that a resumable online index operation is executed before
being paused.
ALLOW_ROW_LOCKS = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when you access the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.

NOTE
An index cannot be reorganized when ALLOW_PAGE_LOCKS is set to OFF.
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Overrides the max degree of parallelism configuration option for the duration of the index operation. For
more information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to
limit the number of processors used in a parallel plan execution. The maximum is 64 processors.

IMPORTANT
Although the MAXDOP option is syntactically supported for all XML indexes, for a spatial index or a primary XML index,
ALTER INDEX currently uses only a single processor.

max_degree_of_parallelism can be:
1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel index operation to the specified number.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.
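For example, this sketch (index and table names are hypothetical) limits an index rebuild to four processors:

ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable
REBUILD WITH (MAXDOP = 4);
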

NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016 (13.x).

COMPRESSION_DELAY = { 0 | duration [Minutes] }
Applies to: SQL Server (starting with SQL Server 2016 (13.x)).
For a disk-based table, delay specifies the minimum number of minutes a delta rowgroup in the CLOSED state
must remain in the delta rowgroup before SQL Server can compress it into the compressed rowgroup. Since
disk-based tables don't track insert and update times on individual rows, SQL Server applies the delay to delta
rowgroups in the CLOSED state.
The default is 0 minutes.
For recommendations on when to use COMPRESSION_DELAY, see Columnstore Indexes for Real-Time
Operational Analytics.
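For example, this sketch (index and table names are hypothetical) keeps CLOSED delta rowgroups in the deltastore for at least 10 minutes before they can be compressed:

ALTER INDEX cci_MyTable ON dbo.MyTable
SET (COMPRESSION_DELAY = 10);
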
DATA_COMPRESSION
Specifies the data compression option for the specified index, partition number, or range of partitions. The
options are as follows:
NONE
Index or specified partitions are not compressed. This does not apply to columnstore indexes.
ROW
Index or specified partitions are compressed by using row compression. This does not apply to columnstore
indexes.
PAGE
Index or specified partitions are compressed by using page compression. This does not apply to columnstore
indexes.
COLUMNSTORE
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
Applies only to columnstore indexes, including both nonclustered columnstore and clustered columnstore
indexes. COLUMNSTORE specifies to decompress the index or specified partitions that are compressed with the
COLUMNSTORE_ARCHIVE option. When the data is restored, it will continue to be compressed with the
columnstore compression that is used for all columnstore indexes.
COLUMNSTORE_ARCHIVE
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
Applies only to columnstore indexes, including both nonclustered columnstore and clustered columnstore
indexes. COLUMNSTORE_ARCHIVE will further compress the specified partition to a smaller size. This can be
used for archival, or for other situations that require a smaller storage size and can afford more time for storage
and retrieval.
For more information about compression, see Data Compression.
ON PARTITIONS ( { <partition_number_expression> | <range> } [,...n] )
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies the partitions to which the DATA_COMPRESSION setting applies. If the index is not partitioned, the
ON PARTITIONS argument will generate an error. If the ON PARTITIONS clause is not provided, the
DATA_COMPRESSION option applies to all partitions of a partitioned index.
<partition_number_expression> can be specified in the following ways:
Provide the number for a partition, for example: ON PARTITIONS (2).
Provide the partition numbers for several individual partitions separated by commas, for example: ON
PARTITIONS (1, 5).
Provide both ranges and individual partitions: ON PARTITIONS (2, 4, 6 TO 8).
<range> can be specified as partition numbers separated by the word TO, for example: ON PARTITIONS
(6 TO 8).
To set different types of data compression for different partitions, specify the DATA_COMPRESSION
option more than once, for example:

REBUILD WITH
(
DATA_COMPRESSION = NONE ON PARTITIONS (1),
DATA_COMPRESSION = ROW ON PARTITIONS (2, 4, 6 TO 8),
DATA_COMPRESSION = PAGE ON PARTITIONS (3, 5)
);

ONLINE = { ON | OFF } <as applies to single_partition_rebuild_index_option>


Specifies whether an index or an index partition of an underlying table can be rebuilt online or offline. If
REBUILD is performed online (ON ) the data in this table is available for queries and data modification during
the index operation. The default is OFF.
ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS) lock is held on the source table. An S lock on the table is required at the
start of the index rebuild and a Sch-M lock on the table at the end of the online index rebuild. Although both
locks are short metadata locks, the Sch-M lock in particular must wait for all blocking transactions to be
completed. During the wait time, the Sch-M lock blocks all other transactions that wait behind this lock when
accessing the same table.

NOTE
Online index rebuild can set the low_priority_lock_wait options described later in this section.

OFF
Table locks are applied for the duration of the index operation. This prevents all user access to the underlying
table for the duration of the operation.
WAIT_AT_LOW_PRIORITY used with ONLINE=ON only.
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
An online index rebuild has to wait for blocking operations on this table. WAIT_AT_LOW_PRIORITY indicates
that the online index rebuild operation will wait for low priority locks, allowing other operations to proceed while
the online index build operation is waiting. Omitting the WAIT AT LOW PRIORITY option is equivalent to
WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 minutes, ABORT_AFTER_WAIT = NONE) . For more information, see
WAIT_AT_LOW_PRIORITY.
MAX_DURATION = time [MINUTES ]
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
The wait time (an integer value specified in minutes) that the online index rebuild locks will wait with low priority
when executing the DDL command. If the operation is blocked for the MAX_DURATION time, one of the
ABORT_AFTER_WAIT actions will be executed. MAX_DURATION time is always in minutes, and the word
MINUTES can be omitted.
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS }
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
NONE
Continue waiting for the lock with normal (regular) priority.
SELF
Exit the online index rebuild DDL operation currently being executed without taking any action.
BLOCKERS
Kill all user transactions that block the online index rebuild DDL operation so that the operation can continue. The
BLOCKERS option requires the login to have ALTER ANY CONNECTION permission.
RESUME
Applies to: Starting with SQL Server 2017 (14.x)
Resume an index operation that is paused manually or due to a failure.
MAX_DURATION used with RESUMABLE=ON
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
The time (an integer value specified in minutes) the resumable online index operation is executed after being
resumed. Once the time expires, the resumable operation is paused if it is still running.
WAIT_AT_LOW_PRIORITY used with RESUMABLE=ON and ONLINE = ON.
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Resuming an online index rebuild after a pause has to wait for blocking operations on this table.
WAIT_AT_LOW_PRIORITY indicates that the online index rebuild operation will wait for low priority locks,
allowing other operations to proceed while the online index build operation is waiting. Omitting the WAIT AT
LOW PRIORITY option is equivalent to
WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 minutes, ABORT_AFTER_WAIT = NONE) . For more information, see
WAIT_AT_LOW_PRIORITY.
PAUSE
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Pause a resumable online index rebuild operation.
ABORT
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Abort a running or paused index operation that was declared as resumable. You have to explicitly execute an
ABORT command to terminate a resumable index rebuild operation. Failure or pausing a resumable index
operation does not terminate its execution; rather, it leaves the operation in an indefinite pause state.

Remarks
ALTER INDEX cannot be used to repartition an index or move it to a different filegroup. This statement cannot be
used to modify the index definition, such as adding or deleting columns or changing the column order. Use
CREATE INDEX with the DROP_EXISTING clause to perform these operations.

When an option is not explicitly specified, the current setting is applied. For example, if a FILLFACTOR setting is
not specified in the REBUILD clause, the fill factor value stored in the system catalog will be used during the
rebuild process. To view the current index option settings, use sys.indexes.
The values for ONLINE , MAXDOP , and SORT_IN_TEMPDB are not stored in the system catalog. Unless specified in the
index statement, the default value for the option is used.
On multiprocessor computers, just like other queries do, ALTER INDEX ... REBUILD automatically uses more
processors to perform the scan and sort operations that are associated with modifying the index. When you run
ALTER INDEX ... REORGANIZE, with or without LOB_COMPACTION, is a single-threaded operation regardless of the
max degree of parallelism value. For more information, see Configure Parallel Index Operations.

IMPORTANT
An index cannot be reorganized or rebuilt if the filegroup in which it is located is offline or set to read-only. When the
keyword ALL is specified and one or more indexes are in an offline or read-only filegroup, the statement fails.

Rebuilding Indexes
Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by
compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in
contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction.
Foreign key constraints do not have to be dropped in advance. When indexes with 128 extents or more are
rebuilt, the Database Engine defers the actual page deallocations, and their associated locks, until after the
transaction commits.
For more information, see Reorganize and Rebuild Indexes.

NOTE
Rebuilding or reorganizing small indexes often does not reduce fragmentation. The pages of small indexes are sometimes
stored on mixed extents. Mixed extents are shared by up to eight objects, so the fragmentation in a small index might not
be reduced after reorganizing or rebuilding it.

IMPORTANT
When an index is created or rebuilt in SQL Server, statistics are created or updated by scanning all the rows in the table.
However, starting with SQL Server 2012 (11.x), statistics are not created by scanning all the rows in the table when a
partitioned index is created or rebuilt. Instead, the query optimizer uses the default sampling algorithm to generate these
statistics. To obtain statistics on partitioned indexes by scanning all the rows in the table, use CREATE STATISTICS or
UPDATE STATISTICS with the FULLSCAN clause.

In earlier versions of SQL Server, you could sometimes rebuild a nonclustered index to correct inconsistencies
caused by hardware failures.
In SQL Server 2008 and later, you may still be able to repair such inconsistencies between the index and the
clustered index by rebuilding a nonclustered index offline. However, you cannot repair nonclustered index
inconsistencies by rebuilding the index online, because the online rebuild mechanism will use the existing
nonclustered index as the basis for the rebuild and thus persist the inconsistency. Rebuilding the index offline can
sometimes force a scan of the clustered index (or heap) and so remove the inconsistency. To assure a rebuild
from the clustered index, drop and recreate the non-clustered index. As with earlier versions, we recommend
recovering from inconsistencies by restoring the affected data from a backup; however, you may be able to repair
the index inconsistencies by rebuilding the nonclustered index offline. For more information, see DBCC
CHECKDB (Transact-SQL ).
To rebuild a clustered columnstore index, SQL Server:
1. Acquires an exclusive lock on the table or partition while the rebuild occurs. The data is “offline” and
unavailable during the rebuild.
2. Defragments the columnstore by physically deleting rows that have been logically deleted from the table;
the deleted bytes are reclaimed on the physical media.
3. Reads all data from the original columnstore index, including the deltastore. It combines the data into new
rowgroups, and compresses the rowgroups into the columnstore.
4. Requires space on the physical media to store two copies of the columnstore index while the rebuild is
taking place. When the rebuild is finished, SQL Server deletes the original clustered columnstore index.

Reorganizing Indexes
Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and nonclustered
indexes on tables and views by physically reordering the leaf-level pages to match the logical, left to right, order
of the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the existing fill factor
value. To view the fill factor setting, use sys.indexes.
When ALL is specified, relational indexes, both clustered and nonclustered, and XML indexes on the table are
reorganized. Some restrictions apply when specifying ALL , refer to the definition for ALL in the Arguments
section of this article.
For more information, see Reorganize and Rebuild Indexes.
IMPORTANT
When an index is reorganized in SQL Server, statistics are not updated.

Disabling Indexes
Disabling an index prevents user access to the index, and for clustered indexes, to the underlying table data. The
index definition remains in the system catalog. Disabling a nonclustered index or clustered index on a view
physically deletes the index data. Disabling a clustered index prevents access to the data, but the data remains
unmaintained in the B -tree until the index is dropped or rebuilt. To view the status of an enabled or disabled
index, query the is_disabled column in the sys.indexes catalog view.
If a table is in a transactional replication publication, you cannot disable any indexes that are associated with
primary key columns. These indexes are required by replication. To disable an index, you must first drop the table
from the publication. For more information, see Publish Data and Database Objects.
Use the ALTER INDEX REBUILD statement or the CREATE INDEX WITH DROP_EXISTING statement to enable
the index. Rebuilding a disabled clustered index cannot be performed with the ONLINE option set to ON. For
more information, see Disable Indexes and Constraints.
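For example, the following query shows the disabled state of each index on the HumanResources.Employee table used in the examples later in this article:

SELECT name, is_disabled
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'HumanResources.Employee');
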

Setting Options
You can set the options ALLOW_ROW_LOCKS , ALLOW_PAGE_LOCKS , IGNORE_DUP_KEY and STATISTICS_NORECOMPUTE for a
specified index without rebuilding or reorganizing that index. The modified values are immediately applied to the
index. To view these settings, use sys.indexes. For more information, see Set Index Options.
Row and Page Locks Options
When ALLOW_ROW_LOCKS = ON and ALLOW_PAGE_LOCKS = ON, row-level, page-level, and table-level locks are allowed
when you access the index. The Database Engine chooses the appropriate lock and can escalate the lock from a
row or page lock to a table lock.
When ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF, only a table-level lock is allowed when you access the
index.
If ALL is specified when the row or page lock options are set, the settings are applied to all indexes. When the
underlying table is a heap, the settings are applied in the following ways:

ALLOW_ROW_LOCKS = ON or OFF    To the heap and any associated nonclustered indexes.

ALLOW_PAGE_LOCKS = ON    To the heap and any associated nonclustered indexes.

ALLOW_PAGE_LOCKS = OFF    Fully to the nonclustered indexes. This means that all page locks are not allowed
on the nonclustered indexes. On the heap, only the shared (S), update (U) and exclusive (X) locks for the page
are not allowed. The Database Engine can still acquire an intent page lock (IS, IU or IX) for internal purposes.

Online Index Operations


When rebuilding an index and the ONLINE option is set to ON, the underlying objects, the tables and associated
indexes, are available for queries and data modification. You can also rebuild online a portion of an index residing
on a single partition. Exclusive table locks are held only for a very short amount of time during the alteration
process.
Reorganizing an index is always performed online. The process does not hold locks long term and, therefore,
does not block queries or updates that are running.
You can perform concurrent online index operations on the same table or table partition only when doing the
following:
Creating multiple nonclustered indexes.
Reorganizing different indexes on the same table.
Reorganizing different indexes while rebuilding nonoverlapping indexes on the same table.
All other online index operations performed at the same time fail. For example, you cannot rebuild two or more
indexes on the same table concurrently, or create a new index while rebuilding an existing index on the same
table.
Resumable index operations
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Online index rebuild is specified as resumable using the RESUMABLE = ON option.
The RESUMABLE option is not persisted in the metadata for a given index and applies only to the duration of a
current DDL statement. Therefore, the RESUMABLE = ON clause must be specified explicitly to enable
resumability.
MAX_DURATION option is supported for RESUMABLE = ON option or the low_priority_lock_wait
argument option.
MAX_DURATION for the RESUMABLE option specifies the time interval for which an index is rebuilt. Once
this time is used, the index rebuild is either paused or completes its execution. The user decides when a
rebuild for a paused index can be resumed. The time in minutes for MAX_DURATION must be greater
than 0 minutes and less than or equal to one week (7 * 24 * 60 = 10080 minutes). Having a long pause for
an index operation may impact the DML performance on a specific table as well as the database disk
capacity, since both indexes, the original one and the newly created one, require disk space and need to
be updated during DML operations. If the MAX_DURATION option is omitted, the index operation
continues until it completes or a failure occurs.
The <low_priority_lock_wait> argument option allows you to decide how the index operation can
proceed when blocked on the SCH-M lock.
Re-executing the original ALTER INDEX REBUILD statement with the same parameters resumes a paused
index rebuild operation. You can also resume a paused index rebuild operation by executing the ALTER
INDEX RESUME statement.
The SORT_IN_TEMPDB=ON option is not supported for resumable index rebuild.
The DDL command with RESUMABLE=ON cannot be executed inside an explicit transaction (cannot be part
of begin tran … commit block).
Only index operations that are paused are resumable.
When resuming an index operation that is paused, you can change the MAXDOP value to a new value. If
MAXDOP is not specified when resuming an index operation that is paused, the last MAXDOP value is used.
If the MAXDOP option is not specified at all for the index rebuild operation, the default value is used.
To pause the index operation immediately, you can stop the ongoing command (Ctrl-C), execute the
ALTER INDEX PAUSE command, or execute the KILL session_id command. Once the command is paused it can be
resumed using the RESUME option.
The ABORT command kills the session that hosted the original index rebuild and aborts the index operation.
No extra resources are required for resumable index rebuild except for:
Additional space required to keep the index being built, including the time when the index is paused
A DDL state preventing any DDL modification
Ghost cleanup runs during the index pause phase, but is paused while the index operation runs
The following functionality is disabled for resumable index rebuild operations:
Rebuilding an index that is disabled is not supported with RESUMABLE=ON
The ALTER INDEX REBUILD ALL command
ALTER TABLE using index rebuild
A DDL command with RESUMABLE = ON cannot be executed inside an explicit transaction (cannot
be part of a begin tran … commit block)
Rebuilding an index that has computed or TIMESTAMP column(s) as key columns
If the base table contains LOB column(s), a resumable clustered index rebuild requires a Sch-M lock at the
start of this operation

NOTE
The DDL command runs until it completes, pauses, or fails. If the command pauses, an error is issued indicating
that the operation was paused and that the index creation did not complete. More information about the current
index status can be obtained from sys.index_resumable_operations. As before, in case of a failure an error is
issued as well.
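For example, the following query reports the state and progress of any resumable index operations:

SELECT name, state_desc, percent_complete
FROM sys.index_resumable_operations;
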

For more information, see Perform Index Operations Online.


WAIT_AT_LOW_PRIORITY with online index operations
In order to execute the DDL statement for an online index rebuild, all active blocking transactions running on a
particular table must be completed. When the online index rebuild executes, it blocks all new transactions that are
ready to start execution on this table. Although the duration of the lock for online index rebuild is very short,
waiting for all open transactions on a given table to complete and blocking new transactions from starting might
significantly affect throughput, causing a workload slowdown or timeout, and significantly limit access to the
underlying table. The WAIT_AT_LOW_PRIORITY option allows DBAs to manage the S-lock and Sch-M locks
required for online index rebuilds and allows them to select one of three options. In all three cases, if during the
wait time (MAX_DURATION = n [minutes]), there are no blocking activities, the online index rebuild is executed
immediately without waiting and the DDL statement is completed.

Spatial Index Restrictions


When you rebuild a spatial index, the underlying user table is unavailable for the duration of the index operation
because the spatial index holds a schema lock.
The PRIMARY KEY constraint in the user table cannot be modified while a spatial index is defined on a column of
that table. To change the PRIMARY KEY constraint, first drop every spatial index of the table. After modifying the
PRIMARY KEY constraint, you can re-create each of the spatial indexes.
In a single partition rebuild operation, you cannot specify any spatial indexes. However, you can specify spatial
indexes in a complete partition rebuild.
To change options that are specific to a spatial index, such as BOUNDING_BOX or GRID, you can either use a
CREATE SPATIAL INDEX statement that specifies DROP_EXISTING = ON, or drop the spatial index and create a
new one. For an example, see CREATE SPATIAL INDEX (Transact-SQL ).

Data Compression
For more information about data compression, see Data Compression.
To evaluate how changing PAGE and ROW compression will affect a table, an index, or a partition, use the
sp_estimate_data_compression_savings stored procedure.
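For example, the following call estimates the space saved by applying PAGE compression to the Production.Product table used in the examples later in this article:

EXEC sp_estimate_data_compression_savings
    @schema_name = 'Production',
    @object_name = 'Product',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
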
The following restrictions apply to partitioned indexes:
When you use ALTER INDEX ALL ... , you cannot change the compression setting of a single partition if the
table has nonaligned indexes.
The ALTER INDEX <index> ... REBUILD PARTITION ... syntax rebuilds the specified partition of the index.
The ALTER INDEX <index> ... REBUILD WITH ... syntax rebuilds all partitions of the index.

Statistics
When you execute ALTER INDEX ALL … on a table, only the statistics associated with indexes are updated.
Automatic or manual statistics created on the table (instead of an index) are not updated.
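To refresh the statistics that are not associated with an index as well, run UPDATE STATISTICS against the table itself, for example:

UPDATE STATISTICS Production.Product;
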

Permissions
To execute ALTER INDEX, at a minimum, ALTER permission on the table or view is required.

Version Notes
SQL Database does not use filegroup and filestream options.
Columnstore indexes are not available prior to SQL Server 2012 (11.x).
Resumable index operations are available starting with SQL Server 2017 (14.x) and in SQL Database.

Basic syntax example:


ALTER INDEX index1 ON table1 REBUILD;

ALTER INDEX ALL ON table1 REBUILD;

ALTER INDEX ALL ON dbo.table1 REBUILD;

Examples: Columnstore Indexes


These examples apply to columnstore indexes.
A. REORGANIZE demo
This example demonstrates how the ALTER INDEX REORGANIZE command works. It creates a table that has
multiple rowgroups, and then demonstrates how REORGANIZE merges the rowgroups.
-- Create a database
CREATE DATABASE [columnstore];
GO

-- Create a rowstore staging table


CREATE TABLE [staging] (
AccountKey int NOT NULL,
AccountDescription nvarchar(50),
AccountType nvarchar(50),
AccountCodeAlternateKey int
)

-- Insert 300,000 rows into the staging table.


DECLARE @loop int
DECLARE @AccountDescription varchar(50)
DECLARE @AccountKey int
DECLARE @AccountType varchar(50)
DECLARE @AccountCode int

SELECT @loop = 0
BEGIN TRAN
WHILE (@loop < 300000)
BEGIN
SELECT @AccountKey = CAST (RAND()*10000000 as int);
SELECT @AccountDescription = 'accountdesc ' + CONVERT(varchar(20), @AccountKey);
SELECT @AccountType = 'AccountType ' + CONVERT(varchar(20), @AccountKey);
SELECT @AccountCode = CAST (RAND()*10000000 as int);

INSERT INTO staging VALUES (@AccountKey, @AccountDescription, @AccountType, @AccountCode);

SELECT @loop = @loop + 1;


END
COMMIT

-- Create a table for the clustered columnstore index

CREATE TABLE cci_target (


AccountKey int NOT NULL,
AccountDescription nvarchar(50),
AccountType nvarchar(50),
AccountCodeAlternateKey int
)

-- Convert the table to a clustered columnstore index named idxcci_cci_target.


CREATE CLUSTERED COLUMNSTORE INDEX idxcci_cci_target ON cci_target;

Use the TABLOCK option to insert rows in parallel. Starting with SQL Server 2016 (13.x), the INSERT INTO
operation can run in parallel when TABLOCK is used.

INSERT INTO cci_target WITH (TABLOCK)


SELECT TOP 300000 * FROM staging;

Run this command to see the OPEN delta rowgroups. The number of rowgroups depends on the degree of
parallelism.

SELECT *
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = object_id('cci_target');

Run this command to force all CLOSED and OPEN rowgroups into the columnstore.
ALTER INDEX idxcci_cci_target ON cci_target REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);

Run this command again and you will see that smaller rowgroups are merged into one compressed rowgroup.

ALTER INDEX idxcci_cci_target ON cci_target REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);

B. Compress CLOSED delta rowgroups into the columnstore


This example uses the REORGANIZE option to compress each CLOSED delta rowgroup into the columnstore
as a compressed rowgroup. This is not necessary, but is useful when the tuple-mover is not compressing
CLOSED rowgroups fast enough.

-- Uses AdventureWorksDW
-- REORGANIZE all partitions
ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REORGANIZE;

-- REORGANIZE a specific partition


ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REORGANIZE PARTITION = 0;

C. Compress all OPEN AND CLOSED delta rowgroups into the columnstore
Applies to: SQL Server (Starting with SQL Server 2016 (13.x)) and SQL Database
The command REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON) compresses each OPEN and
CLOSED delta rowgroup into the columnstore as a compressed rowgroup. This empties the deltastore and forces
all rows to get compressed into the columnstore. This is useful especially after performing many insert
operations since these operations store the rows in one or more delta rowgroups.
REORGANIZE combines rowgroups to fill rowgroups up to a maximum number of rows <= 1,048,576.
Therefore, when you compress all OPEN and CLOSED rowgroups you won't end up with lots of compressed
rowgroups that only have a few rows in them. You want rowgroups to be as full as possible to reduce the
compressed size and improve query performance.

-- Uses AdventureWorksDW2016
-- Move all OPEN and CLOSED delta rowgroups into the columnstore.
ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);

-- For a specific partition, move all OPEN AND CLOSED delta rowgroups into the columnstore
ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REORGANIZE PARTITION = 0 WITH
(COMPRESS_ALL_ROW_GROUPS = ON);

D. Defragment a columnstore index online


Does not apply to: SQL Server 2012 (11.x) and SQL Server 2014 (12.x).
Starting with SQL Server 2016 (13.x), REORGANIZE does more than compress delta rowgroups into the
columnstore. It also performs online defragmentation. First, it reduces the size of the columnstore by physically
removing deleted rows when 10% or more of the rows in a rowgroup have been deleted. Then, it combines
rowgroups together to form larger rowgroups that have up to the maximum of 1,048,576 rows per rowgroup.
All rowgroups that are changed get re-compressed.
NOTE
Starting with SQL Server 2016 (13.x), rebuilding a columnstore index is no longer necessary in most situations since
REORGANIZE physically removes deleted rows and merges rowgroups. The COMPRESS_ALL_ROW_GROUPS option forces
all OPEN or CLOSED delta rowgroups into the columnstore which previously could only be done with a rebuild.
REORGANIZE is online and occurs in the background so queries can continue as the operation happens.

-- Uses AdventureWorks
-- Defragment by physically removing rows that have been logically deleted from the table,
-- and merging rowgroups.
ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REORGANIZE;

E. Rebuild a clustered columnstore index offline


Applies to: SQL Server (Starting with SQL Server 2012 (11.x))

TIP
Starting with SQL Server 2016 (13.x) and in Azure SQL Database, we recommend using ALTER INDEX REORGANIZE instead
of ALTER INDEX REBUILD.

NOTE
In SQL Server 2012 (11.x) and SQL Server 2014 (12.x), REORGANIZE is only used to compress CLOSED rowgroups into the
columnstore. The only way to perform defragmentation operations and to force all delta rowgroups into the columnstore is
to rebuild the index.

This example shows how to rebuild a clustered columnstore index and force all delta rowgroups into the
columnstore. This first step prepares a table FactInternetSales2 with a clustered columnstore index and inserts
data from the first four columns.

-- Uses AdventureWorksDW

CREATE TABLE dbo.FactInternetSales2 (


ProductKey [int] NOT NULL,
OrderDateKey [int] NOT NULL,
DueDateKey [int] NOT NULL,
ShipDateKey [int] NOT NULL);

CREATE CLUSTERED COLUMNSTORE INDEX cci_FactInternetSales2


ON dbo.FactInternetSales2;

INSERT INTO dbo.FactInternetSales2


SELECT ProductKey, OrderDateKey, DueDateKey, ShipDateKey
FROM dbo.FactInternetSales;

SELECT * FROM sys.column_store_row_groups;

The results show there is one OPEN rowgroup, which means SQL Server will wait for more rows to be added
before it closes the rowgroup and moves the data to the columnstore. This next statement rebuilds the clustered
columnstore index, which forces all rows into the columnstore.

ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REBUILD;


SELECT * FROM sys.column_store_row_groups;
The results of the SELECT statement show the rowgroup is COMPRESSED, which means the column segments
of the rowgroup are now compressed and stored in the columnstore.
F. Rebuild a partition of a clustered columnstore index offline
Applies to: SQL Server (Starting with SQL Server 2012 (11.x))
To rebuild a partition of a large clustered columnstore index, use ALTER INDEX REBUILD with the partition
option. This example rebuilds partition 12. Starting with SQL Server 2016 (13.x), we recommend replacing
REBUILD with REORGANIZE.

ALTER INDEX cci_fact3


ON fact3
REBUILD PARTITION = 12;

G. Change a clustered columnstore index to use archival compression


Does not apply to: SQL Server 2012 (11.x)
You can choose to reduce the size of a clustered columnstore index even further by using the
COLUMNSTORE_ARCHIVE data compression option. This is practical for older data that you want to keep on
cheaper storage. We recommend only using this on data that is not accessed often, since decompression is
slower than with the normal COLUMNSTORE compression.
The following example rebuilds a clustered columnstore index to use archival compression, and then shows how
to remove the archival compression. The final result will use only columnstore compression.

--Prepare the example by creating a table with a clustered columnstore index.


CREATE TABLE SimpleTable (
ProductKey [int] NOT NULL,
OrderDateKey [int] NOT NULL,
DueDateKey [int] NOT NULL,
ShipDateKey [int] NOT NULL
);

CREATE CLUSTERED INDEX cci_SimpleTable ON SimpleTable (ProductKey);

CREATE CLUSTERED COLUMNSTORE INDEX cci_SimpleTable


ON SimpleTable
WITH (DROP_EXISTING = ON);

--Compress the table further by using archival compression.


ALTER INDEX cci_SimpleTable ON SimpleTable
REBUILD
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);

--Remove the archive compression and only use columnstore compression.


ALTER INDEX cci_SimpleTable ON SimpleTable
REBUILD
WITH (DATA_COMPRESSION = COLUMNSTORE);
GO

Examples: Rowstore indexes


A. Rebuilding an index
The following example rebuilds a single index on the Employee table in the AdventureWorks2012 database.

ALTER INDEX PK_Employee_EmployeeID ON HumanResources.Employee REBUILD;


B. Rebuilding all indexes on a table and specifying options
The following example specifies the keyword ALL . This rebuilds all indexes associated with the table
Production.Product in the AdventureWorks2012 database. Three options are specified.
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.

ALTER INDEX ALL ON Production.Product


REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON, STATISTICS_NORECOMPUTE = ON);

The following example adds the ONLINE option including the low priority lock option, and adds the row
compression option.
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.

ALTER INDEX ALL ON Production.Product


REBUILD WITH
(
FILLFACTOR = 80,
SORT_IN_TEMPDB = ON,
STATISTICS_NORECOMPUTE = ON,
ONLINE = ON ( WAIT_AT_LOW_PRIORITY ( MAX_DURATION = 4 MINUTES, ABORT_AFTER_WAIT = BLOCKERS ) ),
DATA_COMPRESSION = ROW
);

C. Reorganizing an index with LOB compaction


The following example reorganizes a single clustered index in the AdventureWorks2012 database. Because the
index contains a LOB data type in the leaf level, the statement also compacts all pages that contain the large
object data. Note that specifying the WITH (LOB_COMPACTION = ON) option is not required because the default
value is ON.

ALTER INDEX PK_ProductPhoto_ProductPhotoID ON Production.ProductPhoto REORGANIZE WITH (LOB_COMPACTION = ON);

D. Setting options on an index


The following example sets several options on the index AK_SalesOrderHeader_SalesOrderNumber in the
AdventureWorks2012 database.
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.

ALTER INDEX AK_SalesOrderHeader_SalesOrderNumber ON


Sales.SalesOrderHeader
SET (
STATISTICS_NORECOMPUTE = ON,
IGNORE_DUP_KEY = ON,
ALLOW_PAGE_LOCKS = ON
) ;
GO

E. Disabling an index
The following example disables a nonclustered index on the Employee table in the AdventureWorks2012
database.

ALTER INDEX IX_Employee_ManagerID ON HumanResources.Employee DISABLE;

F. Disabling constraints
The following example disables a PRIMARY KEY constraint by disabling the PRIMARY KEY index in the
AdventureWorks2012 database. The FOREIGN KEY constraint on the underlying table is automatically disabled
and a warning message is displayed.

ALTER INDEX PK_Department_DepartmentID ON HumanResources.Department DISABLE;

The result set returns this warning message.

Warning: Foreign key 'FK_EmployeeDepartmentHistory_Department_DepartmentID'


on table 'EmployeeDepartmentHistory' referencing table 'Department'
was disabled as a result of disabling the index 'PK_Department_DepartmentID'.

G. Enabling constraints
The following example enables the PRIMARY KEY and FOREIGN KEY constraints that were disabled in Example
F.
The PRIMARY KEY constraint is enabled by rebuilding the PRIMARY KEY index.

ALTER INDEX PK_Department_DepartmentID ON HumanResources.Department REBUILD;

The FOREIGN KEY constraint is then enabled.

ALTER TABLE HumanResources.EmployeeDepartmentHistory


CHECK CONSTRAINT FK_EmployeeDepartmentHistory_Department_DepartmentID;
GO

H. Rebuilding a partitioned index


The following example rebuilds a single partition, partition number 5, of the partitioned index
IX_TransactionHistory_TransactionDate in the AdventureWorks2012 database. Partition 5 is rebuilt online and
the 10-minute wait time for the low-priority lock applies separately to every lock acquired by the index rebuild
operation. If the lock cannot be obtained during this time to complete the index rebuild, the rebuild operation
statement is aborted.
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.

-- Verify the partitioned indexes.


SELECT *
FROM sys.dm_db_index_physical_stats (DB_ID(),OBJECT_ID(N'Production.TransactionHistory'), NULL , NULL, NULL);
GO
--Rebuild only partition 5.
ALTER INDEX IX_TransactionHistory_TransactionDate
ON Production.TransactionHistory
REBUILD Partition = 5
WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 10 minutes, ABORT_AFTER_WAIT = SELF)));
GO

I. Changing the compression setting of an index


The following example rebuilds an index on a nonpartitioned rowstore table.
ALTER INDEX IX_INDEX1
ON T1
REBUILD
WITH (DATA_COMPRESSION = PAGE);
GO

For additional data compression examples, see Data Compression.


J. Online resumable index rebuild
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
The following examples show how to use online resumable index rebuild.
1. Execute an online index rebuild as a resumable operation with MAXDOP=1.

ALTER INDEX test_idx on test_table REBUILD WITH (ONLINE=ON, MAXDOP=1, RESUMABLE=ON) ;

2. Executing the same command again (see above) after an index operation was paused automatically
resumes the index rebuild operation.
3. Execute an online index rebuild as a resumable operation with MAX_DURATION set to 240 minutes.

ALTER INDEX test_idx on test_table REBUILD WITH (ONLINE=ON, RESUMABLE=ON, MAX_DURATION=240) ;

4. Pause a running resumable online index rebuild.

ALTER INDEX test_idx on test_table PAUSE ;

5. Resume an online index rebuild that was executed as a resumable operation, specifying a new value for
MAXDOP set to 4.

ALTER INDEX test_idx on test_table RESUME WITH (MAXDOP=4) ;

6. Resume an online index rebuild operation for an online index rebuild that was executed as resumable. Set
MAXDOP to 2, set the execution time for the resumable operation to 240 minutes, and in case the index is
blocked on the lock, wait 10 minutes and after that kill all blockers.

ALTER INDEX test_idx on test_table


RESUME WITH (MAXDOP=2, MAX_DURATION= 240 MINUTES,
WAIT_AT_LOW_PRIORITY (MAX_DURATION=10, ABORT_AFTER_WAIT=BLOCKERS)) ;

7. Abort a resumable index rebuild operation that is running or paused.

ALTER INDEX test_idx on test_table ABORT ;

See Also
CREATE INDEX (Transact-SQL )
CREATE SPATIAL INDEX (Transact-SQL )
CREATE XML INDEX (Transact-SQL )
DROP INDEX (Transact-SQL )
Disable Indexes and Constraints
XML Indexes (SQL Server)
Perform Index Operations Online
Reorganize and Rebuild Indexes
sys.dm_db_index_physical_stats (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER INDEX (Selective XML Indexes)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies an existing selective XML index. The ALTER INDEX statement changes one or more of the following
items:
The list of indexed paths (FOR clause).
The list of namespaces (WITH XMLNAMESPACES clause).
The index options (WITH clause).
You cannot alter secondary selective XML indexes. For more information, see Create, Alter, and Drop
Secondary Selective XML Indexes.
Transact-SQL Syntax Conventions

Syntax
ALTER INDEX index_name
ON <table_object>
[WITH XMLNAMESPACES ( <xmlnamespace_list> )]
FOR ( <promoted_node_path_action_list> )
[WITH ( <index_options> )]

<table_object> ::=
{ [database_name. [schema_name ] . | schema_name. ] table_name }
<promoted_node_path_action_list> ::=
<promoted_node_path_action_item> [, <promoted_node_path_action_list>]

<promoted_node_path_action_item>::=
<add_node_path_item_action> | <remove_node_path_item_action>

<add_node_path_item_action> ::=
ADD <path_name> = <promoted_node_path_item>

<promoted_node_path_item>::=
<xquery_node_path_item> | <sql_values_node_path_item>

<remove_node_path_item_action> ::= REMOVE <path_name>

<path_name_or_typed_node_path>::=
<path_name> | <typed_node_path>

<typed_node_path> ::=
<node_path> [[AS XQUERY <xsd_type_ext>] | [AS SQL <sql_type>]]

<xquery_node_path_item> ::=
<node_path> [AS XQUERY <xsd_type_or_node_hint>] [SINGLETON]

<xsd_type_or_node_hint> ::=
[<xsd_type>] [MAXLENGTH(x)] | 'node()'

<sql_values_node_path_item> ::=
<node_path> AS SQL <sql_type> [SINGLETON]

<node_path> ::=
character_string_literal

<xsd_type_ext> ::=
character_string_literal

<sql_type> ::=
identifier

<path_name> ::=
identifier

<xmlnamespace_list> ::=
<xmlnamespace_item> [, <xmlnamespace_list>]

<xmlnamespace_item> ::=
<xmlnamespace_uri> AS <xmlnamespace_prefix>

<xmlnamespace_uri> ::= character_string_literal

<xmlnamespace_prefix> ::= identifier

<index_options> ::=
(
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
)

Arguments
index_name
Is the name of the existing index to alter.
<table_object>
Is the table that contains the XML column to index. Use one of the following formats:
database_name.schema_name.table_name

database_name..table_name

schema_name.table_name

table_name

[WITH XMLNAMESPACES ( <xmlnamespace_list> )]


Is the list of namespaces used by the paths to index. For information about the syntax of the WITH
XMLNAMESPACES clause, see WITH XMLNAMESPACES (Transact-SQL ).
FOR ( <promoted_node_path_action_list> )
Is the list of indexed paths to add or remove.
ADD a path. When you ADD a path, you use the same syntax that is used to create paths with the CREATE
SELECTIVE XML INDEX statement. For information about the paths that you can specify in the CREATE or
ALTER statement, see Specify Paths and Optimization Hints for Selective XML Indexes.
REMOVE a path. When you REMOVE a path, you provide the name that was given to the path when it
was created.
[WITH ( <index_options> )]
You can only specify <index_options> when you use ALTER INDEX without the FOR clause. When you use
ALTER INDEX to add or remove paths in the index, the index options are not valid arguments. For
information about the index options, see CREATE XML INDEX (Selective XML Indexes).

Remarks
IMPORTANT
When you run an ALTER INDEX statement, the selective XML index is always rebuilt. Be sure to consider the impact of this
process on server resources.

Security
Permissions
ALTER permission on the table or view is required to run ALTER INDEX.

Examples
The following example shows an ALTER INDEX statement. This statement adds the path '/a/b/m' to the XQuery
part of the index and deletes the path '/a/b/e' from the SQL part of the index created in the example in the topic
CREATE SELECTIVE XML INDEX (Transact-SQL ). The path to delete is identified by the name that was given to it
when it was created.

ALTER INDEX sxi_index
ON Tbl
FOR
(
ADD pathm = '/a/b/m' AS XQUERY 'node()',
REMOVE pathabe
);

The following example shows an ALTER INDEX statement that specifies index options. Index options are permitted
because the statement does not use a FOR clause to add or remove paths.

ALTER INDEX sxi_index
ON Tbl
WITH ( PAD_INDEX = ON );

See Also
Selective XML Indexes (SXI)
Create, Alter, and Drop Selective XML Indexes
Specify Paths and Optimization Hints for Selective XML Indexes
ALTER LOGIN (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a SQL Server login account.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

ALTER LOGIN login_name
{
<status_option>
| WITH <set_option> [ ,... ]
| <cryptographic_credential_option>
}
[;]

<status_option> ::=
ENABLE | DISABLE

<set_option> ::=
PASSWORD = 'password' | hashed_password HASHED
[
OLD_PASSWORD = 'oldpassword'
| <password_option> [<password_option> ]
]
| DEFAULT_DATABASE = database
| DEFAULT_LANGUAGE = language
| NAME = login_name
| CHECK_POLICY = { ON | OFF }
| CHECK_EXPIRATION = { ON | OFF }
| CREDENTIAL = credential_name
| NO CREDENTIAL

<password_option> ::=
MUST_CHANGE | UNLOCK

<cryptographic_credentials_option> ::=
ADD CREDENTIAL credential_name
| DROP CREDENTIAL credential_name
-- Syntax for Azure SQL Database and Azure SQL Data Warehouse

ALTER LOGIN login_name
{
<status_option>
| WITH <set_option> [ ,...n ]
}
[;]

<status_option> ::=
ENABLE | DISABLE

<set_option> ::=
PASSWORD ='password'
[
OLD_PASSWORD ='oldpassword'
]
| NAME = login_name

-- Syntax for Parallel Data Warehouse

ALTER LOGIN login_name
{
<status_option>
| WITH <set_option> [ ,... ]
}

<status_option> ::=ENABLE | DISABLE

<set_option> ::=
PASSWORD ='password'
[
OLD_PASSWORD ='oldpassword'
| <password_option> [<password_option> ]
]
| NAME = login_name
| CHECK_POLICY = { ON | OFF }
| CHECK_EXPIRATION = { ON | OFF }

<password_option> ::=
MUST_CHANGE | UNLOCK

Arguments
login_name
Specifies the name of the SQL Server login that is being changed. Domain logins must be enclosed in brackets in
the format [domain\user].
ENABLE | DISABLE
Enables or disables this login. Disabling a login does not affect the behavior of logins that are already connected.
(Use the KILL statement to terminate an existing connection.) Disabled logins retain their permissions and can
still be impersonated.
PASSWORD ='password'
Applies only to SQL Server logins. Specifies the password for the login that is being changed. Passwords are case-
sensitive.
Continuously active connections to SQL Database require reauthorization (performed by the Database Engine) at
least every 10 hours. The Database Engine attempts reauthorization using the originally submitted password and
no user input is required. For performance reasons, when a password is reset in SQL Database, the connection
will not be re-authenticated, even if the connection is reset due to connection pooling. This is different from the
behavior of on-premises SQL Server. If the password has been changed since the connection was initially
authorized, the connection must be terminated and a new connection made using the new password. A user with
the KILL DATABASE CONNECTION permission can explicitly terminate a connection to SQL Database by using
the KILL command. For more information, see KILL (Transact-SQL ).
PASSWORD =hashed_password
Applies to: SQL Server 2008 through SQL Server 2017.
Applies to the HASHED keyword only. Specifies the hashed value of the password for the login that is being
created.

IMPORTANT
When a login (or a contained database user) connects and is authenticated, the connection caches identity information
about the login. For a Windows Authentication login, this includes information about membership in Windows groups. The
identity of the login remains authenticated as long as the connection is maintained. To force changes in the identity, such as
a password reset or change in Windows group membership, the login must log off from the authentication authority
(Windows or SQL Server), and log in again. A member of the sysadmin fixed server role or any login with the ALTER ANY
CONNECTION permission can use the KILL command to end a connection and force a login to reconnect. SQL Server
Management Studio can reuse connection information when opening multiple connections to Object Explorer and Query
Editor windows. Close all connections to force reconnection.

HASHED
Applies to: SQL Server 2008 through SQL Server 2017.
Applies to SQL Server logins only. Specifies that the password entered after the PASSWORD argument is already
hashed. If this option is not selected, the password is hashed before being stored in the database. This option
should only be used for login synchronization between two servers. Do not use the HASHED option to routinely
change passwords.
OLD_PASSWORD ='oldpassword'
Applies only to SQL Server logins. The current password of the login to which a new password will be assigned.
Passwords are case-sensitive.
MUST_CHANGE
Applies to: SQL Server 2008 through SQL Server 2017, and Parallel Data Warehouse.
Applies only to SQL Server logins. If this option is included, SQL Server will prompt for an updated password the
first time the altered login is used.
DEFAULT_DATABASE = database
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies a default database to be assigned to the login.
DEFAULT_LANGUAGE = language
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies a default language to be assigned to the login. The default language for all SQL Database logins is
English and cannot be changed. The default language of the sa login on SQL Server on Linux is English, but it
can be changed.
NAME = login_name
The new name of the login that is being renamed. If this is a Windows login, the SID of the Windows principal
corresponding to the new name must match the SID associated with the login in SQL Server. The new name of a
SQL Server login cannot contain a backslash character (\).
CHECK_EXPIRATION = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, and Parallel Data Warehouse.
Applies only to SQL Server logins. Specifies whether password expiration policy should be enforced on this login.
The default value is OFF.
CHECK_POLICY = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, and Parallel Data Warehouse.
Applies only to SQL Server logins. Specifies that the Windows password policies of the computer on which SQL
Server is running should be enforced on this login. The default value is ON.
CREDENTIAL = credential_name
Applies to: SQL Server 2008 through SQL Server 2017.
The name of a credential to be mapped to a SQL Server login. The credential must already exist in the server. For
more information, see Credentials (Database Engine). A credential cannot be mapped to the sa login.
NO CREDENTIAL
Applies to: SQL Server 2008 through SQL Server 2017.
Removes any existing mapping of the login to a server credential. For more information, see Credentials
(Database Engine).
UNLOCK
Applies to: SQL Server 2008 through SQL Server 2017, and Parallel Data Warehouse.
Applies only to SQL Server logins. Specifies that a login that is locked out should be unlocked.
ADD CREDENTIAL
Applies to: SQL Server 2008 through SQL Server 2017.
Adds an Extensible Key Management (EKM) provider credential to the login. For more information, see Extensible
Key Management (EKM).
DROP CREDENTIAL
Applies to: SQL Server 2008 through SQL Server 2017.
Removes an Extensible Key Management (EKM) provider credential from the login. For more information, see
Extensible Key Management (EKM).

Remarks
When CHECK_POLICY is set to ON, the HASHED argument cannot be used.
When CHECK_POLICY is changed to ON, the following behavior occurs:
The password history is initialized with the value of the current password hash.
When CHECK_POLICY is changed to OFF, the following behavior occurs:
CHECK_EXPIRATION is also set to OFF.
The password history is cleared.
The value of lockout_time is reset.
If MUST_CHANGE is specified, CHECK_EXPIRATION and CHECK_POLICY must be set to ON. Otherwise, the
statement will fail.
If CHECK_POLICY is set to OFF, CHECK_EXPIRATION cannot be set to ON. An ALTER LOGIN statement that
has this combination of options will fail.
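For example, the following statement satisfies these rules by enabling both policy options together with
MUST_CHANGE (a sketch using the Mary5 login from the examples below):

ALTER LOGIN Mary5 WITH PASSWORD = '<enterStrongPasswordHere>' MUST_CHANGE,
CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
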
You cannot use ALTER LOGIN with the DISABLE argument to deny access to a Windows group. For example,
ALTER LOGIN [domain\group] DISABLE will return the following error message:
"Msg 15151, Level 16, State 1, Line 1
"Cannot alter the login 'Domain\Group', because it does not exist or you do not have permission."
This is by design.
In SQL Database, login data required to authenticate a connection and server-level firewall rules are temporarily
cached in each database. This cache is periodically refreshed. To force a refresh of the authentication cache and
make sure that a database has the latest version of the logins table, execute DBCC FLUSHAUTHCACHE (Transact-
SQL ).
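For example, run the following in the SQL Database user database whose authentication cache you want to
refresh:

DBCC FLUSHAUTHCACHE;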

Permissions
Requires ALTER ANY LOGIN permission.
If the CREDENTIAL option is used, also requires ALTER ANY CREDENTIAL permission.
If the login that is being changed is a member of the sysadmin fixed server role or a grantee of CONTROL
SERVER permission, also requires CONTROL SERVER permission when making the following changes:
Resetting the password without supplying the old password.
Enabling MUST_CHANGE, CHECK_POLICY, or CHECK_EXPIRATION.
Changing the login name.
Enabling or disabling the login.
Mapping the login to a different credential.
A principal can change the password, default language, and default database for its own login.

Examples
A. Enabling a disabled login
The following example enables the login Mary5 .

ALTER LOGIN Mary5 ENABLE;

B. Changing the password of a login


The following example changes the password of login Mary5 to a strong password.

ALTER LOGIN Mary5 WITH PASSWORD = '<enterStrongPasswordHere>';

C. Changing the name of a login


The following example changes the name of login Mary5 to John2 .

ALTER LOGIN Mary5 WITH NAME = John2;

D. Mapping a login to a credential


The following example maps the login John2 to the credential Custodian04 .

ALTER LOGIN John2 WITH CREDENTIAL = Custodian04;

E. Mapping a login to an Extensible Key Management credential


The following example maps the login Mary5 to the EKM credential EKMProvider1 .
Applies to: SQL Server 2008 through SQL Server 2017.

ALTER LOGIN Mary5
ADD CREDENTIAL EKMProvider1;
GO

F. Unlocking a login
To unlock a SQL Server login, execute the following statement, replacing **** with the desired account password.

ALTER LOGIN [Mary5] WITH PASSWORD = '****' UNLOCK;
GO

To unlock a login without changing the password, turn the check policy off and then on again.

ALTER LOGIN [Mary5] WITH CHECK_POLICY = OFF;
ALTER LOGIN [Mary5] WITH CHECK_POLICY = ON;
GO

G. Changing the password of a login using HASHED


The following example changes the password of the TestUser login to an already hashed value.
Applies to: SQL Server 2008 through SQL Server 2017.

ALTER LOGIN TestUser WITH
PASSWORD = 0x01000CF35567C60BFB41EBDE4CF700A985A13D773D6B45B90900 HASHED;
GO

See Also
Credentials (Database Engine)
CREATE LOGIN (Transact-SQL)
DROP LOGIN (Transact-SQL)
CREATE CREDENTIAL (Transact-SQL)
EVENTDATA (Transact-SQL)
Extensible Key Management (EKM)
ALTER MASTER KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a database master key.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

ALTER MASTER KEY <alter_option>

<alter_option> ::=
<regenerate_option> | <encryption_option>

<regenerate_option> ::=
[ FORCE ] REGENERATE WITH ENCRYPTION BY PASSWORD = 'password'

<encryption_option> ::=
ADD ENCRYPTION BY { SERVICE MASTER KEY | PASSWORD = 'password' }
|
DROP ENCRYPTION BY { SERVICE MASTER KEY | PASSWORD = 'password' }

-- Syntax for Azure SQL Database


-- Note: DROP ENCRYPTION BY SERVICE MASTER KEY is not supported on Azure SQL Database.

ALTER MASTER KEY <alter_option>

<alter_option> ::=
<regenerate_option> | <encryption_option>

<regenerate_option> ::=
[ FORCE ] REGENERATE WITH ENCRYPTION BY PASSWORD = 'password'

<encryption_option> ::=
ADD ENCRYPTION BY { SERVICE MASTER KEY | PASSWORD = 'password' }
|
DROP ENCRYPTION BY { PASSWORD = 'password' }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

ALTER MASTER KEY <alter_option>

<alter_option> ::=
<regenerate_option> | <encryption_option>

<regenerate_option> ::=
[ FORCE ] REGENERATE WITH ENCRYPTION BY PASSWORD = 'password'

<encryption_option> ::=
ADD ENCRYPTION BY SERVICE MASTER KEY
|
DROP ENCRYPTION BY SERVICE MASTER KEY
Arguments
PASSWORD ='password'
Specifies a password with which to encrypt or decrypt the database master key. password must meet the
Windows password policy requirements of the computer that is running the instance of SQL Server.

Remarks
The REGENERATE option re-creates the database master key and all the keys it protects. The keys are first
decrypted with the old master key, and then encrypted with the new master key. This resource-intensive operation
should be scheduled during a period of low demand, unless the master key has been compromised.
SQL Server 2012 (11.x) uses the AES encryption algorithm to protect the service master key (SMK) and the
database master key (DMK). AES is a newer encryption algorithm than the 3DES algorithm used in earlier
versions. After upgrading an instance of the Database Engine to SQL Server 2012 (11.x), the SMK and DMK
should be regenerated in order to upgrade the master keys to AES. For more information about regenerating the SMK, see
ALTER SERVICE MASTER KEY (Transact-SQL ).
When the FORCE option is used, key regeneration will continue even if the master key is unavailable or the
server cannot decrypt all the encrypted private keys. If the master key cannot be opened, use the RESTORE
MASTER KEY statement to restore the master key from a backup. Use the FORCE option only if the master key is
irretrievable or if decryption fails. Information that is encrypted only by an irretrievable key will be lost.
The DROP ENCRYPTION BY SERVICE MASTER KEY option removes the encryption of the database master key
by the service master key.
ADD ENCRYPTION BY SERVICE MASTER KEY causes a copy of the master key to be encrypted using the
service master key and stored in both the current database and in master.
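A minimal sketch of the two encryption options, assuming the master key is also protected by at least one
password (required before the service master key encryption can be dropped):

ALTER MASTER KEY DROP ENCRYPTION BY SERVICE MASTER KEY;
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
GO

After the DROP statement, the master key must be explicitly opened with OPEN MASTER KEY before it can be
used.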

Permissions
Requires CONTROL permission on the database. If the database master key has been encrypted with a password,
knowledge of that password is also required.

Examples
The following example creates a new database master key for AdventureWorks2012 and re-encrypts the keys below
it in the encryption hierarchy.

USE AdventureWorks2012;
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'dsjdkflJ435907NnmM#sX003';
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


The following example creates a new database master key for AdventureWorksPDW2012 and re-encrypts the keys
below it in the encryption hierarchy.

USE master;
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'dsjdkflJ435907NnmM#sX003';
GO

See Also
CREATE MASTER KEY (Transact-SQL)
OPEN MASTER KEY (Transact-SQL)
CLOSE MASTER KEY (Transact-SQL)
BACKUP MASTER KEY (Transact-SQL)
RESTORE MASTER KEY (Transact-SQL)
DROP MASTER KEY (Transact-SQL)
Encryption Hierarchy
CREATE DATABASE (SQL Server Transact-SQL)
Database Detach and Attach (SQL Server)
ALTER MESSAGE TYPE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a message type.
Transact-SQL Syntax Conventions

Syntax
ALTER MESSAGE TYPE message_type_name
VALIDATION =
{ NONE
| EMPTY
| WELL_FORMED_XML
| VALID_XML WITH SCHEMA COLLECTION schema_collection_name }
[ ; ]

Arguments
message_type_name
The name of the message type to change. Server, database, and schema names cannot be specified.
VALIDATION
Specifies how Service Broker validates the message body for messages of this type.
NONE
No validation is performed. The message body might contain any data, or might be NULL.
EMPTY
The message body must be NULL.
WELL_FORMED_XML
The message body must contain well-formed XML.
VALID_XML WITH SCHEMA COLLECTION schema_collection_name
The message body must contain XML that complies with a schema in the specified schema collection. The
schema_collection_name must be the name of an existing XML schema collection.
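For example, the following sketch ties a message type to a schema collection; the name ExpenseSchema is
hypothetical and must refer to an existing XML schema collection:

ALTER MESSAGE TYPE
[//Adventure-Works.com/Expenses/SubmitExpense]
VALIDATION = VALID_XML WITH SCHEMA COLLECTION ExpenseSchema;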

Remarks
Changing the validation of a message type does not affect messages that have already been delivered to a queue.
To change the AUTHORIZATION for a message type, use the ALTER AUTHORIZATION statement.
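For example, ownership of a message type can be transferred as follows (a sketch; the principal NewOwner is
hypothetical):

ALTER AUTHORIZATION ON MESSAGE TYPE::[//Adventure-Works.com/Expenses/SubmitExpense]
TO NewOwner;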

Permissions
Permission for altering a message type defaults to the owner of the message type, members of the db_ddladmin
or db_owner fixed database roles, and members of the sysadmin fixed server role.
When the ALTER MESSAGE TYPE statement specifies a schema collection, the user executing the statement must
have REFERENCES permission on the schema collection specified.

Examples
The following example changes the message type //Adventure-Works.com/Expenses/SubmitExpense to require that
the message body contain a well-formed XML document.

ALTER MESSAGE TYPE
[//Adventure-Works.com/Expenses/SubmitExpense]
VALIDATION = WELL_FORMED_XML;

See Also
ALTER AUTHORIZATION (Transact-SQL)
CREATE MESSAGE TYPE (Transact-SQL)
DROP MESSAGE TYPE (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER PARTITION FUNCTION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a partition function by splitting or merging its boundary values. By executing ALTER PARTITION
FUNCTION, one partition of any table or index that uses the partition function can be split into two partitions, or
two partitions can be merged into one.
Caution

More than one table or index can use the same partition function. ALTER PARTITION FUNCTION affects all of
them in a single transaction.
Transact-SQL Syntax Conventions

Syntax
ALTER PARTITION FUNCTION partition_function_name()
{
SPLIT RANGE ( boundary_value )
| MERGE RANGE ( boundary_value )
} [ ; ]

Arguments
partition_function_name
Is the name of the partition function to be modified.
SPLIT RANGE ( boundary_value )
Adds one partition to the partition function. boundary_value determines the range of the new partition, and must
differ from the existing boundary ranges of the partition function. Based on boundary_value, the Database Engine
splits one of the existing ranges into two. Of these two, the one where the new boundary_value resides is
considered the new partition.
A filegroup must exist online and be marked by the partition scheme that uses the partition function as NEXT
USED to hold the new partition. Filegroups are allocated to partitions in a CREATE PARTITION SCHEME
statement. If a CREATE PARTITION SCHEME statement allocates more filegroups than necessary (fewer
partitions are created in the CREATE PARTITION FUNCTION statement than filegroups to hold them), then there
are unassigned filegroups, and one of them is marked NEXT USED by the partition scheme. This filegroup will
hold the new partition. If there are no filegroups marked NEXT USED by the partition scheme, you must use
ALTER PARTITION SCHEME to either add a filegroup, or designate an existing one, to hold the new partition. A
filegroup that already holds partitions can be designated to hold additional partitions. Because a partition function
can participate in more than one partition scheme, all the partition schemes that use the partition function to
which you are adding partitions must have a NEXT USED filegroup. Otherwise, ALTER PARTITION FUNCTION
fails with an error that displays the partition scheme or schemes that lack a NEXT USED filegroup.
If you create all the partitions in the same filegroup, that filegroup is initially assigned to be the NEXT USED
filegroup automatically. However, after a split operation is performed, there is no longer a designated NEXT USED
filegroup. You must explicitly assign the filegroup to be the NEXT USED filegroup by using ALTER PARTITION
SCHEME, or a subsequent split operation will fail.
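As a sketch of that workflow, assuming a partition scheme MyRangePS1 built on the function myRangePF1 and an
unassigned filegroup test5fg (names borrowed from the examples in this reference):

ALTER PARTITION SCHEME MyRangePS1
NEXT USED test5fg;
GO
ALTER PARTITION FUNCTION myRangePF1 ()
SPLIT RANGE (500);
GO
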
NOTE
Limitations with columnstore index: Only empty partitions can be split when a columnstore index exists on the table. You
will need to drop or disable the columnstore index before performing this operation.

MERGE [ RANGE ( boundary_value) ]


Drops a partition and merges any values that exist in the partition into one of the remaining partitions. RANGE
(boundary_value) must be an existing boundary value, into which the values from the dropped partition are
merged. The filegroup that originally held boundary_value is removed from the partition scheme unless it is used
by a remaining partition, or is marked with the NEXT USED property. The merged partition resides in the
filegroup that originally did not hold boundary_value. boundary_value is a constant expression that can reference
variables (including user-defined type variables) or functions (including user-defined functions). It cannot
reference a Transact-SQL expression. boundary_value must either match or be implicitly convertible to the data
type of its corresponding partitioning column, and cannot be truncated during implicit conversion in a way that
the size and scale of the value does not match that of its corresponding input_parameter_type.

NOTE
Limitations with columnstore index: Two nonempty partitions containing a columnstore index cannot be merged. You will
need to drop or disable the columnstore index before performing this operation.

Best Practices
Always keep empty partitions at both ends of the partition range to guarantee that the partition split (before
loading new data) and partition merge (after unloading old data) do not incur any data movement. Avoid splitting
or merging populated partitions. This can be extremely inefficient, as this may cause as much as four times more
log generation, and may also cause severe locking.
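A sliding-window sketch of this practice, assuming the function myRangePF1 currently has empty partitions at
both ends, a NEXT USED filegroup has been designated, and the boundary values shown are illustrative:

-- Before loading new data: split the empty partition at the leading edge.
ALTER PARTITION FUNCTION myRangePF1 ()
SPLIT RANGE (2000);
GO
-- After unloading old data: merge the emptied partition at the trailing edge.
ALTER PARTITION FUNCTION myRangePF1 ()
MERGE RANGE (1);
GO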

Limitations and Restrictions


ALTER PARTITION FUNCTION repartitions any tables and indexes that use the function in a single atomic
operation. However, this operation occurs offline, and depending on the extent of repartitioning, may be resource-
intensive.
ALTER PARTITION FUNCTION can only be used for splitting one partition into two, or merging two partitions
into one. To change the way a table is otherwise partitioned (for example, from 10 partitions to 5 partitions), you
can exercise any of the following options. Depending on the configuration of your system, these options can vary
in resource consumption:
Create a new partitioned table with the desired partition function, and then insert the data from the old
table into the new table by using an INSERT INTO...SELECT FROM statement.
Create a partitioned clustered index on a heap.

NOTE
Dropping a partitioned clustered index results in a partitioned heap.

Drop and rebuild an existing partitioned index by using the Transact-SQL CREATE INDEX statement with
the DROP EXISTING = ON clause.
Perform a sequence of ALTER PARTITION FUNCTION statements.
All filegroups that are affected by ALTER PARTITION FUNCTION must be online.
ALTER PARTITION FUNCTION fails when there is a disabled clustered index on any tables that use the
partition function.
SQL Server does not provide replication support for modifying a partition function. Changes to a partition
function in the publication database must be manually applied in the subscription database.

Permissions
Any one of the following permissions can be used to execute ALTER PARTITION FUNCTION:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition function was created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition function was created.

Examples
A. Splitting a partition of a partitioned table or index into two partitions
The following example creates a partition function to partition a table or index into four partitions.
ALTER PARTITION FUNCTION splits one of the partitions into two to create a total of five partitions.

IF EXISTS (SELECT * FROM sys.partition_functions
WHERE name = 'myRangePF1')
DROP PARTITION FUNCTION myRangePF1;
GO
CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES ( 1, 100, 1000 );
GO
--Split the partition between boundary_values 100 and 1000
--to create two partitions between boundary_values 100 and 500
--and between boundary_values 500 and 1000.
ALTER PARTITION FUNCTION myRangePF1 ()
SPLIT RANGE (500);

B. Merging two partitions of a partitioned table into one partition


The following example creates the same partition function as above, and then merges two of the partitions into
one partition, for a total of three partitions.

IF EXISTS (SELECT * FROM sys.partition_functions
WHERE name = 'myRangePF1')
DROP PARTITION FUNCTION myRangePF1;
GO
CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES ( 1, 100, 1000 );
GO
--Merge the partitions between boundary_values 1 and 100
--and between boundary_values 100 and 1000 to create one partition
--between boundary_values 1 and 1000.
ALTER PARTITION FUNCTION myRangePF1 ()
MERGE RANGE (100);

See Also
Partitioned Tables and Indexes
CREATE PARTITION FUNCTION (Transact-SQL)
DROP PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
ALTER PARTITION SCHEME (Transact-SQL)
DROP PARTITION SCHEME (Transact-SQL)
CREATE INDEX (Transact-SQL)
ALTER INDEX (Transact-SQL)
CREATE TABLE (Transact-SQL)
sys.partition_functions (Transact-SQL)
sys.partition_parameters (Transact-SQL)
sys.partition_range_values (Transact-SQL)
sys.partitions (Transact-SQL)
sys.tables (Transact-SQL)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
ALTER PARTITION SCHEME (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds a filegroup to a partition scheme or alters the designation of the NEXT USED filegroup for the partition
scheme.

NOTE
In Azure SQL Database only primary filegroups are supported.

Transact-SQL Syntax Conventions

Syntax
ALTER PARTITION SCHEME partition_scheme_name
NEXT USED [ filegroup_name ] [ ; ]

Arguments
partition_scheme_name
Is the name of the partition scheme to be altered.
filegroup_name
Specifies the filegroup to be marked by the partition scheme as NEXT USED. This means the filegroup will accept
a new partition that is created by using an ALTER PARTITION FUNCTION statement.
In a partition scheme, only one filegroup can be designated NEXT USED. A filegroup that is not empty can be
specified. If filegroup_name is specified and there currently is no filegroup marked NEXT USED, filegroup_name
is marked NEXT USED. If filegroup_name is specified, and a filegroup with the NEXT USED property already
exists, the NEXT USED property transfers from the existing filegroup to filegroup_name.
If filegroup_name is not specified and a filegroup with the NEXT USED property already exists, that filegroup
loses its NEXT USED state so that there are no NEXT USED filegroups in partition_scheme_name.
If filegroup_name is not specified, and there are no filegroups marked NEXT USED, ALTER PARTITION SCHEME
returns a warning.
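For example, issuing the statement without a filegroup name clears any existing NEXT USED designation (a
sketch using the MyRangePS1 scheme from the example below):

ALTER PARTITION SCHEME MyRangePS1
NEXT USED;
GO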

Remarks
Any filegroup affected by ALTER PARTITION SCHEME must be online.

Permissions
The following permissions can be used to execute ALTER PARTITION SCHEME:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition scheme was created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition scheme was created.

Examples
The following example assumes the partition scheme MyRangePS1 and the filegroup test5fg exist in the current
database.

ALTER PARTITION SCHEME MyRangePS1
NEXT USED test5fg;

Filegroup test5fg will receive any additional partition of a partitioned table or index as a result of an ALTER
PARTITION FUNCTION statement.

See Also
CREATE PARTITION SCHEME (Transact-SQL)
DROP PARTITION SCHEME (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
ALTER PARTITION FUNCTION (Transact-SQL)
DROP PARTITION FUNCTION (Transact-SQL)
CREATE TABLE (Transact-SQL)
CREATE INDEX (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.partition_schemes (Transact-SQL)
sys.data_spaces (Transact-SQL)
sys.destination_data_spaces (Transact-SQL)
sys.partitions (Transact-SQL)
sys.tables (Transact-SQL)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
ALTER PROCEDURE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies a previously created procedure that was created by executing the CREATE PROCEDURE statement in
SQL Server.
Transact-SQL Syntax Conventions (Transact-SQL)

Syntax
-- Syntax for SQL Server and Azure SQL Database

ALTER { PROC | PROCEDURE } [schema_name.] procedure_name [ ; number ]
[ { @parameter [ type_schema_name. ] data_type }
[ VARYING ] [ = default ] [ OUT | OUTPUT ] [READONLY]
] [ ,...n ]
[ WITH <procedure_option> [ ,...n ] ]
[ FOR REPLICATION ]
AS { [ BEGIN ] sql_statement [;] [ ...n ] [ END ] }
[;]

<procedure_option> ::=
[ ENCRYPTION ]
[ RECOMPILE ]
[ EXECUTE AS Clause ]

-- Syntax for SQL Server CLR Stored Procedure

ALTER { PROC | PROCEDURE } [schema_name.] procedure_name [ ; number ]
[ { @parameter [ type_schema_name. ] data_type }
[ = default ] [ OUT | OUTPUT ] [READONLY]
] [ ,...n ]
[ WITH EXECUTE AS Clause ]
AS { EXTERNAL NAME assembly_name.class_name.method_name }
[;]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

ALTER { PROC | PROCEDURE } [schema_name.] procedure_name
[ { @parameter data_type } [ = default ] ] [ ,...n ]
AS { [ BEGIN ] sql_statement [ ; ] [ ,...n ] [ END ] }
[;]

Arguments
schema_name
The name of the schema to which the procedure belongs.
procedure_name
The name of the procedure to change. Procedure names must comply with the rules for identifiers.
; number
An existing optional integer that is used to group procedures of the same name so that they can be dropped
together by using one DROP PROCEDURE statement.

NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature.

@parameter
A parameter in the procedure. Up to 2,100 parameters can be specified.
[ type_schema_name. ] data_type
Is the data type of the parameter and the schema it belongs to.
For information about data type restrictions, see CREATE PROCEDURE (Transact-SQL ).
VARYING
Specifies the result set supported as an output parameter. This parameter is constructed dynamically by the stored
procedure and its contents can vary. Applies only to cursor parameters. This option is not valid for CLR
procedures.
default
Is a default value for the parameter.
OUT | OUTPUT
Indicates that the parameter is a return parameter.
READONLY
Indicates that the parameter cannot be updated or modified within the body of the procedure. If the parameter
type is a table-value type, READONLY must be specified.
RECOMPILE
Indicates that the Database Engine does not cache a plan for this procedure and the procedure is recompiled at run
time.
ENCRYPTION
Applies to: SQL Server ( SQL Server 2008 through SQL Server 2017) and Azure SQL Database.
Indicates that the Database Engine will convert the original text of the ALTER PROCEDURE statement to an
obfuscated format. The output of the obfuscation is not directly visible in any of the catalog views in SQL Server.
Users that have no access to system tables or database files cannot retrieve the obfuscated text. However, the text
will be available to privileged users that can either access system tables over the DAC port or directly access
database files. Also, users that can attach a debugger to the server process can retrieve the original procedure from
memory at runtime. For more information about accessing system metadata, see Metadata Visibility
Configuration.
Procedures created with this option cannot be published as part of SQL Server replication.
This option cannot be specified for common language runtime (CLR ) stored procedures.

NOTE
During an upgrade, the Database Engine uses the obfuscated comments stored in sys.sql_modules to re-create procedures.

EXECUTE AS
Specifies the security context under which to execute the stored procedure after it is accessed.
For more information, see EXECUTE AS Clause (Transact-SQL ).
FOR REPLICATION
Specifies that stored procedures that are created for replication cannot be executed on the Subscriber. A stored
procedure created with the FOR REPLICATION option is used as a stored procedure filter and only executed
during replication. Parameters cannot be declared if FOR REPLICATION is specified. This option is not valid for
CLR procedures. The RECOMPILE option is ignored for procedures created with FOR REPLICATION.

NOTE
This option is not available in a contained database.

{ [ BEGIN ] sql_statement [;] [ ...n ] [ END ] }


One or more Transact-SQL statements comprising the body of the procedure. You can use the optional BEGIN and
END keywords to enclose the statements. For more information, see the Best Practices, General Remarks, and
Limitations and Restrictions sections in CREATE PROCEDURE (Transact-SQL ).
EXTERNAL NAME assembly_name.class_name.method_name
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the method of a .NET Framework assembly for a CLR stored procedure to reference. class_name must be
a valid SQL Server identifier and must exist as a class in the assembly. If the class has a namespace-qualified name
that uses a period (.) to separate namespace parts, the class name must be delimited by using brackets ([]) or
quotation marks (""). The specified method must be a static method of the class.
By default, SQL Server cannot execute CLR code. You can create, modify, and drop database objects that reference
common language runtime modules; however, you cannot execute these references in SQL Server until you enable
the clr enabled option. To enable the option, use sp_configure.
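For example, CLR execution can be enabled as follows:

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO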

NOTE
CLR procedures are not supported in a contained database.

General Remarks
Transact-SQL stored procedures cannot be modified to be CLR stored procedures and vice versa.
ALTER PROCEDURE does not change permissions and does not affect any dependent stored procedures or
triggers. However, the current session settings for QUOTED_IDENTIFIER and ANSI_NULLS are included in the
stored procedure when it is modified. If the settings are different from those in effect when stored procedure was
originally created, the behavior of the stored procedure may change.
If a previous procedure definition was created using WITH ENCRYPTION or WITH RECOMPILE, these options
are enabled only if they are included in ALTER PROCEDURE.
For more information about stored procedures, see CREATE PROCEDURE (Transact-SQL ).
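As a sketch of this behavior, explicitly set the session options you want recorded before issuing ALTER
PROCEDURE (dbo.MyProc is a hypothetical procedure name):

SET QUOTED_IDENTIFIER ON;
SET ANSI_NULLS ON;
GO
-- These session settings are saved with the procedure when it is modified.
ALTER PROCEDURE dbo.MyProc
AS
SET NOCOUNT ON;
GO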

Security
Permissions
Requires ALTER permission on the procedure or requires membership in the db_ddladmin fixed database role.

Examples
The following example creates the uspVendorAllInfo stored procedure. This procedure returns the names of all the
vendors that supply Adventure Works Cycles, the products they supply, their credit ratings, and their availability.
After this procedure is created, it is then modified to return a different result set.

IF OBJECT_ID ( 'Purchasing.uspVendorAllInfo', 'P' ) IS NOT NULL
DROP PROCEDURE Purchasing.uspVendorAllInfo;
GO
CREATE PROCEDURE Purchasing.uspVendorAllInfo
WITH EXECUTE AS CALLER
AS
SET NOCOUNT ON;
SELECT v.Name AS Vendor, p.Name AS 'Product name',
v.CreditRating AS 'Rating',
v.ActiveFlag AS Availability
FROM Purchasing.Vendor v
INNER JOIN Purchasing.ProductVendor pv
ON v.BusinessEntityID = pv.BusinessEntityID
INNER JOIN Production.Product p
ON pv.ProductID = p.ProductID
ORDER BY v.Name ASC;
GO

The following example alters the uspVendorAllInfo stored procedure. It removes the EXECUTE AS CALLER clause
and modifies the body of the procedure to return only those vendors that supply the specified product. The LEFT
and CASE functions customize the appearance of the result set.

USE AdventureWorks2012;
GO
ALTER PROCEDURE Purchasing.uspVendorAllInfo
@Product varchar(25)
AS
SET NOCOUNT ON;
SELECT LEFT(v.Name, 25) AS Vendor, LEFT(p.Name, 25) AS 'Product name',
'Rating' = CASE v.CreditRating
WHEN 1 THEN 'Superior'
WHEN 2 THEN 'Excellent'
WHEN 3 THEN 'Above average'
WHEN 4 THEN 'Average'
WHEN 5 THEN 'Below average'
ELSE 'No rating'
END
, Availability = CASE v.ActiveFlag
WHEN 1 THEN 'Yes'
ELSE 'No'
END
FROM Purchasing.Vendor AS v
INNER JOIN Purchasing.ProductVendor AS pv
ON v.BusinessEntityID = pv.BusinessEntityID
INNER JOIN Production.Product AS p
ON pv.ProductID = p.ProductID
WHERE p.Name LIKE @Product
ORDER BY v.Name ASC;
GO

Here is the result set.


Vendor Product name Rating Availability
-------------------- ------------- ------- ------------
Proseware, Inc. LL Crankarm Average No
Vision Cycles, Inc. LL Crankarm Superior Yes
(2 row(s) affected)

See Also
CREATE PROCEDURE (Transact-SQL)
DROP PROCEDURE (Transact-SQL)
EXECUTE (Transact-SQL)
EXECUTE AS (Transact-SQL)
EVENTDATA (Transact-SQL)
Stored Procedures (Database Engine)
sys.procedures (Transact-SQL)
ALTER QUEUE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a queue.
Transact-SQL Syntax Conventions

Syntax
ALTER QUEUE <object>
<queue_settings>
| <queue_action>
[ ; ]

<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
queue_name
}

<queue_settings> ::=
WITH
[ STATUS = { ON | OFF } [ , ] ]
[ RETENTION = { ON | OFF } [ , ] ]
[ ACTIVATION (
{ [ STATUS = { ON | OFF } [ , ] ]
[ PROCEDURE_NAME = <procedure> [ , ] ]
[ MAX_QUEUE_READERS = max_readers [ , ] ]
[ EXECUTE AS { SELF | 'user_name' | OWNER } ]
| DROP }
) [ , ]]
[ POISON_MESSAGE_HANDLING (
STATUS = { ON | OFF } )
]

<queue_action> ::=
REBUILD [ WITH <queue_rebuild_options> ]
| REORGANIZE [ WITH (LOB_COMPACTION = { ON | OFF } ) ]
| MOVE TO { file_group | "default" }

<procedure> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
stored_procedure_name
}

<queue_rebuild_options> ::=
{
( MAXDOP = max_degree_of_parallelism )
}

Arguments
database_name (object)
Is the name of the database that contains the queue to be changed. When no database_name is provided, this
defaults to the current database.
schema_name (object)
Is the name of the schema to which the new queue belongs. When no schema_name is provided, this defaults to
the default schema for the current user.
queue_name
Is the name of the queue to be changed.
STATUS (Queue)
Specifies whether the queue is available (ON ) or unavailable (OFF ). When the queue is unavailable, no messages
can be added to the queue or removed from the queue.
RETENTION
Specifies the retention setting for the queue. If RETENTION = ON, all messages sent or received on conversations
using this queue are retained in the queue until the conversations have ended. This allows you to retain messages
for auditing purposes, or to perform compensating transactions if an error occurs.

NOTE
Setting RETENTION = ON can reduce performance. This setting should only be used if required to meet the service level
agreement for the application.

ACTIVATION
Specifies information about the stored procedure that is activated to process messages that arrive in this queue.
STATUS (Activation)
Specifies whether or not the queue activates the stored procedure. When STATUS = ON, the queue starts the
stored procedure specified with PROCEDURE_NAME when the number of procedures currently running is less
than MAX_QUEUE_READERS and when messages arrive on the queue faster than the stored procedures receive
messages. When STATUS = OFF, the queue does not activate the stored procedure.
REBUILD [ WITH <queue_rebuild_options> ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Rebuilds all indexes on the queue internal table. Use this capability when you are experiencing fragmentation
problems due to high load. MAXDOP is the only supported queue rebuild option. REBUILD is always an offline
operation.
REORGANIZE [ WITH ( LOB_COMPACTION = { ON | OFF } ) ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Reorganize all indexes on the queue internal table.
Unlike REORGANIZE on user tables, REORGANIZE on a queue is always performed as an offline operation
because page level locks are explicitly disabled on queues.

TIP
For general guidance regarding index fragmentation, when fragmentation is between 5% and 30%, reorganize the index.
When fragmentation is above 30%, rebuild the index. However, these numbers are only for general guidance as a starting
point for your environment. To determine the amount of index fragmentation, use sys.dm_db_index_physical_stats (Transact-
SQL) - see example G in that article for examples.
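A sketch of how to measure fragmentation for a queue's internal table (using the ExpenseQueue queue from the
examples below; the internal table is looked up through sys.internal_tables):

DECLARE @queue_table int;
SELECT @queue_table = object_id
FROM sys.internal_tables
WHERE parent_object_id = OBJECT_ID(N'dbo.ExpenseQueue');

SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), @queue_table, NULL, NULL, 'SAMPLED');
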

MOVE TO { file_group | "default" }


Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Moves the queue internal table (with its indexes) to a user-specified filegroup. The new filegroup must not be read-
only.
PROCEDURE_NAME = <procedure>
Specifies the name of the stored procedure to activate when the queue contains messages to be processed. This
value must be a SQL Server identifier.
database_name (procedure)
Is the name of the database that contains the stored procedure.
schema_name (procedure)
Is the name of the schema that owns the stored procedure.
stored_procedure_name
Is the name of the stored procedure.
MAX_QUEUE_READERS = max_readers
Specifies the maximum number of instances of the activation stored procedure that the queue starts
simultaneously. The value of max_readers must be a number between 0 and 32767.
EXECUTE AS
Specifies the SQL Server database user account under which the activation stored procedure runs. SQL Server
must be able to check the permissions for this user at the time that the queue activates the stored procedure. For
Windows domain user, the SQL Server must be connected to the domain and able to validate the permissions of
the specified user when the procedure is activated or activation fails. For a SQL Server user, the server can always
check permissions.
SELF
Specifies that the stored procedure executes as the current user. (The database principal executing this ALTER
QUEUE statement.)
'user_name'
Is the name of the user that the stored procedure executes as. user_name must be a valid SQL Server user
specified as a SQL Server identifier. The current user must have IMPERSONATE permission for the user_name
specified.
OWNER
Specifies that the stored procedure executes as the owner of the queue.
DROP
Deletes all of the activation information associated with the queue.
POISON_MESSAGE_HANDLING
Specifies whether poison message handling is enabled. The default is ON.
A queue that has poison message handling set to OFF will not be disabled after five consecutive transaction
rollbacks. This allows for a custom poison message handing system to be defined by the application.
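For example, the following statement turns poison message handling off for the ExpenseQueue queue used in the
examples below:

ALTER QUEUE ExpenseQueue WITH POISON_MESSAGE_HANDLING (STATUS = OFF);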

Remarks
When a queue with a specified activation stored procedure contains messages, changing the activation status from
OFF to ON immediately activates the activation stored procedure. Altering the activation status from ON to OFF
stops the broker from activating instances of the stored procedure, but does not stop instances of the stored
procedure that are currently running.
Altering a queue to add an activation stored procedure does not change the activation status of the queue.
Changing the activation stored procedure for the queue does not affect instances of the activation stored
procedure that are currently running.
Service Broker checks the maximum number of queue readers for a queue as part of the activation process.
Therefore, altering a queue to increase the maximum number of queue readers allows Service Broker to
immediately start more instances of the activation stored procedure. Altering a queue to decrease the maximum
number of queue readers does not affect instances of the activation stored procedure currently running. However,
Service Broker does not start a new instance of the stored procedure until the number of instances for the
activation stored procedure falls below the configured maximum number.
When a queue is unavailable, Service Broker holds messages for services that use the queue in the transmission
queue for the database. The sys.transmission_queue catalog view provides a view of the transmission queue.
If a RECEIVE statement or a GET CONVERSATION GROUP statement specifies an unavailable queue, that
statement fails with a Transact-SQL error.

Permissions
Permission for altering a queue defaults to the owner of the queue, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.

Examples
A. Making a queue unavailable
The following example makes the ExpenseQueue queue unavailable to receive messages.

ALTER QUEUE ExpenseQueue WITH STATUS = OFF ;

B. Changing the activation stored procedure


The following example changes the stored procedure that the queue starts. The stored procedure executes as the
user who ran the ALTER QUEUE statement.

ALTER QUEUE ExpenseQueue
WITH ACTIVATION (
PROCEDURE_NAME = new_stored_proc,
EXECUTE AS SELF) ;

C. Changing the number of queue readers


The following example sets to 7 the maximum number of stored procedure instances that Service Broker starts
for this queue.

ALTER QUEUE ExpenseQueue WITH ACTIVATION (MAX_QUEUE_READERS = 7) ;

D. Changing the activation stored procedure and the EXECUTE AS account


The following example changes the stored procedure that Service Broker starts. The stored procedure executes as
the user SecurityAccount .

ALTER QUEUE ExpenseQueue
WITH ACTIVATION (
PROCEDURE_NAME = AdventureWorks2012.dbo.new_stored_proc ,
EXECUTE AS 'SecurityAccount') ;

E. Setting the queue to retain messages


The following example sets the queue to retain messages. The queue retains all messages sent to or from services
that use this queue until the conversation that contains the message ends.

ALTER QUEUE ExpenseQueue WITH RETENTION = ON ;

F. Removing activation from a queue


The following example removes all activation information from the queue.

ALTER QUEUE ExpenseQueue WITH ACTIVATION (DROP) ;

G. Rebuilding queue indexes


Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
The following example rebuilds the queue indexes.

ALTER QUEUE ExpenseQueue REBUILD WITH (MAXDOP = 2)

H. Reorganizing queue indexes


Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
The following example reorganizes the queue indexes.

ALTER QUEUE ExpenseQueue REORGANIZE

I. Moving the queue internal table to another filegroup


Applies to: SQL Server 2016 (13.x) through SQL Server 2017.

ALTER QUEUE ExpenseQueue MOVE TO [NewFilegroup]

See Also
CREATE QUEUE (Transact-SQL)
DROP QUEUE (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.dm_db_index_physical_stats (Transact-SQL)
ALTER REMOTE SERVICE BINDING (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the user associated with a remote service binding, or changes the anonymous authentication setting for
the binding.
Transact-SQL Syntax Conventions

Syntax
ALTER REMOTE SERVICE BINDING binding_name
WITH [ USER = <user_name> ] [ , ANONYMOUS = { ON | OFF } ]
[ ; ]

Arguments
binding_name
The name of the remote service binding to change. Server, database, and schema names cannot be specified.
WITH USER = <user_name>
Specifies the database user that holds the certificate associated with the remote service for this binding. The public
key from this certificate is used for encryption and authentication of messages exchanged with the remote service.
ANONYMOUS
Specifies whether anonymous authentication is used when communicating with the remote service. If
ANONYMOUS = ON, anonymous authentication is used and the credentials of the local user are not transferred
to the remote service. If ANONYMOUS = OFF, user credentials are transferred. If this clause is not specified, the
default is OFF.
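For example, anonymous authentication can be enabled on an existing binding (a sketch using the APBinding
binding from the example below):

ALTER REMOTE SERVICE BINDING APBinding
WITH ANONYMOUS = ON;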

Remarks
The public key in the certificate associated with user_name is used to authenticate messages sent to the remote
service and to encrypt a session key that is then used to encrypt the conversation. The certificate for user_name
must correspond to the certificate for a login in the database that hosts the remote service.

Permissions
Permission for altering a remote service binding defaults to the owner of the remote service binding, members of
the db_owner fixed database role, and members of the sysadmin fixed server role.
The user that executes the ALTER REMOTE SERVICE BINDING statement must have impersonate permission for
the user specified in the statement.
To alter the AUTHORIZATION for a remote service binding, use the ALTER AUTHORIZATION statement.

Examples
The following example changes the remote service binding APBinding to encrypt messages by using the
certificates from the account SecurityAccount .

ALTER REMOTE SERVICE BINDING APBinding
WITH USER = SecurityAccount;

See Also
CREATE REMOTE SERVICE BINDING (Transact-SQL)
DROP REMOTE SERVICE BINDING (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER RESOURCE GOVERNOR (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This statement is used to perform the following Resource Governor actions in SQL Server:
Apply the configuration changes specified when the CREATE|ALTER|DROP WORKLOAD GROUP or
CREATE|ALTER|DROP RESOURCE POOL or CREATE|ALTER|DROP EXTERNAL RESOURCE POOL
statements are issued.
Enable or disable Resource Governor.
Configure classification for incoming requests.
Reset workload group and resource pool statistics.
Set the maximum I/O operations per disk volume.
Transact-SQL Syntax Conventions

Syntax
ALTER RESOURCE GOVERNOR
{ DISABLE | RECONFIGURE }
| WITH ( CLASSIFIER_FUNCTION = { schema_name.function_name | NULL } )
| RESET STATISTICS
| WITH ( MAX_OUTSTANDING_IO_PER_VOLUME = value )
[ ; ]

Arguments
DISABLE
Disables Resource Governor. Disabling Resource Governor has the following results:
The classifier function is not executed.
All new connections are automatically classified into the default group.
System-initiated requests are classified into the internal workload group.
All existing workload group and resource pool settings are reset to their default values. In this case, no
events are fired when limits are reached.
Normal system monitoring is not affected.
Configuration changes can be made, but the changes do not take effect until Resource Governor is
enabled.
Upon restarting SQL Server, the Resource Governor will not load its configuration, but instead will have
only the default and internal groups and pools.
RECONFIGURE
When the Resource Governor is not enabled, RECONFIGURE enables the Resource Governor. Enabling
Resource Governor has the following results:
The classifier function is executed for new connections so that their workload can be assigned to workload
groups.
The resource limits that are specified in the Resource Governor configuration are honored and enforced.
Requests that existed before enabling Resource Governor are affected by any configuration changes that
were made when Resource Governor was disabled.
When Resource Governor is running, RECONFIGURE applies any configuration changes requested when
the CREATE|ALTER|DROP WORKLOAD GROUP or CREATE|ALTER|DROP RESOURCE POOL or
CREATE|ALTER|DROP EXTERNAL RESOURCE POOL statements are executed.

IMPORTANT
ALTER RESOURCE GOVERNOR RECONFIGURE must be issued in order for any configuration changes to take effect.

CLASSIFIER_FUNCTION = { schema_name.function_name | NULL }


Registers the classification function specified by schema_name.function_name. This function classifies every new
session and assigns the session requests and queries to a workload group. When NULL is used, new sessions are
automatically assigned to the default workload group.
RESET STATISTICS
Resets statistics on all workload groups and resource pools. For more information, see
sys.dm_resource_governor_workload_groups (Transact-SQL ) and sys.dm_resource_governor_resource_pools
(Transact-SQL ).
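For example, a quick sketch of viewing the cumulative counters that RESET STATISTICS clears:

SELECT name, total_request_count, total_cpu_usage_ms
FROM sys.dm_resource_governor_workload_groups;
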
MAX_OUTSTANDING_IO_PER_VOLUME = value
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Sets the maximum queued I/O operations per disk volume. These I/O operations can be reads or writes of any
size. The maximum value for MAX_OUTSTANDING_IO_PER_VOLUME is 100. It is not a percent. This setting is
designed to tune IO resource governance to the IO characteristics of a disk volume. We recommend that you
experiment with different values and consider using a calibration tool such as IOMeter, DiskSpd, or SQLIO
(deprecated) to identify the max value for your storage subsystem. This setting provides a system-level safety
check that allows SQL Server to meet the minimum IOPS for resource pools even if other pools have the
MAX_IOPS_PER_VOLUME set to unlimited. For more information about MAX_IOPS_PER_VOLUME, see
CREATE RESOURCE POOL.

Remarks
ALTER RESOURCE GOVERNOR DISABLE, ALTER RESOURCE GOVERNOR RECONFIGURE, and ALTER
RESOURCE GOVERNOR RESET STATISTICS cannot be used inside a user transaction.
The RECONFIGURE parameter is part of the Resource Governor syntax and should not be confused with
RECONFIGURE, which is a separate DDL statement.
We recommend being familiar with Resource Governor states before you execute DDL statements. For more
information, see Resource Governor.

Permissions
Requires CONTROL SERVER permission.

Examples
A. Starting the Resource Governor
When SQL Server is first installed, Resource Governor is disabled. The following example starts Resource
Governor. After the statement executes, Resource Governor is running and can use the predefined workload
groups and resource pools.

ALTER RESOURCE GOVERNOR RECONFIGURE;

B. Assigning new sessions to the default group


The following example assigns all new sessions to the default workload group by removing any existing classifier
function from the Resource Governor configuration. When no function is designated as a classifier function, all
new sessions are assigned to the default workload group. This change applies to new sessions only. Existing
sessions are not affected.

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;

C. Creating and registering a classifier function


The following example creates a classifier function named dbo.rgclassifier_v1 . The function classifies every new
session based on either the user name or application name and assigns the session requests and queries to a
specific workload group. Sessions that do not map to the specified user or application names are assigned to the
default workload group. The classifier function is then registered and the configuration change is applied.
-- Store the classifier function in the master database.
USE master;
GO
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
CREATE FUNCTION dbo.rgclassifier_v1() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
-- Declare the variable to hold the value returned in sysname.
DECLARE @grp_name AS sysname
-- If the user login is 'sa', map the connection to the groupAdmin
-- workload group.
IF (SUSER_NAME() = 'sa')
SET @grp_name = 'groupAdmin'
-- Use application information to map the connection to the groupAdhoc
-- workload group.
ELSE IF (APP_NAME() LIKE '%MANAGEMENT STUDIO%')
OR (APP_NAME() LIKE '%QUERY ANALYZER%')
SET @grp_name = 'groupAdhoc'
-- If the application is for reporting, map the connection to
-- the groupReports workload group.
ELSE IF (APP_NAME() LIKE '%REPORT SERVER%')
SET @grp_name = 'groupReports'
-- If the connection does not map to any of the previous groups,
-- put the connection into the default workload group.
ELSE
SET @grp_name = 'default'
RETURN @grp_name
END;
GO
-- Register the classifier user-defined function and update
-- the in-memory configuration.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.rgclassifier_v1);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

D. Resetting Statistics
The following example resets all workload group and resource pool statistics.

ALTER RESOURCE GOVERNOR RESET STATISTICS;

E. Setting the MAX_OUTSTANDING_IO_PER_VOLUME option


The following example sets the MAX_OUTSTANDING_IO_PER_VOLUME option to 20.

ALTER RESOURCE GOVERNOR
WITH (MAX_OUTSTANDING_IO_PER_VOLUME = 20);

See Also
CREATE RESOURCE POOL (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL )
CREATE EXTERNAL RESOURCE POOL (Transact-SQL )
DROP EXTERNAL RESOURCE POOL (Transact-SQL )
ALTER EXTERNAL RESOURCE POOL (Transact-SQL )
CREATE WORKLOAD GROUP (Transact-SQL )
ALTER WORKLOAD GROUP (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL )
Resource Governor
sys.dm_resource_governor_workload_groups (Transact-SQL )
sys.dm_resource_governor_resource_pools (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes an existing Resource Governor resource pool configuration in SQL Server.
Transact-SQL Syntax Conventions

Syntax
ALTER RESOURCE POOL { pool_name | "default" }
[WITH
( [ MIN_CPU_PERCENT = value ]
[ [ , ] MAX_CPU_PERCENT = value ]
[ [ , ] CAP_CPU_PERCENT = value ]
[ [ , ] AFFINITY {
SCHEDULER = AUTO
| ( <scheduler_range_spec> )
| NUMANODE = ( <NUMA_node_range_spec> )
}]
[ [ , ] MIN_MEMORY_PERCENT = value ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MIN_IOPS_PER_VOLUME = value ]
[ [ , ] MAX_IOPS_PER_VOLUME = value ]
)]
[;]

<scheduler_range_spec> ::=
{SCHED_ID | SCHED_ID TO SCHED_ID}[,…n]

<NUMA_node_range_spec> ::=
{NUMA_node_ID | NUMA_node_ID TO NUMA_node_ID}[,…n]

Arguments
{ pool_name | "default" }
Is the name of an existing user-defined resource pool or the default resource pool that is created when SQL
Server is installed.
"default" must be enclosed by quotation marks ("") or brackets ([]) when used with ALTER RESOURCE POOL to
avoid conflict with DEFAULT, which is a system reserved word. For more information, see Database Identifiers.

NOTE
Predefined workload groups and resource pools all use lowercase names, such as "default". This should be taken into
account for servers that use case-sensitive collation. Servers with case-insensitive collation, such as
SQL_Latin1_General_CP1_CI_AS, will treat "default" and "Default" as the same.

MIN_CPU_PERCENT =value
Specifies the guaranteed average CPU bandwidth for all requests in the resource pool when there is CPU
contention. value is an integer with a default setting of 0. The allowed range for value is from 0 through 100.
MAX_CPU_PERCENT =value
Specifies the maximum average CPU bandwidth that all requests in the resource pool will receive when there is
CPU contention. value is an integer with a default setting of 100. The allowed range for value is from 1 through
100.
CAP_CPU_PERCENT =value
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the target maximum CPU capacity for requests in the resource pool. value is an integer with a default
setting of 100. The allowed range for value is from 1 through 100.

NOTE
Due to the statistical nature of CPU governance, you may notice occasional spikes exceeding the value specified in
CAP_CPU_PERCENT.

AFFINITY {SCHEDULER = AUTO | (Scheduler_range_spec) | NUMANODE = (NUMA_node_range_spec)}


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Attach the resource pool to specific schedulers. The default value is AUTO.
AFFINITY SCHEDULER = (Scheduler_range_spec) maps the resource pool to the SQL Server schedulers
identified by the given IDs. These IDs map to the values in the scheduler_id column in sys.dm_os_schedulers
(Transact-SQL ).
When you use AFFINITY NUMANODE = (NUMA_node_range_spec), the resource pool is affinitized to the SQL
Server schedulers that map to the physical CPUs that correspond to the given NUMA node or range of nodes.
You can use the following Transact-SQL query to discover the mapping between the physical NUMA
configuration and the SQL Server scheduler IDs.

SELECT osn.memory_node_id AS [numa_node_id], sc.cpu_id, sc.scheduler_id
FROM sys.dm_os_nodes AS osn
INNER JOIN sys.dm_os_schedulers AS sc
ON osn.node_id = sc.parent_node_id
AND sc.scheduler_id < 1048576;

MIN_MEMORY_PERCENT =value
Specifies the minimum amount of memory reserved for this resource pool that cannot be shared with other
resource pools. value is an integer with a default setting of 0. The allowed range for value is from 0 through 100.
MAX_MEMORY_PERCENT =value
Specifies the total server memory that can be used by requests in this resource pool. value is an integer with a
default setting of 100. The allowed range for value is from 1 through 100.
MIN_IOPS_PER_VOLUME =value
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Specifies the minimum I/O operations per second (IOPS ) per disk volume to reserve for the resource pool. The
allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to indicate no minimum threshold
for the pool.
MAX_IOPS_PER_VOLUME =value
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Specifies the maximum I/O operations per second (IOPS ) per disk volume to allow for the resource pool. The
allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to set an unlimited threshold for the
pool. The default is 0.
If the MAX_IOPS_PER_VOLUME for a pool is set to 0, the pool is not governed at all and can take all the IOPS in
the system even if other pools have MIN_IOPS_PER_VOLUME set. For this case, we recommend that you set the
MAX_IOPS_PER_VOLUME value for this pool to a high number (for example, the maximum value 2^31-1) if you
want this pool to be governed for IO.
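For illustration, a minimal sketch of that recommendation (PoolAdhoc is a placeholder pool name):

ALTER RESOURCE POOL PoolAdhoc
WITH (MAX_IOPS_PER_VOLUME = 2147483647);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO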

Remarks
MAX_CPU_PERCENT and MAX_MEMORY_PERCENT must be greater than or equal to MIN_CPU_PERCENT
and MIN_MEMORY_PERCENT, respectively.
A resource pool can use CPU capacity above the value of MAX_CPU_PERCENT if it is available. Although
there may be periodic spikes above CAP_CPU_PERCENT, workloads should not exceed CAP_CPU_PERCENT for
extended periods of time, even when additional CPU capacity is available.
The total CPU percentage for each affinitized component (scheduler(s) or NUMA node(s)) should not exceed
100%.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states. For
more information, see Resource Governor.
When changing a plan-affecting setting, the new setting only takes effect in previously cached plans after
executing DBCC FREEPROCCACHE (pool_name), where pool_name is the name of a Resource Governor
resource pool, as sketched below.
If you are changing AFFINITY from multiple schedulers to a single scheduler, executing DBCC
FREEPROCCACHE is not required because parallel plans can run in serial mode. However, it may not be as
efficient as a plan compiled as a serial plan.
If you are changing AFFINITY from a single scheduler to multiple schedulers, executing DBCC
FREEPROCCACHE is not required. However, serial plans cannot run in parallel, so clearing the respective
cache will allow new plans to potentially be compiled using parallelism.
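A minimal sketch of clearing the cached plans for a single pool (Pool25 is the pool name used in a later example):

DBCC FREEPROCCACHE ('Pool25');
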
Caution

Clearing cached plans from a resource pool that is associated with more than one workload group will affect all
workload groups with the user-defined resource pool identified by pool_name.

Permissions
Requires CONTROL SERVER permission.

Examples
The following example keeps all the default resource pool settings on the default pool except for
MAX_CPU_PERCENT, which is changed to 25.

ALTER RESOURCE POOL "default"
WITH (MAX_CPU_PERCENT = 25);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

In the following example, the CAP_CPU_PERCENT sets the hard cap to 80% and AFFINITY SCHEDULER is set to an
individual value of 8 and a range of 12 to 16.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
ALTER RESOURCE POOL Pool25
WITH(
MIN_CPU_PERCENT = 5,
MAX_CPU_PERCENT = 10,
CAP_CPU_PERCENT = 80,
AFFINITY SCHEDULER = (8, 12 TO 16),
MIN_MEMORY_PERCENT = 5,
MAX_MEMORY_PERCENT = 15
);

GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

See Also
Resource Governor
CREATE RESOURCE POOL (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL )
CREATE WORKLOAD GROUP (Transact-SQL )
ALTER WORKLOAD GROUP (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
ALTER ROLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds or removes members to or from a database role, or changes the name of a user-defined database role.

NOTE
To add or drop role members in SQL Data Warehouse or Parallel Data Warehouse, use sp_addrolemember
(Transact-SQL) and sp_droprolemember (Transact-SQL).

Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server (starting with 2012) and Azure SQL Database

ALTER ROLE role_name
{
ADD MEMBER database_principal
| DROP MEMBER database_principal
| WITH NAME = new_name
}
[;]

-- Syntax for SQL Server 2008, Azure SQL Data Warehouse and Parallel Data Warehouse

-- Change the name of a user-defined database role

ALTER ROLE role_name
WITH NAME = new_name
[;]

Arguments
role_name
APPLIES TO: SQL Server (starting with 2008), Azure SQL Database
Specifies the database role to change.
ADD MEMBER database_principal
APPLIES TO: SQL Server (starting with 2012), Azure SQL Database
Specifies to add the database principal to the membership of a database role.
database_principal is a database user or a user-defined database role.
database_principal cannot be a fixed database role or a server principal.
DROP MEMBER database_principal
APPLIES TO: SQL Server (starting with 2012), Azure SQL Database
Specifies to remove a database principal from the membership of a database role.
database_principal is a database user or a user-defined database role.
database_principal cannot be a fixed database role or a server principal.
WITH NAME = new_name
APPLIES TO: SQL Server (starting with 2008), Azure SQL Database
Specifies to change the name of a user-defined database role. The new name must not already exist in the
database.
Changing the name of a database role does not change ID number, owner, or permissions of the role.

Permissions
To run this command you need one or more of these permissions or memberships:
ALTER permission on the role
ALTER ANY ROLE permission on the database
Membership in the db_securityadmin fixed database role
Additionally, to change the membership in a fixed database role you need:
Membership in the db_owner fixed database role

Limitations and restrictions


You cannot change the name of a fixed database role.

Metadata

These system views contain information about database roles and database principals.
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )

Examples
A. Change the name of a database role
APPLIES TO: SQL Server (starting with 2008), SQL Database
The following example changes the name of role buyers to purchasing. This example can be executed in the
AdventureWorks sample database.

ALTER ROLE buyers WITH NAME = purchasing;

B. Add or remove role members


APPLIES TO: SQL Server (starting with 2012), SQL Database
This example creates a database role named Sales . It adds a database user named Barry to the membership, and
then shows how to remove the member Barry. This example can be executed in the AdventureWorks sample
database.
CREATE ROLE Sales;
ALTER ROLE Sales ADD MEMBER Barry;
ALTER ROLE Sales DROP MEMBER Barry;

See Also
CREATE ROLE (Transact-SQL )
Principals (Database Engine)
DROP ROLE (Transact-SQL )
sp_addrolemember (Transact-SQL )
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )
ALTER ROUTE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Modifies route information for an existing route in SQL Server.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
ALTER ROUTE route_name
WITH
[ SERVICE_NAME = 'service_name' [ , ] ]
[ BROKER_INSTANCE = 'broker_instance' [ , ] ]
[ LIFETIME = route_lifetime [ , ] ]
[ ADDRESS = 'next_hop_address' [ , ] ]
[ MIRROR_ADDRESS = 'next_hop_mirror_address' ]
[ ; ]

Arguments
route_name
Is the name of the route to change. Server, database, and schema names cannot be specified.
WITH
Introduces the clauses that define the route being altered.
SERVICE_NAME ='service_name'
Specifies the name of the remote service that this route points to. The service_name must exactly match the name
the remote service uses. Service Broker uses a byte-by-byte comparison to match the service_name. In other
words, the comparison is case sensitive and does not consider the current collation. A route with a service name of
'SQL/ServiceBroker/BrokerConfiguration' is a route to a Broker Configuration Notice service. A route to this
service might not specify a broker instance.
If the SERVICE_NAME clause is omitted, the service name for the route is unchanged.
BROKER_INSTANCE ='broker_instance'
Specifies the database that hosts the target service. The broker_instance parameter must be the broker instance
identifier for the remote database, which can be obtained by running the following query in the selected database:

SELECT service_broker_guid
FROM sys.databases
WHERE database_id = DB_ID();
When the BROKER_INSTANCE clause is omitted, the broker instance for the route is unchanged.

NOTE
This option is not available in a contained database.

LIFETIME =route_lifetime
Specifies the time, in seconds, that SQL Server retains the route in the routing table. At the end of the lifetime, the
route expires, and SQL Server no longer considers the route when choosing a route for a new conversation. If this
clause is omitted, the lifetime of the route is unchanged.
ADDRESS ='next_hop_address'
For SQL Database Managed Instance, ADDRESS must be local.
Specifies the network address for this route. The next_hop_address specifies a TCP/IP address in the following
format:
TCP:// { dns_name | netbios_name |ip_address } : port_number
The specified port_number must match the port number for the Service Broker endpoint of an instance of SQL
Server at the specified computer. This can be obtained by running the following query in the selected database:

SELECT tcpe.port
FROM sys.tcp_endpoints AS tcpe
INNER JOIN sys.service_broker_endpoints AS ssbe
ON ssbe.endpoint_id = tcpe.endpoint_id
WHERE ssbe.name = N'MyServiceBrokerEndpoint';

When a route specifies 'LOCAL' for the next_hop_address, the message is delivered to a service within the current
instance of SQL Server.
When a route specifies 'TRANSPORT' for the next_hop_address, the network address is determined based on the
network address in the name of the service. A route that specifies 'TRANSPORT' can specify a service name or
broker instance.
When the next_hop_address is the principal server for a database mirror, you must also specify the
MIRROR_ADDRESS for the mirror server. Otherwise, this route does not automatically failover to the mirror
server.

NOTE
This option is not available in a contained database.

MIRROR_ADDRESS ='next_hop_mirror_address'
Specifies the network address for the mirror server of a mirrored pair whose principal server is at the
next_hop_address. The next_hop_mirror_address specifies a TCP/IP address in the following format:
TCP://{ dns_name | netbios_name | ip_address } : port_number
The specified port_number must match the port number for the Service Broker endpoint of an instance of SQL
Server at the specified computer. This can be obtained by running the following query in the selected database:
SELECT tcpe.port
FROM sys.tcp_endpoints AS tcpe
INNER JOIN sys.service_broker_endpoints AS ssbe
ON ssbe.endpoint_id = tcpe.endpoint_id
WHERE ssbe.name = N'MyServiceBrokerEndpoint';

When the MIRROR_ADDRESS is specified, the route must specify the SERVICE_NAME clause and the
BROKER_INSTANCE clause. A route that specifies 'LOCAL' or 'TRANSPORT' for the next_hop_address might
not specify a mirror address.

NOTE
This option is not available in a contained database.
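
For illustration, a hedged sketch of a route to a mirrored pair (the addresses are placeholders; as noted above,
SERVICE_NAME and BROKER_INSTANCE are required whenever MIRROR_ADDRESS is specified):

ALTER ROUTE ExpenseRoute
WITH
SERVICE_NAME = '//Adventure-Works.com/Expenses',
BROKER_INSTANCE = 'D8D4D268-00A3-4C62-8F91-634B89B1E317',
ADDRESS = 'TCP://principal.Adventure-Works.com:4022',
MIRROR_ADDRESS = 'TCP://mirror.Adventure-Works.com:4022';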

Remarks
The routing table that stores the routes is a meta-data table that can be read through the sys.routes catalog view.
The routing table can only be updated through the CREATE ROUTE, ALTER ROUTE, and DROP ROUTE
statements.
Clauses that are not specified in the ALTER ROUTE command remain unchanged. Therefore, you cannot ALTER a
route to specify that the route does not time out, that the route matches any service name, or that the route
matches any broker instance. To change these characteristics of a route, you must drop the existing route and
create a new route with the new information.
When a route specifies 'TRANSPORT' for the next_hop_address, the network address is determined based on the
name of the service. SQL Server can successfully process service names that begin with a network address in a
format that is valid for a next_hop_address. Services with names that contain valid network addresses will route to
the network address in the service name.
The routing table can contain any number of routes that specify the same service, network address, and/or broker
instance identifier. In this case, Service Broker chooses a route using a procedure designed to find the most exact
match between the information specified in the conversation and the information in the routing table.
To alter the AUTHORIZATION for a service, use the ALTER AUTHORIZATION statement.

Permissions
Permission for altering a route defaults to the owner of the route, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.

Examples
A. Changing the service for a route
The following example modifies the ExpenseRoute route to point to the remote service
//Adventure-Works.com/Expenses.

ALTER ROUTE ExpenseRoute
WITH
SERVICE_NAME = '//Adventure-Works.com/Expenses';

B. Changing the target database for a route


The following example changes the target database for the ExpenseRoute route to the database identified by the
unique identifier D8D4D268-00A3-4C62-8F91-634B89B1E317.

ALTER ROUTE ExpenseRoute
WITH
BROKER_INSTANCE = 'D8D4D268-00A3-4C62-8F91-634B89B1E317';

C. Changing the address for a route


The following example changes the network address for the ExpenseRoute route to TCP port 1234 on the host
with the IP address 10.2.19.72.

ALTER ROUTE ExpenseRoute
WITH
ADDRESS = 'TCP://10.2.19.72:1234';

D. Changing the database and address for a route


The following example changes the network address for the ExpenseRoute route to TCP port 1234 on the host
with the DNS name www.Adventure-Works.com. It also changes the target database to the database identified by the
unique identifier D8D4D268-00A3-4C62-8F91-634B89B1E317 .

ALTER ROUTE ExpenseRoute
WITH
BROKER_INSTANCE = 'D8D4D268-00A3-4C62-8F91-634B89B1E317',
ADDRESS = 'TCP://www.Adventure-Works.com:1234';

See Also
CREATE ROUTE (Transact-SQL )
DROP ROUTE (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER SCHEMA (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Transfers a securable between schemas.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

ALTER SCHEMA schema_name
TRANSFER [ <entity_type> :: ] securable_name
[;]

<entity_type> ::=
{
Object | Type | XML Schema Collection
}

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

ALTER SCHEMA schema_name
TRANSFER [ OBJECT :: ] securable_name
[;]

Arguments
schema_name
Is the name of a schema in the current database, into which the securable will be moved. Cannot be SYS or
INFORMATION_SCHEMA.
<entity_type>
Is the class of the entity being transferred. Object is the default.
securable_name
Is the one-part or two-part name of a schema-scoped securable to be moved into the schema.

Remarks
Users and schemas are completely separate.
ALTER SCHEMA can only be used to move securables between schemas in the same database. To change or drop
a securable within a schema, use the ALTER or DROP statement specific to that securable.
If a one-part name is used for securable_name, the name-resolution rules currently in effect will be used to locate
the securable.
All permissions associated with the securable will be dropped when the securable is moved to the new schema. If
the owner of the securable has been explicitly set, the owner will remain unchanged. If the owner of the securable
has been set to SCHEMA OWNER, the owner will remain SCHEMA OWNER ; however, after the move SCHEMA
OWNER will resolve to the owner of the new schema. The principal_id of the new owner will be NULL.
Moving a stored procedure, function, view, or trigger will not change the schema name, if present, of the
corresponding object either in the definition column of the sys.sql_modules catalog view or obtained using the
OBJECT_DEFINITION built-in function. Therefore, we recommend that ALTER SCHEMA not be used to move
these object types. Instead, drop and re-create the object in its new schema.
Moving an object such as a table or synonym will not automatically update references to that object. You must
modify any objects that reference the transferred object manually. For example, if you move a table and that table
is referenced in a trigger, you must modify the trigger to reflect the new schema name. Use
sys.sql_expression_dependencies to list dependencies on the object before moving it.
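For example, a minimal sketch that lists the objects referencing a table before it is transferred (the table name is
illustrative):

SELECT OBJECT_SCHEMA_NAME(referencing_id) AS referencing_schema,
OBJECT_NAME(referencing_id) AS referencing_object
FROM sys.sql_expression_dependencies
WHERE referenced_id = OBJECT_ID(N'Person.Address');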
To change the schema of a table by using SQL Server Management Studio, in Object Explorer, right-click on the
table and then click Design. Press F4 to open the Properties window. In the Schema box, select a new schema.
Caution

Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER AUTHORIZATION.
In such databases you must instead use the new catalog views. The new catalog views take into account the
separation of principals and schemas that was introduced in SQL Server 2005. For more information about
catalog views, see Catalog Views (Transact-SQL ).

Permissions
To transfer a securable from another schema, the current user must have CONTROL permission on the securable
(not schema) and ALTER permission on the target schema.
If the securable has an EXECUTE AS OWNER specification on it and the owner is set to SCHEMA OWNER, the
user must also have IMPERSONATE permission on the owner of the target schema.
All permissions associated with the securable that is being transferred are dropped when it is moved.

Examples
A. Transferring ownership of a table
The following example modifies the schema HumanResources by transferring the table Address from schema
Person into the schema.

USE AdventureWorks2012;
GO
ALTER SCHEMA HumanResources TRANSFER Person.Address;
GO

B. Transferring ownership of a type


The following example creates a type in the Production schema, and then transfers the type to the Person
schema.
USE AdventureWorks2012;
GO

CREATE TYPE Production.TestType FROM [varchar](10) NOT NULL;
GO

-- Check the type owner.
SELECT sys.types.name, sys.types.schema_id, sys.schemas.name
FROM sys.types JOIN sys.schemas
ON sys.types.schema_id = sys.schemas.schema_id
WHERE sys.types.name = 'TestType';
GO

-- Change the type to the Person schema.
ALTER SCHEMA Person TRANSFER type::Production.TestType;
GO

-- Check the type owner.
SELECT sys.types.name, sys.types.schema_id, sys.schemas.name
FROM sys.types JOIN sys.schemas
ON sys.types.schema_id = sys.schemas.schema_id
WHERE sys.types.name = 'TestType';
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


C. Transferring ownership of a table
The following example creates a table Region in the dbo schema, creates a Sales schema, and then moves the
Region table from the dbo schema to the Sales schema.

CREATE TABLE dbo.Region
(Region_id int NOT NULL,
Region_Name char(5) NOT NULL)
WITH (DISTRIBUTION = REPLICATE);
GO

CREATE SCHEMA Sales;
GO

ALTER SCHEMA Sales TRANSFER OBJECT::dbo.Region;
GO

See Also
CREATE SCHEMA (Transact-SQL )
DROP SCHEMA (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER SEARCH PROPERTY LIST (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds a specified search property to, or drops it from, the specified search property list.

Syntax
ALTER SEARCH PROPERTY LIST list_name
{
ADD 'property_name'
WITH
(
PROPERTY_SET_GUID = 'property_set_guid'
, PROPERTY_INT_ID = property_int_id
[ , PROPERTY_DESCRIPTION = 'property_description' ]
)
| DROP 'property_name'
}
;

Arguments
list_name
Is the name of the property list being altered. list_name is an identifier.
To view the names of the existing property lists, use the sys.registered_search_property_lists catalog view, as
follows:

SELECT name FROM sys.registered_search_property_lists;

ADD
Adds a specified search property to the property list specified by list_name. The property is registered for the
search property list. Before newly added properties can be used for property searching, the associated full-text
index or indexes must be repopulated. For more information, see ALTER FULLTEXT INDEX (Transact-SQL ).

NOTE
To add a given search property to a search property list, you must provide its property-set GUID (property_set_guid) and
property int ID (property_int_id). For more information, see "Obtaining Property Set GUIDS and Identifiers," later in this
topic.

property_name
Specifies the name to be used to identify the property in full-text queries. property_name must uniquely identify
the property within the property set. A property name can contain internal spaces. The maximum length of
property_name is 256 characters. This name can be a user-friendly name, such as Author or Home Address, or it
can be the Windows canonical name of the property, such as System.Author or System.Contact.HomeAddress.
Developers will need to use the value you specify for property_name to identify the property in the CONTAINS
predicate. Therefore, when adding a property it is important to specify a value that meaningfully represents the
property defined by the specified property set GUID (property_set_guid) and property identifier (property_int_id).
For more information about property names, see "Remarks," later in this topic.
To view the names of properties that currently exist in a search property list of the current database, use the
sys.registered_search_properties catalog view, as follows:

SELECT property_name FROM sys.registered_search_properties;

PROPERTY_SET_GUID ='property_set_guid'
Specifies the identifier of the property set to which the property belongs. This is a globally unique identifier
(GUID ). For information about obtaining this value, see "Remarks," later in this topic.
To view the property set GUID of any property that exists in a search property list of the current database, use the
sys.registered_search_properties catalog view, as follows:

SELECT property_set_guid FROM sys.registered_search_properties;

PROPERTY_INT_ID =property_int_id
Specifies the integer that identifies the property within its property set. For information about obtaining this value,
see "Remarks."
To view the integer identifier of any property that exists in a search property list of the current database, use the
sys.registered_search_properties catalog view, as follows:

SELECT property_int_id FROM sys.registered_search_properties;

NOTE
A given combination of property_set_guid and property_int_id must be unique in a search property list. If you try to add an
existing combination, the ALTER SEARCH PROPERTY LIST operation fails and issues an error. This means that you can define
only one name for a given property.

PROPERTY_DESCRIPTION ='property_description'
Specifies a user-defined description of the property. property_description is a string of up to 512 characters. This
option is optional.
DROP
Drops the specified property from the property list specified by list_name. Dropping a property unregisters it, so it
is no longer searchable.

Remarks
Each full-text index can have only one search property list.
To enable querying on a given search property, you must add it to the search property list of the full-text index and
then repopulate the index.
When specifying a property you can arrange the PROPERTY_SET_GUID, PROPERTY_INT_ID, and
PROPERTY_DESCRIPTION clauses in any order, as a comma-separated list within parentheses, for example:
ALTER SEARCH PROPERTY LIST CVitaProperties
ADD 'System.Author'
WITH (
PROPERTY_DESCRIPTION = 'Author or authors of a given document.',
PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9',
PROPERTY_INT_ID = 4
);

NOTE
This example uses the property name, System.Author , which is similar to the concept of canonical property names
introduced in Windows Vista (Windows canonical name).

Obtaining Property Values


Full-text search maps a search property to a full-text index by using its property set GUID and property integer ID.
For information about how to obtain these for properties that have been defined by Microsoft, see Find Property
Set GUIDs and Property Integer IDs for Search Properties. For information about properties defined by an
independent software vendor (ISV ), see the documentation of that vendor.

Making Added Properties Searchable


Adding a search property to a search property list registers the property. A newly added property can be
immediately specified in CONTAINS queries. However, property-scoped full-text queries on a newly added
property will not return documents until the associated full-text index is repopulated. For example, the following
property-scoped query on a newly added property, new_search_property, will not return any documents until the
full-text index associated with the target table (table_name) is repopulated:

SELECT column_name
FROM table_name
WHERE CONTAINS( PROPERTY( column_name, 'new_search_property' ),
'contains_search_condition');
GO

To start a full population, use the following ALTER FULLTEXT INDEX (Transact-SQL ) statement:

USE database_name;
GO
ALTER FULLTEXT INDEX ON table_name START FULL POPULATION;
GO

NOTE
Repopulation is not needed after a property is dropped from a property list, because only the properties that remain in the
search property list are available for full-text querying.

Related References
To create a property list
CREATE SEARCH PROPERTY LIST (Transact-SQL )
To drop a property list
DROP SEARCH PROPERTY LIST (Transact-SQL )
To add or remove a property list on a full-text index
ALTER FULLTEXT INDEX (Transact-SQL )
To run a population on a full-text index
ALTER FULLTEXT INDEX (Transact-SQL )

Permissions
Requires CONTROL permission on the property list.

Examples
A. Adding a property
The following example adds several properties—Title, Author, and Tags—to a property list named
DocumentPropertyList.

NOTE
For an example that creates DocumentPropertyList property list, see CREATE SEARCH PROPERTY LIST (Transact-SQL).

ALTER SEARCH PROPERTY LIST DocumentPropertyList
ADD 'Title'
WITH ( PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9', PROPERTY_INT_ID = 2,
PROPERTY_DESCRIPTION = 'System.Title - Title of the item.' );

ALTER SEARCH PROPERTY LIST DocumentPropertyList
ADD 'Author'
WITH ( PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9', PROPERTY_INT_ID = 4,
PROPERTY_DESCRIPTION = 'System.Author - Author or authors of the item.' );

ALTER SEARCH PROPERTY LIST DocumentPropertyList
ADD 'Tags'
WITH ( PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9', PROPERTY_INT_ID = 5,
PROPERTY_DESCRIPTION =
'System.Keywords - Set of keywords (also known as tags) assigned to the item.' );

NOTE
You must associate a given search property list with a full-text index before using it for property-scoped queries. To do so,
use an ALTER FULLTEXT INDEX statement and specify the SET SEARCH PROPERTY LIST clause.
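
For illustration, a minimal sketch, assuming a full-text indexed table named dbo.Documents:

ALTER FULLTEXT INDEX ON dbo.Documents
SET SEARCH PROPERTY LIST DocumentPropertyList;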

B. Dropping a property
The following example drops the Comments property from the DocumentPropertyList property list.

ALTER SEARCH PROPERTY LIST DocumentPropertyList
DROP 'Comments';

See Also
CREATE SEARCH PROPERTY LIST (Transact-SQL )
DROP SEARCH PROPERTY LIST (Transact-SQL )
sys.registered_search_properties (Transact-SQL )
sys.registered_search_property_lists (Transact-SQL )
sys.dm_fts_index_keywords_by_property (Transact-SQL )
Search Document Properties with Search Property Lists
Find Property Set GUIDs and Property Integer IDs for Search Properties
ALTER SECURITY POLICY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a security policy.
Transact-SQL Syntax Conventions

Syntax
ALTER SECURITY POLICY schema_name.security_policy_name
(
{ ADD { FILTER | BLOCK } PREDICATE tvf_schema_name.security_predicate_function_name
( { column_name | arguments } [ , …n ] ) ON table_schema_name.table_name
[ <block_dml_operation> ] }
| { ALTER { FILTER | BLOCK } PREDICATE tvf_schema_name.new_security_predicate_function_name
( { column_name | arguments } [ , …n ] ) ON table_schema_name.table_name
[ <block_dml_operation> ] }
| { DROP { FILTER | BLOCK } PREDICATE ON table_schema_name.table_name }
| [ <additional_add_alter_drop_predicate_statements> [ , ...n ] ]
) [ WITH ( STATE = { ON | OFF } ) ]
[ NOT FOR REPLICATION ]
[;]

<block_dml_operation>
[ { AFTER { INSERT | UPDATE } }
| { BEFORE { UPDATE | DELETE } } ]

Arguments
security_policy_name
The name of the security policy. Security policy names must comply with the rules for identifiers and must be
unique within the database and its schema.
schema_name
Is the name of the schema to which the security policy belongs. schema_name is required because of schema
binding.
[ FILTER | BLOCK ]
The type of security predicate for the function being bound to the target table. FILTER predicates silently filter the
rows that are available to read operations. BLOCK predicates explicitly block write operations that violate the
predicate function.
tvf_schema_name.security_predicate_function_name
Is the inline table-valued function that will be used as a predicate and that will be enforced upon queries against a
target table. At most one security predicate can be defined for a particular DML operation against a particular
table. The inline table-valued function must have been created using the SCHEMABINDING option.
{ column_name | arguments }
The column name or expression used as parameters for the security predicate function. Any columns on the target
table can be used as arguments for the predicate function. Expressions that include literals, built-in functions, and
arithmetic operators can be used.
table_schema_name.table_name
Is the target table to which the security predicate will be applied. Multiple disabled security policies can target a
single table for a particular DML operation, but only one can be enabled at any given time.
<block_dml_operation>
The particular DML operation for which the block predicate will be applied. AFTER specifies that the predicate will
be evaluated on the values of the rows after the DML operation was performed (INSERT or UPDATE ). BEFORE
specifies that the predicate will be evaluated on the values of the rows before the DML operation is performed
(UPDATE or DELETE ). If no operation is specified, the predicate will apply to all operations.
You cannot ALTER the operation for which a block predicate will be applied, because the operation is used to
uniquely identify the predicate. Instead, you must drop the predicate and add a new one for the new operation.
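For illustration, a hedged sketch (the names reuse a later example and assume dbo.Sales currently has a single
block predicate):

ALTER SECURITY POLICY rls.SecPol
DROP BLOCK PREDICATE ON dbo.Sales;
GO
ALTER SECURITY POLICY rls.SecPol
ADD BLOCK PREDICATE rls.tenantAccessPredicate_v2(TenantId)
ON dbo.Sales AFTER UPDATE;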
WITH ( STATE = { ON | OFF } )
Enables or disables the security policy from enforcing its security predicates against the target tables. If not
specified, the state of the security policy is unchanged.
NOT FOR REPLICATION
Indicates that the security policy should not be executed when a replication agent modifies the target object. For
more information, see Control the Behavior of Triggers and Constraints During Synchronization (Replication
Transact-SQL Programming).

Remarks
The ALTER SECURITY POLICY statement is in a transaction's scope. If the transaction is rolled back, the statement
is also rolled back.
When using predicate functions with memory-optimized tables, security policies must include
SCHEMABINDING and use the WITH NATIVE_COMPILATION compilation hint. The SCHEMABINDING
argument cannot be changed with the ALTER statement because it applies to all predicates. To change schema
binding you must drop and recreate the security policy.
Block predicates are evaluated after the corresponding DML operation is executed. Therefore, a READ
UNCOMMITTED query can see transient values that will be rolled back.

Permissions
Requires the ALTER ANY SECURITY POLICY permission.
Additionally the following permissions are required for each predicate that is added:
SELECT and REFERENCES permissions on the function being used as a predicate.
REFERENCES permission on the target table being bound to the policy.
REFERENCES permission on every column from the target table used as arguments.

Examples
The following examples demonstrate the use of the ALTER SECURITY POLICY syntax. For an example of a
complete security policy scenario, see Row -Level Security.
A. Adding an additional predicate to a policy
The following syntax alters a security policy, adding a filter predicate on the mytable table.
ALTER SECURITY POLICY pol1
ADD FILTER PREDICATE schema_preds.SecPredicate(column1)
ON myschema.mytable;

B. Enabling an existing policy


The following example uses the ALTER syntax to enable a security policy.

ALTER SECURITY POLICY pol1 WITH ( STATE = ON );

C. Adding and dropping multiple predicates


The following syntax alters a security policy, adding filter predicates on the mytable1 and mytable3 tables, and
removing the filter predicate on the mytable2 table.

ALTER SECURITY POLICY pol1
ADD FILTER PREDICATE schema_preds.SecPredicate1(column1)
ON myschema.mytable1,
DROP FILTER PREDICATE
ON myschema.mytable2,
ADD FILTER PREDICATE schema_preds.SecPredicate2(column2, 1)
ON myschema.mytable3;

D. Changing the predicate on a table


The following syntax changes the existing filter predicate on the mytable table to be the SecPredicate2 function.

ALTER SECURITY POLICY pol1
ALTER FILTER PREDICATE schema_preds.SecPredicate2(column1)
ON myschema.mytable;

E. Changing a block predicate


The following example changes the block predicate function for an operation on a table.

ALTER SECURITY POLICY rls.SecPol
ALTER BLOCK PREDICATE rls.tenantAccessPredicate_v2(TenantId)
ON dbo.Sales AFTER INSERT;

See Also
Row -Level Security
CREATE SECURITY POLICY (Transact-SQL )
DROP SECURITY POLICY (Transact-SQL )
sys.security_policies (Transact-SQL )
sys.security_predicates (Transact-SQL )
ALTER SEQUENCE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the arguments of an existing sequence object. If the sequence was created with the CACHE option,
altering the sequence will recreate the cache.
Sequence objects are created by using the CREATE SEQUENCE statement. Sequences are integer values and can
be of any data type that returns an integer. The data type cannot be changed by using the ALTER SEQUENCE
statement. To change the data type, drop and create the sequence object.
A sequence is a user-defined schema bound object that generates a sequence of numeric values according to a
specification. New values are generated from a sequence by calling the NEXT VALUE FOR function. Use
sp_sequence_get_range to get multiple sequence numbers at once. For information and scenarios that use
CREATE SEQUENCE, sp_sequence_get_range, and the NEXT VALUE FOR function, see Sequence Numbers.
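For example, a minimal sketch of reserving a range of ten values (the sequence name matches a later example):

DECLARE @first_value sql_variant;
EXEC sys.sp_sequence_get_range
@sequence_name = N'Test.CountBy1',
@range_size = 10,
@range_first_value = @first_value OUTPUT;
SELECT @first_value AS first_value_in_range;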
Transact-SQL Syntax Conventions

Syntax
ALTER SEQUENCE [schema_name. ] sequence_name
[ RESTART [ WITH <constant> ] ]
[ INCREMENT BY <constant> ]
[ { MINVALUE <constant> } | { NO MINVALUE } ]
[ { MAXVALUE <constant> } | { NO MAXVALUE } ]
[ CYCLE | { NO CYCLE } ]
[ { CACHE [ <constant> ] } | { NO CACHE } ]
[ ; ]

Arguments
sequence_name
Specifies the unique name by which the sequence is known in the database. Type is sysname.
RESTART [ WITH <constant> ]
The next value that will be returned by the sequence object. If provided, the RESTART WITH value must be an
integer that is less than or equal to the maximum and greater than or equal to the minimum value of the sequence
object. If the WITH value is omitted, the sequence numbering restarts based on the original CREATE SEQUENCE
options.
INCREMENT BY <constant>
The value that is used to increment (or decrement if negative) the sequence object’s base value for each call to the
NEXT VALUE FOR function. If the increment is a negative value, the sequence object is descending; otherwise, it is
ascending. The increment cannot be 0.
[ MINVALUE <constant> | NO MINVALUE ]
Specifies the bounds for sequence object. If NO MINVALUE is specified, the minimum possible value of the
sequence data type is used.
[ MAXVALUE <constant> | NO MAXVALUE ]
Specifies the bounds for sequence object. If NO MAXVALUE is specified, the maximum possible value of the
sequence data type is used.
[ CYCLE | NO CYCLE ]
This property specifies whether the sequence object should restart from the minimum value (or maximum for
descending sequence objects) or throw an exception when its minimum or maximum value is exceeded.

NOTE
After cycling, the next value is the minimum or maximum value, not the START VALUE of the sequence.

[ CACHE [<constant> ] | NO CACHE ]


Increases performance for applications that use sequence objects by minimizing the number of IOs that are
required to persist generated values to the system tables.
For more information about the behavior of the cache, see CREATE SEQUENCE (Transact-SQL ).

Remarks
For information about how sequences are created and how the sequence cache is managed, see CREATE
SEQUENCE (Transact-SQL ).
The MINVALUE for ascending sequences and the MAXVALUE for descending sequences cannot be altered to a
value that does not permit the START WITH value of the sequence. To change the MINVALUE of an ascending
sequence to a number larger than the START WITH value or to change the MAXVALUE of a descending sequence
to a number smaller than the START WITH value, include the RESTART WITH argument to restart the sequence at
a desired point that falls within the minimum and maximum range.
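For illustration, a minimal sketch that raises the MINVALUE of the ascending Test.TestSeq sequence (created in
the examples below) above its original START WITH value of 125:

ALTER SEQUENCE Test.TestSeq
RESTART WITH 150
MINVALUE 150;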

Metadata

For information about sequences, query sys.sequences.
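For example:

SELECT name, start_value, increment, minimum_value, maximum_value, is_cycling, current_value
FROM sys.sequences
WHERE name = N'TestSeq';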

Security
Permissions
Requires ALTER permission on the sequence or ALTER permission on the schema. To grant ALTER permission
on the sequence, use ALTER ON OBJECT in the following format:

GRANT ALTER ON OBJECT::Test.TinySeq TO [AdventureWorks\Larry]

The ownership of a sequence object can be transferred by using the ALTER AUTHORIZATION statement.
Audit
To audit ALTER SEQUENCE, monitor the SCHEMA_OBJECT_CHANGE_GROUP.
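A minimal sketch, assuming a server audit named MyAudit already exists:

CREATE DATABASE AUDIT SPECIFICATION AuditSequenceChanges
FOR SERVER AUDIT MyAudit
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = ON);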

Examples
For examples of both creating sequences and using the NEXT VALUE FOR function to generate sequence
numbers, see Sequence Numbers.
A. Altering a sequence
The following example creates a schema named Test and a sequence named TestSeq using the int data type,
having a range from 100 to 200. The sequence starts with 125 and increments by 25 every time that a number is
generated. Because the sequence is configured to cycle, when the value exceeds the maximum value of 200, the
sequence restarts at the minimum value of 100.

CREATE SCHEMA Test;
GO

CREATE SEQUENCE Test.TestSeq
AS int
START WITH 125
INCREMENT BY 25
MINVALUE 100
MAXVALUE 200
CYCLE
CACHE 3
;
GO

The following example alters the TestSeq sequence to have a range from 50 to 200. The sequence restarts the
numbering series with 100 and increments by 50 every time that a number is generated.

ALTER SEQUENCE Test.TestSeq
RESTART WITH 100
INCREMENT BY 50
MINVALUE 50
MAXVALUE 200
NO CYCLE
NO CACHE
;
GO

Because the sequence will not cycle, the NEXT VALUE FOR function will result in an error when the sequence
exceeds 200.
B. Restarting a sequence
The following example creates a sequence named CountBy1. The sequence uses the default values.

CREATE SEQUENCE Test.CountBy1 ;

To generate a sequence value, the owner then executes the following statement:

SELECT NEXT VALUE FOR Test.CountBy1

The returned value of -9,223,372,036,854,775,808 is the lowest possible value for the bigint data type. The owner
realizes he wanted the sequence to start with 1, but did not indicate the START WITH clause when he created the
sequence. To correct this error, the owner executes the following statement.

ALTER SEQUENCE Test.CountBy1 RESTART WITH 1 ;

Then the owner executes the following statement again to generate a sequence number.

SELECT NEXT VALUE FOR Test.CountBy1;

The number is now 1, as expected.


The CountBy1 sequence was created using the default value of NO CYCLE so it will stop operating after
generating number 9,223,372,036,854,775,807. Subsequent calls to the sequence object will return error 11728.
The following statement changes the sequence object to cycle and sets a cache of 20.

ALTER SEQUENCE Test.CountBy1
CYCLE
CACHE 20;

Now when the sequence object reaches 9,223,372,036,854,775,807 it will cycle, and the next number after cycling
will be the minimum of the data type, -9,223,372,036,854,775,808.
The owner realized that the bigint data type uses 8 bytes each time it is used. The int data type, which uses 4 bytes,
is sufficient. However, the data type of a sequence object cannot be altered. To change to an int data type, the owner
must drop the sequence object and recreate the object with the correct data type.

See Also
CREATE SEQUENCE (Transact-SQL )
DROP SEQUENCE (Transact-SQL )
NEXT VALUE FOR (Transact-SQL )
Sequence Numbers
sp_sequence_get_range (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database (Managed Instance only) Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a server audit object using the SQL Server Audit feature. For more information, see SQL Server Audit
(Database Engine).

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
ALTER SERVER AUDIT audit_name
{
[ TO { { FILE ( <file_options> [, ...n] ) } | APPLICATION_LOG | SECURITY_LOG } ]
[ WITH ( <audit_options> [ , ...n] ) ]
[ WHERE <predicate_expression> ]
}
| REMOVE WHERE
| MODIFY NAME = new_audit_name
[ ; ]

<file_options>::=
{
FILEPATH = 'os_file_path'
| MAXSIZE = { max_size { MB | GB | TB } | UNLIMITED }
| MAX_ROLLOVER_FILES = { integer | UNLIMITED }
| MAX_FILES = integer
| RESERVE_DISK_SPACE = { ON | OFF }
}

<audit_options>::=
{
QUEUE_DELAY = integer
| ON_FAILURE = { CONTINUE | SHUTDOWN | FAIL_OPERATION }
| STATE = { ON | OFF }
}

<predicate_expression>::=
{
[NOT ] <predicate_factor>
[ { AND | OR } [NOT ] { <predicate_factor> } ]
[,...n ]
}

<predicate_factor>::=
event_field_name { = | <> | != | > | >= | < | <= } { number | 'string' }

Arguments
TO { FILE | APPLICATION_LOG | SECURITY_LOG }
Determines the location of the audit target. The options are a binary file, the Windows application log, or the
Windows security log.
FILEPATH = 'os_file_path'
The path of the audit trail. The file name is generated based on the audit name and audit GUID.
MAXSIZE =max_size
Specifies the maximum size to which the audit file can grow. The max_size value must be an integer followed by
MB, GB, TB, or UNLIMITED. The minimum size that you can specify for max_size is 2 MB and the maximum is
2,147,483,647 TB. When UNLIMITED is specified, the file grows until the disk is full. Specifying a value lower
than 2 MB raises the MSG_MAXSIZE_TOO_SMALL error. The default value is UNLIMITED.
MAX_ROLLOVER_FILES =integer | UNLIMITED
Specifies the maximum number of files to retain in the file system. When the setting of
MAX_ROLLOVER_FILES=0, there is no limit imposed on the number of rollover files that are created. The default
value is 0. The maximum number of files that can be specified is 2,147,483,647.
MAX_FILES =integer
Specifies the maximum number of audit files that can be created. Does not roll over to the first file when the limit
is reached. When the MAX_FILES limit is reached, any action that causes additional audit events to be generated
fails with an error.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
RESERVE_DISK_SPACE = { ON | OFF }
This option pre-allocates the file on the disk to the MAXSIZE value. Only applies if MAXSIZE is not equal to
UNLIMITED. The default value is OFF.
QUEUE_DELAY =integer
Determines the time in milliseconds that can elapse before audit actions are forced to be processed. A value of 0
indicates synchronous delivery. The minimum settable query delay value is 1000 (1 second), which is the default.
The maximum is 2,147,483,647 (2,147,483.647 seconds or 24 days, 20 hours, 31 minutes, 23.647 seconds).
Specifying an invalid number raises the MSG_INVALID_QUEUE_DELAY error.
ON_FAILURE = { CONTINUE | SHUTDOWN | FAIL_OPERATION }
Indicates whether the instance writing to the target should fail, continue, or stop if SQL Server cannot write to the
audit log.
CONTINUE
SQL Server operations continue. Audit records are not retained. The audit continues to attempt to log events and
resumes if the failure condition is resolved. Selecting the continue option can allow unaudited activity, which could
violate your security policies. Use this option, when continuing operation of the Database Engine is more
important than maintaining a complete audit.
SHUTDOWN
Forces the instance of SQL Server to shut down, if SQL Server fails to write data to the audit target for any
reason. The login executing the ALTER statement must have the SHUTDOWN permission within SQL Server. The
shutdown behavior persists even if the SHUTDOWN permission is later revoked from the executing login. If the user
does not have this permission, then the statement will fail and the audit will not be modified. Use the option when
an audit failure could compromise the security or integrity of the system. For more information, see
SHUTDOWN.
FAIL_OPERATION
Database actions fail if they cause audited events. Actions that do not cause audited events can continue, but no
audited events can occur. The audit continues to attempt to log events and resumes if the failure condition is
resolved. Use this option when maintaining a complete audit is more important than full access to the Database
Engine.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
STATE = { ON | OFF }
Enables or disables the audit from collecting records. Changing the state of a running audit (from ON to OFF )
creates an audit entry that the audit was stopped, the principal that stopped the audit, and the time the audit was
stopped.
MODIFY NAME = new_audit_name
Changes the name of the audit. Cannot be used with any other option.
predicate_expression
Specifies the predicate expression used to determine if an event should be processed or not. Predicate expressions
are limited to 3000 characters, which limits string arguments.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
event_field_name
Is the name of the event field that identifies the predicate source. Audit fields are described in sys.fn_get_audit_file
(Transact-SQL ). All fields can be audited except file_name and audit_file_offset .
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
number
Is any numeric type including decimal. Limitations are the lack of available physical memory or a number that is
too large to be represented as a 64-bit integer.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
' string '
Either an ANSI or Unicode string as required by the predicate compare. No implicit string type conversion is
performed for the predicate compare functions. Passing the wrong type results in an error.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.

Remarks
You must specify at least one of the TO, WITH, or MODIFY NAME clauses when you call ALTER SERVER AUDIT.
You must set the state of an audit to the OFF option in order to make changes to an audit. If ALTER SERVER
AUDIT is run when an audit is enabled with any options other than STATE=OFF, you receive a
MSG_NEED_AUDIT_DISABLED error message.
You can add, alter, and remove audit specifications without stopping an audit.
You cannot change an audit’s GUID after the audit has been created.

Permissions
To create, alter, or drop a server audit principal, you must have ALTER ANY SERVER AUDIT or the CONTROL
SERVER permission.

Examples
A. Changing a server audit name
The following example changes the name of the server audit HIPAA_Audit to HIPAA_Audit_Old .
USE master
GO
ALTER SERVER AUDIT HIPAA_Audit
WITH (STATE = OFF);
GO
ALTER SERVER AUDIT HIPAA_Audit
MODIFY NAME = HIPAA_Audit_Old;
GO
ALTER SERVER AUDIT HIPAA_Audit_Old
WITH (STATE = ON);
GO

B. Changing a server audit target


The following example changes the server audit called HIPAA_Audit to a file target.

USE master
GO
ALTER SERVER AUDIT HIPAA_Audit
WITH (STATE = OFF);
GO
ALTER SERVER AUDIT HIPAA_Audit
TO FILE (FILEPATH ='\\SQLPROD_1\Audit\',
MAXSIZE = 1000 MB,
RESERVE_DISK_SPACE=OFF)
WITH (QUEUE_DELAY = 1000,
ON_FAILURE = CONTINUE);
GO
ALTER SERVER AUDIT HIPAA_Audit
WITH (STATE = ON);
GO

C. Changing a server audit WHERE clause


The following example modifies the WHERE clause created in example C of CREATE SERVER AUDIT (Transact-
SQL). The new WHERE clause filters for the user-defined event id of 27.

ALTER SERVER AUDIT [FilterForSensitiveData] WITH (STATE = OFF);
GO
ALTER SERVER AUDIT [FilterForSensitiveData]
WHERE user_defined_event_id = 27;
GO
ALTER SERVER AUDIT [FilterForSensitiveData] WITH (STATE = ON);
GO

D. Removing a WHERE clause


The following example removes a WHERE clause predicate expression.

ALTER SERVER AUDIT [FilterForSensitiveData] WITH (STATE = OFF);
GO
ALTER SERVER AUDIT [FilterForSensitiveData]
REMOVE WHERE;
GO
ALTER SERVER AUDIT [FilterForSensitiveData] WITH (STATE = ON);
GO

E. Renaming a server audit


The following example changes the server audit name from FilterForSensitiveData to AuditDataAccess .
ALTER SERVER AUDIT [FilterForSensitiveData] WITH (STATE = OFF)
GO
ALTER SERVER AUDIT [FilterForSensitiveData]
MODIFY NAME = AuditDataAccess;
GO
ALTER SERVER AUDIT [AuditDataAccess] WITH (STATE = ON);
GO

See Also
DROP SERVER AUDIT (Transact-SQL)
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL)
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL)
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL)
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sys.fn_get_audit_file (Transact-SQL)
sys.server_audits (Transact-SQL)
sys.server_file_audits (Transact-SQL)
sys.server_audit_specifications (Transact-SQL)
sys.server_audit_specification_details (Transact-SQL)
sys.database_audit_specifications (Transact-SQL)
sys.database_audit_specification_details (Transact-SQL)
sys.dm_server_audit_status (Transact-SQL)
sys.dm_audit_actions (Transact-SQL)
Create a Server Audit and Server Audit Specification
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a server audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions

Syntax
ALTER SERVER AUDIT SPECIFICATION audit_specification_name
{
[ FOR SERVER AUDIT audit_name ]
[ { { ADD | DROP } ( audit_action_group_name )
} [, ...n] ]
[ WITH ( STATE = { ON | OFF } ) ]
}
[ ; ]

Arguments
audit_specification_name
The name of the audit specification.
audit_name
The name of the audit to which this specification is applied.
audit_action_group_name
Name of a group of server-level auditable actions. For a list of Audit Action Groups, see SQL Server Audit Action
Groups and Actions.
WITH ( STATE = { ON | OFF } )
Enables or disables the audit from collecting records for this audit specification.
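As a sketch, the STATE option alone stops or restarts collection for a specification (this assumes the
HIPPA_Audit_Specification specification from the example below exists):

-- Resume collection for an existing audit specification.
ALTER SERVER AUDIT SPECIFICATION HIPPA_Audit_Specification
WITH (STATE = ON);
GO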

Remarks
You must set the state of an audit specification to the OFF option to make changes to an audit specification. If
ALTER SERVER AUDIT SPECIFICATION is executed when an audit specification is enabled with any options
other than STATE=OFF, you will receive an error message.

Permissions
Users with the ALTER ANY SERVER AUDIT permission can alter server audit specifications and bind them to any
audit.
After a server audit specification is created, it can be viewed by principals with the CONTROL SERVER or ALTER
ANY SERVER AUDIT permission, by the sysadmin account, or by principals having explicit access to the audit.
Examples
The following example alters a server audit specification called HIPPA_Audit_Specification . It drops the audit
action group for failed logins and adds an audit action group for database object access for a SQL Server audit
called HIPPA_Audit .

ALTER SERVER AUDIT SPECIFICATION HIPPA_Audit_Specification
FOR SERVER AUDIT HIPPA_Audit
    DROP (FAILED_LOGIN_GROUP)
    ADD (DATABASE_OBJECT_ACCESS_GROUP);
GO

For a full example about how to create an audit, see SQL Server Audit (Database Engine).

See Also
CREATE SERVER AUDIT (Transact-SQL)
ALTER SERVER AUDIT (Transact-SQL)
DROP SERVER AUDIT (Transact-SQL)
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL)
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL)
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sys.fn_get_audit_file (Transact-SQL)
sys.server_audits (Transact-SQL)
sys.server_file_audits (Transact-SQL)
sys.server_audit_specifications (Transact-SQL)
sys.server_audit_specification_details (Transact-SQL)
sys.database_audit_specifications (Transact-SQL)
sys.database_audit_specification_details (Transact-SQL)
sys.dm_server_audit_status (Transact-SQL)
sys.dm_audit_actions (Transact-SQL)
Create a Server Audit and Server Audit Specification
ALTER SERVER CONFIGURATION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies global configuration settings for the current server in SQL Server.
Transact-SQL Syntax Conventions

Syntax
ALTER SERVER CONFIGURATION
SET <optionspec>
[;]

<optionspec> ::=
{
<process_affinity>
| <diagnostic_log>
| <failover_cluster_property>
| <hadr_cluster_context>
| <buffer_pool_extension>
| <soft_numa>
}

<process_affinity> ::=
PROCESS AFFINITY
{
CPU = { AUTO | <CPU_range_spec> }
| NUMANODE = <NUMA_node_range_spec>
}
<CPU_range_spec> ::=
{ CPU_ID | CPU_ID TO CPU_ID } [ ,...n ]

<NUMA_node_range_spec> ::=
{ NUMA_node_ID | NUMA_node_ID TO NUMA_node_ID } [ ,...n ]

<diagnostic_log> ::=
DIAGNOSTICS LOG
{
ON
| OFF
| PATH = { 'os_file_path' | DEFAULT }
| MAX_SIZE = { 'log_max_size' MB | DEFAULT }
| MAX_FILES = { 'max_file_count' | DEFAULT }
}

<failover_cluster_property> ::=
FAILOVER CLUSTER PROPERTY <resource_property>
<resource_property> ::=
{
VerboseLogging = { 'logging_detail' | DEFAULT }
| SqlDumperDumpFlags = { 'dump_file_type' | DEFAULT }
| SqlDumperDumpPath = { 'os_file_path' | DEFAULT }
| SqlDumperDumpTimeOut = { 'dump_time-out' | DEFAULT }
| FailureConditionLevel = { 'failure_condition_level' | DEFAULT }
| HealthCheckTimeout = { 'health_check_time-out' | DEFAULT }
}

<hadr_cluster_context> ::=
HADR CLUSTER CONTEXT = { 'remote_windows_cluster' | LOCAL }

<buffer_pool_extension>::=
BUFFER POOL EXTENSION
{ ON ( FILENAME = 'os_file_path_and_name' , SIZE = <size_spec> )
| OFF }

<size_spec> ::=
{ size [ KB | MB | GB ] }

<soft_numa> ::=
SET SOFTNUMA
{ ON | OFF }

Arguments
<process_affinity> ::=
PROCESS AFFINITY
Enables hardware threads to be associated with CPUs.
CPU = { AUTO | <CPU_range_spec> }
Distributes SQL Server worker threads to each CPU within the specified range. CPUs outside the specified range
will not have assigned threads.
AUTO
Specifies that no thread is assigned a CPU. The operating system can freely move threads among CPUs based on
the server workload. This is the default and recommended setting.
<CPU_range_spec> ::=
Specifies the CPU or range of CPUs to assign threads to.
{ CPU_ID | CPU_ID TO CPU_ID } [ ,...n ]
Is the list of one or more CPUs. CPU IDs begin at 0 and are integer values.
NUMANODE = <NUMA_node_range_spec>
Assigns threads to all CPUs that belong to the specified NUMA node or range of nodes.
<NUMA_node_range_spec> ::=
Specifies the NUMA node or range of NUMA nodes.
{ NUMA_node_ID | NUMA_node_ID TO NUMA_node_ID } [ ,...n ]
Is the list of one or more NUMA nodes. NUMA node IDs begin at 0 and are integer values.
<diagnostic_log> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
DIAGNOSTICS LOG
Starts or stops logging diagnostic data captured by the sp_server_diagnostics procedure, and sets SQLDIAG log
configuration parameters such as the log file rollover count, log file size, and file location. For more information,
see View and Read Failover Cluster Instance Diagnostics Log.
ON
Starts SQL Server logging diagnostic data in the location specified in the PATH file option. This is the default.
OFF
Stops logging diagnostic data.
PATH = { 'os_file_path' | DEFAULT }
Path indicating the location of the diagnostic logs. The default location is <\MSSQL\Log> within the installation
folder of the SQL Server failover cluster instance.
MAX_SIZE = { 'log_max_size' MB | DEFAULT }
Maximum size in megabytes to which each diagnostic log can grow. The default is 100 MB.
MAX_FILES = { 'max_file_count' | DEFAULT }
Maximum number of diagnostic log files that can be stored on the computer before they are recycled for new
diagnostic logs.
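For example, a minimal sketch that caps the number of retained diagnostic log files at 10:

-- Recycle diagnostic logs after 10 files have accumulated.
ALTER SERVER CONFIGURATION
SET DIAGNOSTICS LOG MAX_FILES = 10;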
<failover_cluster_property> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
FAILOVER CLUSTER PROPERTY
Modifies the SQL Server resource private failover cluster properties.
VERBOSE LOGGING = { 'logging_detail' | DEFAULT }
Sets the logging level for SQL Server Failover Clustering. It can be turned on to provide additional details in the
error logs for troubleshooting.
0 - Logging is turned off (default)
1 - Errors only
2 - Errors and warnings
SQLDUMPERDUMPFLAGS = { 'dump_file_type' | DEFAULT }
Determines the type of dump files generated by the SQL Server SQLDumper utility. The default setting is 0. For more
information, see SQL Server Dumper Utility Knowledgebase article.
SQLDUMPERDUMPPATH = { 'os_file_path' | DEFAULT }
The location where the SQLDumper utility stores the dump files. For more information, see SQL Server Dumper
Utility Knowledgebase article.
SQLDUMPERDUMPTIMEOUT = { 'dump_time-out' | DEFAULT }
The time-out value in milliseconds for the SQLDumper utility to generate a dump in case of a SQL Server failure.
The default value is 0, which means there is no time limit to complete the dump. For more information, see SQL
Server Dumper Utility Knowledgebase article.
FAILURECONDITIONLEVEL = { 'failure_condition_level' | DEFAULT }
The conditions under which the SQL Server failover cluster instance should fail over or restart. The default value is
3, which means that the SQL Server resource will fail over or restart on critical server errors. For more information
about this and other failure condition levels, see Configure FailureConditionLevel Property Settings.
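As a sketch, the following statement sets the failure condition level explicitly to its default value of 3:

-- Fail over or restart only on critical server errors.
ALTER SERVER CONFIGURATION
SET FAILOVER CLUSTER PROPERTY FailureConditionLevel = 3;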
HEALTHCHECKTIMEOUT = { 'health_check_time-out' | DEFAULT }
The time-out value for how long the SQL Server Database Engine resource DLL should wait for the server health
information before it considers the instance of SQL Server as unresponsive. The time-out value is expressed in
milliseconds. The default is 60000 milliseconds (60 seconds).
<hadr_cluster_context> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
HADR CLUSTER CONTEXT = { 'remote_windows_cluster' | LOCAL }
Switches the HADR cluster context of the server instance to the specified Windows Server Failover Cluster
(WSFC). The HADR cluster context determines which WSFC manages the metadata for availability replicas hosted
by the server instance. Use the SET HADR CLUSTER CONTEXT option only during a cross-cluster migration of
Always On availability groups to an instance of SQL Server 2012 SP1 (11.0.3x) or a higher version on a new
WSFC cluster.
You can switch the HADR cluster context only from the local WSFC to a remote WSFC and then back from the
remote WSFC to the local WSFC. The HADR cluster context can be switched to a remote cluster only when the
instance of SQL Server is not hosting any availability replicas.
A remote HADR cluster context can be switched back to the local cluster at any time. However, the context cannot
be switched again as long as the server instance is hosting any availability replicas.
To identify the destination cluster, specify one of the following values:
windows_cluster
The network name of a WSFC. You can specify either the short name or the full domain name. To find the target IP
address of a short name, ALTER SERVER CONFIGURATION uses DNS resolution. Under some situations, a short
name could cause confusion, and DNS could return the wrong IP address. Therefore, we recommend that you
specify the full domain name.
NOTE
A cross-cluster migration using this setting is no longer supported. To perform a cross-cluster migration, use a Distributed
Availability Group or some other method such as log shipping.

LOCAL
The local WSFC.
For more information, see Change the HADR Cluster Context of Server Instance (SQL Server).
<buffer_pool_extension>::=
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
ON
Enables the buffer pool extension option. This option extends the size of the buffer pool by using nonvolatile
storage such as solid-state drives (SSD ) to persist clean data pages in the pool. For more information about this
feature, see Buffer Pool Extension. The buffer pool extension is not available in every SQL Server edition. For more
information, see Editions and Supported Features for SQL Server 2016.
FILENAME = 'os_file_path_and_name'
Defines the directory path and name of the buffer pool extension cache file. The file extension must be specified as
.BPE. You must turn off BUFFER POOL EXTENSION before you can modify FILENAME.
SIZE = size [ KB | MB | GB ]
Defines the size of the cache. The default size specification is KB. The minimum size is the size of Max Server
Memory. The maximum limit is 32 times the size of Max Server Memory. For more information about Max Server
Memory, see sp_configure (Transact-SQL ).
You must turn BUFFER POOL EXTENSION off before you can modify the size of the file. To specify a size that is
smaller than the current size, the instance of SQL Server must be restarted to reclaim memory. Otherwise, the
specified size must be the same as or larger than the current size.
OFF
Disables the buffer pool extension option. You must disable the buffer pool extension option before you modify any
associated parameters such as the size or name of the file. When this option is disabled, all related configuration
information is removed from the registry.

WARNING
Disabling the buffer pool extension might have a negative impact on server performance because the buffer pool is
significantly reduced in size.

<soft_numa>
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
ON
Enables automatic partitioning to split large NUMA hardware nodes into smaller NUMA nodes. Changing the
running value requires a restart of the database engine.
OFF
Disables automatic software partitioning of large NUMA hardware nodes into smaller NUMA nodes. Changing
the running value requires a restart of the database engine.
WARNING
There are known issues with the behavior of the ALTER SERVER CONFIGURATION statement with the SOFT NUMA option
and SQL Server Agent. The following is the recommended sequence of operations:
1) Stop the instance of SQL Server Agent.
2) Execute the ALTER SERVER CONFIGURATION SET SOFTNUMA statement.
3) Restart the SQL Server instance.
4) Start the instance of SQL Server Agent.

More Information: If an ALTER SERVER CONFIGURATION with SET SOFTNUMA command is executed
before the SQL Server service is restarted, then when the SQL Server Agent service is stopped, it will execute a T-
SQL RECONFIGURE command that will revert the SOFTNUMA settings back to what they were before the
ALTER SERVER CONFIGURATION.
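A minimal sketch of step 2 in the sequence above (the new value takes effect only after the Database Engine is
restarted):

-- Enable automatic soft-NUMA partitioning; requires a restart to take effect.
ALTER SERVER CONFIGURATION
SET SOFTNUMA ON;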

General Remarks
This statement does not require a restart of SQL Server, unless explicitly stated otherwise. In the case of a SQL
Server failover cluster instance, it does not require a restart of the SQL Server cluster resource.

Limitations and Restrictions


This statement does not support DDL triggers.

Permissions
Requires ALTER SETTINGS permission for the process affinity option; ALTER SETTINGS and VIEW SERVER
STATE permissions for the diagnostic log and failover cluster property options; and CONTROL SERVER
permission for the HADR cluster context option.
Requires ALTER SERVER STATE permission for the buffer pool extension option.
The SQL Server Database Engine resource DLL runs under the Local System account. Therefore, the Local System
account must have read and write access to the specified path in the Diagnostic Log option.

Examples
The examples in this section are organized by category, with the featured syntax elements for each:
Setting process affinity: CPU • NUMANODE • AUTO
Setting diagnostic log options: ON • OFF • PATH • MAX_SIZE
Setting failover cluster properties: HealthCheckTimeout
Changing the cluster context of an availability replica: 'windows_cluster'
Setting the buffer pool extension: BUFFER POOL EXTENSION

Setting process affinity


The examples in this section show how to set process affinity to CPUs and NUMA nodes. The examples assume
that the server contains 256 CPUs arranged into four groups of four NUMA nodes each (16 NUMA nodes in
total, 16 CPUs per node). Threads are not assigned to any NUMA node or CPU.
Group 0: NUMA nodes 0 through 3, CPUs 0 to 63
Group 1: NUMA nodes 4 through 7, CPUs 64 to 127
Group 2: NUMA nodes 8 through 11, CPUs 128 to 191
Group 3: NUMA nodes 12 through 15, CPUs 192 to 255
A. Setting affinity to all CPUs in groups 0 and 2
The following example sets affinity to all the CPUs in groups 0 and 2.

ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU=0 TO 63, 128 TO 191;

B. Setting affinity to all CPUs in NUMA nodes 0 and 7


The following example sets the CPU affinity to nodes 0 and 7 only.

ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY NUMANODE=0, 7;

C. Setting affinity to CPUs 60 through 200


The following example sets affinity to CPUs 60 through 200.

ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU=60 TO 200;

D. Setting affinity to CPU 0 on a system that has two CPUs


The following example sets the affinity to CPU=0 on a computer that has two CPUs. Before the statement is
executed, the internal affinity bitmask is 00.

ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU=0;

E. Setting affinity to AUTO


The following example sets affinity to AUTO .

ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU=AUTO;

Setting diagnostic log options


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
The examples in this section show how to set the values for the diagnostic log option.
A. Starting diagnostic logging
The following example starts the logging of diagnostic data.

ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG ON;

B. Stopping diagnostic logging


The following example stops the logging of diagnostic data.

ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG OFF;

C. Specifying the location of the diagnostic logs


The following example sets the location of the diagnostic logs to the specified file path.
ALTER SERVER CONFIGURATION
SET DIAGNOSTICS LOG PATH = 'C:\logs';

D. Specifying the maximum size of each diagnostic log


The following example sets the maximum size of each diagnostic log to 10 megabytes.

ALTER SERVER CONFIGURATION
SET DIAGNOSTICS LOG MAX_SIZE = 10 MB;

Setting failover cluster properties


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
The following example illustrates setting the values of the SQL Server failover cluster resource properties.
A. Specifying the value for the HealthCheckTimeout property
The following example sets the HealthCheckTimeout option to 15,000 milliseconds (15 seconds).

ALTER SERVER CONFIGURATION
SET FAILOVER CLUSTER PROPERTY HealthCheckTimeout = 15000;

B. Changing the cluster context of an availability replica


The following example changes the HADR cluster context of the instance of SQL Server. To specify the destination
WSFC cluster, clus01 , the example specifies the full cluster object name, clus01.xyz.com .

ALTER SERVER CONFIGURATION SET HADR CLUSTER CONTEXT = 'clus01.xyz.com';

Setting Buffer Pool Extension Options


A. Setting the buffer pool extension option
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
The following example enables the buffer pool extension option and specifies a file name and size.

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\Example.BPE', SIZE = 50 GB);

B. Modifying buffer pool extension parameters


The following example modifies the size of a buffer pool extension file. The buffer pool extension option must be
disabled before any of the parameters are modified.

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF;
GO
EXEC sp_configure 'max server memory (MB)', 12000;
GO
RECONFIGURE;
GO
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(FILENAME = 'F:\SSDCACHE\Example.BPE', SIZE = 60 GB);
GO

See Also
Soft-NUMA (SQL Server)
Change the HADR Cluster Context of Server Instance (SQL Server)
sys.dm_os_schedulers (Transact-SQL)
sys.dm_os_memory_nodes (Transact-SQL)
sys.dm_os_buffer_pool_extension_configuration (Transact-SQL)
Buffer Pool Extension
ALTER SERVER ROLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the membership of a server role or changes the name of a user-defined server role. Fixed server roles
cannot be renamed.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

ALTER SERVER ROLE server_role_name
{
    [ ADD MEMBER server_principal ]
    | [ DROP MEMBER server_principal ]
    | [ WITH NAME = new_server_role_name ]
} [ ; ]

-- Syntax for Parallel Data Warehouse

ALTER SERVER ROLE server_role_name ADD MEMBER login;

ALTER SERVER ROLE server_role_name DROP MEMBER login;

Arguments
server_role_name
Is the name of the server role to be changed.
ADD MEMBER server_principal
Adds the specified server principal to the server role. server_principal can be a login or a user-defined server role.
server_principal cannot be a fixed server role, a database role, or sa.
DROP MEMBER server_principal
Removes the specified server principal from the server role. server_principal can be a login or a user-defined
server role. server_principal cannot be a fixed server role, a database role, or sa.
WITH NAME =new_server_role_name
Specifies the new name of the user-defined server role. This name cannot already exist in the server.

Remarks
Changing the name of a user-defined server role does not change the ID number, owner, or permissions of the role.
For changing role membership, ALTER SERVER ROLE replaces sp_addsrvrolemember and sp_dropsrvrolemember.
These stored procedures are deprecated.
You can view server roles by querying the sys.server_role_members and sys.server_principals catalog views.
To change the owner of a user-defined server role, use ALTER AUTHORIZATION (Transact-SQL ).

Permissions
Requires ALTER ANY SERVER ROLE permission on the server to change the name of a user-defined server role.
Fixed server roles
To add a member to a fixed server role, you must be a member of that fixed server role, or be a member of the
sysadmin fixed server role.

NOTE
The CONTROL SERVER and ALTER ANY SERVER ROLE permissions are not sufficient to execute ALTER SERVER ROLE for a
fixed server role, and ALTER permission cannot be granted on a fixed server role.

User-defined server roles


To add a member to a user-defined server role, you must be a member of the sysadmin fixed server role or have
CONTROL SERVER or ALTER ANY SERVER ROLE permission. Or you must have ALTER permission on that role.

NOTE
Unlike fixed server roles, members of a user-defined server role do not inherently have permission to add members to that
same role.

Examples
A. Changing the name of a server role
The following example creates a server role named Product, and then changes the name of the server role to
Production.

CREATE SERVER ROLE Product;
ALTER SERVER ROLE Product WITH NAME = Production;
GO

B. Adding a domain account to a server role


The following example adds a domain account named adventure-works\roberto0 to the user-defined server role
named Production .

ALTER SERVER ROLE Production ADD MEMBER [adventure-works\roberto0] ;

C. Adding a SQL Server login to a server role


The following example adds a SQL Server login named Ted to the diskadmin fixed server role.

ALTER SERVER ROLE diskadmin ADD MEMBER Ted;
GO

D. Removing a domain account from a server role


The following example removes a domain account named adventure-works\roberto0 from the user-defined
server role named Production .
ALTER SERVER ROLE Production DROP MEMBER [adventure-works\roberto0] ;

E. Removing a SQL Server login from a server role


The following example removes the SQL Server login Ted from the diskadmin fixed server role.

ALTER SERVER ROLE diskadmin DROP MEMBER Ted;
GO

F. Granting a login the permission to add logins to a user-defined server role


The following example allows Ted to add other logins to the user-defined server role named Production .

GRANT ALTER ON SERVER ROLE::Production TO Ted;
GO

G. To view role membership


To view role membership, use the Server Role (Members) page in SQL Server Management Studio or execute
the following query:

SELECT SRM.role_principal_id, SP.name AS Role_Name,
    SRM.member_principal_id, SP2.name AS Member_Name
FROM sys.server_role_members AS SRM
JOIN sys.server_principals AS SP
    ON SRM.role_principal_id = SP.principal_id
JOIN sys.server_principals AS SP2
    ON SRM.member_principal_id = SP2.principal_id
ORDER BY SP.name, SP2.name;

Examples: Parallel Data Warehouse


H. Basic Syntax
The following example adds the login Anna to the LargeRC server role.

ALTER SERVER ROLE LargeRC ADD MEMBER Anna;

I. Remove a login from a resource class.


The following example drops Anna’s membership in the LargeRC server role.

ALTER SERVER ROLE LargeRC DROP MEMBER Anna;

See Also
CREATE SERVER ROLE (Transact-SQL)
DROP SERVER ROLE (Transact-SQL)
CREATE ROLE (Transact-SQL)
ALTER ROLE (Transact-SQL)
DROP ROLE (Transact-SQL)
Security Stored Procedures (Transact-SQL)
Security Functions (Transact-SQL)
Principals (Database Engine)
sys.server_role_members (Transact-SQL)
sys.server_principals (Transact-SQL)
ALTER SERVICE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes an existing service.
Transact-SQL Syntax Conventions

Syntax
ALTER SERVICE service_name
[ ON QUEUE [ schema_name . ]queue_name ]
[ ( < opt_arg > [ , ...n ] ) ]
[ ; ]

<opt_arg> ::=
ADD CONTRACT contract_name | DROP CONTRACT contract_name

Arguments
service_name
Is the name of the service to change. Server, database, and schema names cannot be specified.
ON QUEUE [ schema_name. ] queue_name
Specifies the new queue for this service. Service Broker moves all messages for this service from the current
queue to the new queue.
ADD CONTRACT contract_name
Specifies a contract to add to the contract set exposed by this service.
DROP CONTRACT contract_name
Specifies a contract to delete from the contract set exposed by this service. Service Broker sends an error message
on any existing conversations with this service that use this contract.

Remarks
When the ALTER SERVICE statement deletes a contract from a service, the service can no longer be a target for
conversations that use that contract. Therefore, Service Broker does not allow new conversations to the service on
that contract. Existing conversations that use the contract are unaffected.
To alter the AUTHORIZATION for a service, use the ALTER AUTHORIZATION statement.

Permissions
Permission for altering a service defaults to the owner of the service, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.

Examples
A. Changing the queue for a service
The following example changes the //Adventure-Works.com/Expenses service to use the queue NewQueue .

ALTER SERVICE [//Adventure-Works.com/Expenses]
    ON QUEUE NewQueue;

B. Adding a new contract to the service


The following example changes the //Adventure-Works.com/Expenses service to allow dialogs on the contract
//Adventure-Works.com/Expenses .

ALTER SERVICE [//Adventure-Works.com/Expenses]
    (ADD CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission]);

C. Adding a new contract to the service, dropping existing contract


The following example changes the //Adventure-Works.com/Expenses service to allow dialogs on the contract
//Adventure-Works.com/Expenses/ExpenseProcessing and to disallow dialogs on the contract
//Adventure-Works.com/Expenses/ExpenseSubmission .

ALTER SERVICE [//Adventure-Works.com/Expenses]
    (ADD CONTRACT [//Adventure-Works.com/Expenses/ExpenseProcessing],
     DROP CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission]);

See Also
CREATE SERVICE (Transact-SQL)
DROP SERVICE (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER SERVICE MASTER KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the service master key of an instance of SQL Server.
Transact-SQL Syntax Conventions

Syntax
ALTER SERVICE MASTER KEY
[ { <regenerate_option> | <recover_option> } ] [;]

<regenerate_option> ::=
[ FORCE ] REGENERATE

<recover_option> ::=
{ WITH OLD_ACCOUNT = 'account_name' , OLD_PASSWORD = 'password' }
|
{ WITH NEW_ACCOUNT = 'account_name' , NEW_PASSWORD = 'password' }

Arguments
FORCE
Indicates that the service master key should be regenerated, even at the risk of data loss. For more information,
see Changing the SQL Server Service Account later in this topic.
REGENERATE
Indicates that the service master key should be regenerated.
OLD_ACCOUNT ='account_name'
Specifies the name of the old Windows service account.

WARNING
This option is obsolete. Do not use. Use SQL Server Configuration Manager instead.

OLD_PASSWORD ='password'
Specifies the password of the old Windows service account.

WARNING
This option is obsolete. Do not use. Use SQL Server Configuration Manager instead.

NEW_ACCOUNT ='account_name'
Specifies the name of the new Windows service account.
WARNING
This option is obsolete. Do not use. Use SQL Server Configuration Manager instead.

NEW_PASSWORD ='password'
Specifies the password of the new Windows service account.

WARNING
This option is obsolete. Do not use. Use SQL Server Configuration Manager instead.

Remarks
The service master key is automatically generated the first time it is needed to encrypt a linked server password,
credential, or database master key. The service master key is encrypted using the local machine key or the
Windows Data Protection API. This API uses a key that is derived from the Windows credentials of the SQL
Server service account.
SQL Server 2012 (11.x) uses the AES encryption algorithm to protect the service master key (SMK) and the
database master key (DMK). AES is a newer encryption algorithm than 3DES used in earlier versions. After
upgrading an instance of the Database Engine to SQL Server 2012 (11.x) the SMK and DMK should be
regenerated in order to upgrade the master keys to AES. For more information about regenerating the DMK, see
ALTER MASTER KEY (Transact-SQL ).

Changing the SQL Server Service Account


To change the SQL Server service account, use SQL Server Configuration Manager. To manage a change of the
service account, SQL Server stores a redundant copy of the service master key protected by the machine account
that has the necessary permissions granted to the SQL Server service group. If the computer is rebuilt, the same
domain user that was previously used by the service account can recover the service master key. This does not
work with local accounts or the Local System, Local Service, or Network Service accounts. When you are moving
SQL Server to another computer, migrate the service master key by using backup and restore.
The REGENERATE phrase regenerates the service master key. When the service master key is regenerated, SQL
Server decrypts all the keys that have been encrypted with it, and then encrypts them with the new service master
key. This is a resource-intensive operation. You should schedule this operation during a period of low demand,
unless the key has been compromised. If any one of the decryptions fail, the whole statement fails.
The FORCE option causes the key regeneration process to continue even if the process cannot retrieve the current
master key, or cannot decrypt all the private keys that are encrypted with it. Use FORCE only if regeneration fails
and you cannot restore the service master key by using the RESTORE SERVICE MASTER KEY statement.
Caution

The service master key is the root of the SQL Server encryption hierarchy. The service master key directly or
indirectly protects all other keys and secrets in the tree. If a dependent key cannot be decrypted during a forced
regeneration, the data the key secures will be lost.
If you move SQL Server to another computer, you must use the same service account to decrypt the SMK; SQL
Server will fix the machine account encryption automatically.

Permissions
Requires CONTROL SERVER permission on the server.
Examples
The following example regenerates the service master key.

ALTER SERVICE MASTER KEY REGENERATE;
GO
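If regeneration fails because a dependent key cannot be decrypted and the service master key cannot be restored
with RESTORE SERVICE MASTER KEY, a forced regeneration can be attempted. This is a sketch only; as noted
above, FORCE can permanently lose any secrets that cannot be decrypted.

-- Last resort: regenerate even if some protected keys cannot be decrypted.
ALTER SERVICE MASTER KEY FORCE REGENERATE;
GO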

See Also
RESTORE SERVICE MASTER KEY (Transact-SQL)
BACKUP SERVICE MASTER KEY (Transact-SQL)
Encryption Hierarchy
ALTER SYMMETRIC KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a symmetric key.
Transact-SQL Syntax Conventions

Syntax
ALTER SYMMETRIC KEY Key_name <alter_option>

<alter_option> ::=
ADD ENCRYPTION BY <encrypting_mechanism> [ , ... n ]
|
DROP ENCRYPTION BY <encrypting_mechanism> [ , ... n ]
<encrypting_mechanism> ::=
CERTIFICATE certificate_name
|
PASSWORD = 'password'
|
SYMMETRIC KEY Symmetric_Key_Name
|
ASYMMETRIC KEY Asym_Key_Name

Arguments
Key_name
Is the name by which the symmetric key to be changed is known in the database.
ADD ENCRYPTION BY
Adds encryption by using the specified method.
DROP ENCRYPTION BY
Drops encryption by the specified method. You cannot remove all the encryptions from a symmetric key.
CERTIFICATE Certificate_name
Specifies the certificate that is used to encrypt the symmetric key. This certificate must already exist in the
database.
PASSWORD ='password'
Specifies the password that is used to encrypt the symmetric key. password must meet the Windows password
policy requirements of the computer that is running the instance of SQL Server.
SYMMETRIC KEY Symmetric_Key_Name
Specifies the symmetric key that is used to encrypt the symmetric key that is being changed. This symmetric key
must already exist in the database and must be open.
ASYMMETRIC KEY Asym_Key_Name
Specifies the asymmetric key that is used to encrypt the symmetric key that is being changed. This asymmetric key
must already exist in the database.
Remarks
Caution

When a symmetric key is encrypted with a password instead of with the public key of the database master key, the
TRIPLE_DES encryption algorithm is used. Because of this, keys that are created with a strong encryption
algorithm, such as AES, are themselves secured by a weaker algorithm.
To change the encryption of the symmetric key, use the ADD ENCRYPTION and DROP ENCRYPTION phrases. It
is never possible for a key to be entirely without encryption. For this reason, the best practice is to add the new
form of encryption before removing the old form of encryption.
To change the owner of a symmetric key, use ALTER AUTHORIZATION.

NOTE
The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128
when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of the AES
algorithms instead. In SQL Server 2012 (11.x) material encrypted using RC4 or RC4_128 can be decrypted in any
compatibility level.

Permissions
Requires ALTER permission on the symmetric key. If adding encryption by a certificate or asymmetric key,
requires VIEW DEFINITION permission on the certificate or asymmetric key. If dropping encryption by a
certificate or asymmetric key, requires CONTROL permission on the certificate or asymmetric key.

Examples
The following example changes the encryption method that is used to protect a symmetric key. The symmetric key
JanainaKey043 was encrypted by using certificate Shipping04 when it was created. Because the key can never be
stored unencrypted, this example adds encryption by password, and then removes the encryption by
certificate.

CREATE SYMMETRIC KEY JanainaKey043 WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE Shipping04;
-- Open the key.
OPEN SYMMETRIC KEY JanainaKey043 DECRYPTION BY CERTIFICATE Shipping04
WITH PASSWORD = '<enterStrongPasswordHere>';
-- First, encrypt the key with a password.
ALTER SYMMETRIC KEY JanainaKey043
ADD ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>';
-- Now remove encryption by the certificate.
ALTER SYMMETRIC KEY JanainaKey043
DROP ENCRYPTION BY CERTIFICATE Shipping04;
CLOSE SYMMETRIC KEY JanainaKey043;

See Also
CREATE SYMMETRIC KEY (Transact-SQL)
OPEN SYMMETRIC KEY (Transact-SQL)
CLOSE SYMMETRIC KEY (Transact-SQL)
DROP SYMMETRIC KEY (Transact-SQL)
Encryption Hierarchy
ALTER TABLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies a table definition by altering, adding, or dropping columns and constraints, reassigning and
rebuilding partitions, or disabling or enabling constraints and triggers.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

ALTER TABLE [ database_name . [ schema_name ] . | schema_name . ] table_name
{
ALTER COLUMN column_name
{
[ type_schema_name. ] type_name
[ (
{
precision [ , scale ]
| max
| xml_schema_collection
}
) ]
[ COLLATE collation_name ]
[ NULL | NOT NULL ] [ SPARSE ]
| { ADD | DROP }
{ ROWGUIDCOL | PERSISTED | NOT FOR REPLICATION | SPARSE | HIDDEN }
| { ADD | DROP } MASKED [ WITH ( FUNCTION = ' mask_function ') ]
}
[ WITH ( ONLINE = ON | OFF ) ]
| [ WITH { CHECK | NOCHECK } ]

| ADD
{
<column_definition>
| <computed_column_definition>
| <table_constraint>
| <column_set_definition>
} [ ,...n ]
| [ system_start_time_column_name datetime2 GENERATED ALWAYS AS ROW START
[ HIDDEN ] [ NOT NULL ] [ CONSTRAINT constraint_name ]
DEFAULT constant_expression [WITH VALUES] ,
system_end_time_column_name datetime2 GENERATED ALWAYS AS ROW END
[ HIDDEN ] [ NOT NULL ] [ CONSTRAINT constraint_name ]
DEFAULT constant_expression [WITH VALUES] ,
]
PERIOD FOR SYSTEM_TIME ( system_start_time_column_name, system_end_time_column_name )
| DROP
[ {
[ CONSTRAINT ] [ IF EXISTS ]
{
constraint_name
[ WITH
( <drop_clustered_constraint_option> [ ,...n ] )
]
} [ ,...n ]
| COLUMN [ IF EXISTS ]
{
column_name
} [ ,...n ]
| PERIOD FOR SYSTEM_TIME
} [ ,...n ]
| [ WITH { CHECK | NOCHECK } ] { CHECK | NOCHECK } CONSTRAINT
{ ALL | constraint_name [ ,...n ] }

| { ENABLE | DISABLE } TRIGGER


{ ALL | trigger_name [ ,...n ] }

| { ENABLE | DISABLE } CHANGE_TRACKING


[ WITH ( TRACK_COLUMNS_UPDATED = { ON | OFF } ) ]

| SWITCH [ PARTITION source_partition_number_expression ]


TO target_table
[ PARTITION target_partition_number_expression ]
[ WITH ( <low_priority_lock_wait> ) ]
| SET
(
[ FILESTREAM_ON =
{ partition_scheme_name | filegroup | "default" | "NULL" } ]
| SYSTEM_VERSIONING =
{
OFF
| ON
[ ( HISTORY_TABLE = schema_name . history_table_name
[, DATA_CONSISTENCY_CHECK = { ON | OFF } ]
[, HISTORY_RETENTION_PERIOD =
{
INFINITE | number {DAY | DAYS | WEEK | WEEKS
| MONTH | MONTHS | YEAR | YEARS }
}
]
)
]
}
)
| REBUILD
[ [PARTITION = ALL]
[ WITH ( <rebuild_option> [ ,...n ] ) ]
| [ PARTITION = partition_number
[ WITH ( <single_partition_rebuild_option> [ ,...n ] ) ]
]
]

| <table_option>

| <filetable_option>

| <stretch_configuration>

}
[ ; ]

-- ALTER TABLE options

<column_set_definition> ::=
column_set_name XML COLUMN_SET FOR ALL_SPARSE_COLUMNS

<drop_clustered_constraint_option> ::=
{
{
MAXDOP = max_degree_of_parallelism
| ONLINE = { ON | OFF }
| MOVE TO
{ partition_scheme_name ( column_name ) | filegroup | "default" }
}
<table_option> ::=
{
SET ( LOCK_ESCALATION = { AUTO | TABLE | DISABLE } )
}

<filetable_option> ::=
{
[ { ENABLE | DISABLE } FILETABLE_NAMESPACE ]
[ SET ( FILETABLE_DIRECTORY = directory_name ) ]
}

<stretch_configuration> ::=
{
SET (
REMOTE_DATA_ARCHIVE
{
= ON ( <table_stretch_options> )
| = OFF_WITHOUT_DATA_RECOVERY ( MIGRATION_STATE = PAUSED )
| ( <table_stretch_options> [, ...n] )
}
)
}

<table_stretch_options> ::=
{
[ FILTER_PREDICATE = { null | table_predicate_function } , ]
MIGRATION_STATE = { OUTBOUND | INBOUND | PAUSED }
}

<single_partition_rebuild_option> ::=
{
SORT_IN_TEMPDB = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE} }
| ONLINE = { ON [( <low_priority_lock_wait> ) ] | OFF }
}

<low_priority_lock_wait>::=
{
WAIT_AT_LOW_PRIORITY ( MAX_DURATION = <time> [ MINUTES ],
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS } )
}
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

ALTER TABLE [ database_name . [schema_name ] . | schema_name. ] source_table_name
{
ALTER COLUMN column_name
{
type_name [ ( precision [ , scale ] ) ]
[ COLLATE Windows_collation_name ]
[ NULL | NOT NULL ]
}
| ADD { <column_definition> | <column_constraint> FOR column_name} [ ,...n ]
| DROP { COLUMN column_name | [CONSTRAINT] constraint_name } [ ,...n ]
| REBUILD {
[ PARTITION = ALL [ WITH ( <rebuild_option> ) ] ]
| [ PARTITION = partition_number [ WITH ( <single_partition_rebuild_option> ] ]
}
| { SPLIT | MERGE } RANGE (boundary_value)
| SWITCH [ PARTITION source_partition_number
TO target_table_name [ PARTITION target_partition_number ]
}
[;]

<column_definition>::=
{
column_name
type_name [ ( precision [ , scale ] ) ]
[ <column_constraint> ]
[ COLLATE Windows_collation_name ]
[ NULL | NOT NULL ]
}

<column_constraint>::=
[ CONSTRAINT constraint_name ] DEFAULT constant_expression

<rebuild_option> ::=
{
DATA_COMPRESSION = { COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( {<partition_number> [ TO <partition_number>] } [ , ...n ] ) ]
}

<single_partition_rebuild_option> ::=
{
DATA_COMPRESSION = { COLUMNSTORE | COLUMNSTORE_ARCHIVE }
}

Arguments
database_name
Is the name of the database in which the table was created.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to be altered. If the table is not in the current database or is not contained by the
schema owned by the current user, the database and schema must be explicitly specified.
ALTER COLUMN
Specifies that the named column is to be changed or altered.
The modified column cannot be any one of the following:
A column with a timestamp data type.
The ROWGUIDCOL for the table.
A computed column or used in a computed column.
Used in statistics generated by the CREATE STATISTICS statement unless the column is a varchar,
nvarchar, or varbinary data type, the data type is not changed, and the new size is equal to or greater
than the old size, or if the column is changed from not null to null. First, remove the statistics using the
DROP STATISTICS statement. Statistics that are automatically generated by the query optimizer are
automatically dropped by ALTER COLUMN.
Used in a PRIMARY KEY or [FOREIGN KEY ] REFERENCES constraint.
Used in a CHECK or UNIQUE constraint. However, changing the length of a variable-length column
used in a CHECK or UNIQUE constraint is allowed.
Associated with a default definition. However, the length, precision, or scale of a column can be
changed if the data type is not changed.
The data type of text, ntext and image columns can be changed only in the following ways:
text to varchar(max), nvarchar(max), or xml
ntext to varchar(max), nvarchar(max), or xml
image to varbinary(max)
Some data type changes may cause a change in the data. For example, changing an nchar or nvarchar
column to char or varchar may cause the conversion of extended characters. For more information, see
CAST and CONVERT (Transact-SQL ). Reducing the precision or scale of a column may cause data truncation.

NOTE
The data type of a column of a partitioned table cannot be changed.
The data type of columns included in an index cannot be changed unless the column is a varchar, nvarchar, or
varbinary data type, and the new size is equal to or larger than the old size.
A column included in a primary key constraint cannot be changed from NOT NULL to NULL.

If the column being modified is encrypted using ENCRYPTED WITH , you can change the datatype to a
compatible datatype (such as INT to BIGINT) but you cannot change any encryption settings.
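As an illustrative sketch of one of the legal data type changes listed above (the table and column names here are
hypothetical), a legacy text column can be converted in place to varchar(max):

-- dbo.LegacyDocs and Body are placeholder names.
ALTER TABLE dbo.LegacyDocs ALTER COLUMN Body varchar(max);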
column_name
Is the name of the column to be altered, added, or dropped. column_name can be a maximum of 128
characters. For new columns, column_name can be omitted for columns created with a timestamp data type.
The name timestamp is used if no column_name is specified for a timestamp data type column.
[ type_schema_name. ] type_name
Is the new data type for the altered column, or the data type for the added column. type_name cannot be
specified for existing columns of partitioned tables. type_name can be any one of the following:
A SQL Server system data type.
An alias data type based on a SQL Server system data type. Alias data types are created with the
CREATE TYPE statement before they can be used in a table definition.
A .NET Framework user-defined type, and the schema to which it belongs. .NET Framework user-
defined types are created with the CREATE TYPE statement before they can be used in a table
definition.
The following are criteria for type_name of an altered column:
The previous data type must be implicitly convertible to the new data type.
type_name cannot be timestamp.
ANSI_NULL defaults are always on for ALTER COLUMN; if not specified, the column is nullable.
ANSI_PADDING padding is always ON for ALTER COLUMN.
If the modified column is an identity column, new_data_type must be a data type that supports the identity
property.
The current setting for SET ARITHABORT is ignored. ALTER TABLE operates as if ARITHABORT is set to
ON.

NOTE
If the COLLATE clause is not specified, changing the data type of a column will cause a collation change to the default
collation of the database.

precision
Is the precision for the specified data type. For more information about valid precision values, see Precision,
Scale, and Length (Transact-SQL ).
scale
Is the scale for the specified data type. For more information about valid scale values, see Precision, Scale, and
Length (Transact-SQL ).
max
Applies only to the varchar, nvarchar, and varbinary data types for storing 2^31-1 bytes of character,
binary data, and of Unicode data.
xml_schema_collection
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Applies only to the xml data type for associating an XML schema with the type. Before typing an xml column
to a schema collection, the schema collection must first be created in the database by using CREATE XML
SCHEMA COLLECTION.
COLLATE < collation_name >
Specifies the new collation for the altered column. If not specified, the column is assigned the default collation of
the database. Collation name can be either a Windows collation name or a SQL collation name. For a list and
more information, see Windows Collation Name (Transact-SQL) and SQL Server Collation Name (Transact-SQL).
The COLLATE clause can be used to change the collations only of columns of the char, varchar, nchar, and
nvarchar data types. To change the collation of a user-defined alias data type column, you must execute
separate ALTER TABLE statements to change the column to a SQL Server system data type and change its
collation, and then change the column back to an alias data type.
ALTER COLUMN cannot have a collation change if one or more of the following conditions exist:
If a CHECK constraint, FOREIGN KEY constraint, or computed columns reference the column changed.
If any index, statistics, or full-text index are created on the column. Statistics created automatically on the
column changed are dropped if the column collation is changed.
If a schema-bound view or function references the column.
For more information, see COLLATE (Transact-SQL).
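A minimal sketch (hypothetical table, column, and collation); the nullability is restated because ANSI_NULL
defaults apply to ALTER COLUMN, as noted below:

-- dbo.MyTable and NameCol are placeholder names.
ALTER TABLE dbo.MyTable
ALTER COLUMN NameCol varchar(50) COLLATE Latin1_General_100_CI_AS NULL;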
NULL | NOT NULL
Specifies whether the column can accept null values. Columns that do not allow null values can be added with
ALTER TABLE only if they have a default specified or if the table is empty. NOT NULL can be specified for
computed columns only if PERSISTED is also specified. If the new column allows null values and no default is
specified, the new column contains a null value for each row in the table. If the new column allows null values
and a default definition is added with the new column, WITH VALUES can be used to store the default value
in the new column for each existing row in the table.
If the new column does not allow null values and the table is not empty, a DEFAULT definition must be added
with the new column, and the new column automatically loads with the default value in the new columns in
each existing row.
NULL can be specified in ALTER COLUMN to force a NOT NULL column to allow null values, except for
columns in PRIMARY KEY constraints. NOT NULL can be specified in ALTER COLUMN only if the column
contains no null values. The null values must be updated to some value before the ALTER COLUMN NOT
NULL is allowed, for example:

UPDATE MyTable SET NullCol = N'some_value' WHERE NullCol IS NULL;
ALTER TABLE MyTable ALTER COLUMN NullCol NVARCHAR(20) NOT NULL;

When you create or alter a table with the CREATE TABLE or ALTER TABLE statements, the database and
session settings influence and possibly override the nullability of the data type that is used in a column
definition. We recommend that you always explicitly define a column as NULL or NOT NULL for
noncomputed columns.
If you add a column with a user-defined data type, we recommend that you define the column with the same
nullability as the user-defined data type and specify a default value for the column. For more information, see
CREATE TABLE (Transact-SQL ).

NOTE
If NULL or NOT NULL is specified with ALTER COLUMN, new_data_type [(precision [, scale ])] must also be specified. If
the data type, precision, and scale are not changed, specify the current column values.

[ {ADD | DROP } ROWGUIDCOL ]


Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies the ROWGUIDCOL property is added to or dropped from the specified column. ROWGUIDCOL
indicates that the column is a row GUID column. Only one uniqueidentifier column per table can be
designated as the ROWGUIDCOL column, and the ROWGUIDCOL property can be assigned only to a
uniqueidentifier column. ROWGUIDCOL cannot be assigned to a column of a user-defined data type.
ROWGUIDCOL does not enforce uniqueness of the values that are stored in the column and does not
automatically generate values for new rows that are inserted into the table. To generate unique values for
each column, either use the NEWID or NEWSEQUENTIALID function on INSERT statements or specify the
NEWID or NEWSEQUENTIALID function as the default for the column.
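For example, a sketch that adds the property to an existing uniqueidentifier column (hypothetical names):

-- GuidCol must be a uniqueidentifier column.
ALTER TABLE dbo.MyTable
ALTER COLUMN GuidCol ADD ROWGUIDCOL;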
[ {ADD | DROP } PERSISTED ]
Specifies that the PERSISTED property is added to or dropped from the specified column. The column must
be a computed column that is defined with a deterministic expression. For columns specified as PERSISTED,
the Database Engine physically stores the computed values in the table and updates the values when any
other columns on which the computed column depends are updated. By marking a computed column as
PERSISTED, you can create indexes on computed columns defined on expressions that are deterministic, but
not precise. For more information, see Indexes on Computed Columns.
Any computed column that is used as a partitioning column of a partitioned table must be explicitly marked
PERSISTED.
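As a sketch, assuming TotalPrice is a computed column defined with a deterministic expression (hypothetical
names):

-- Physically store the computed values so they can be indexed.
ALTER TABLE dbo.MyTable
ALTER COLUMN TotalPrice ADD PERSISTED;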
DROP NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies that values are incremented in identity columns when replication agents perform insert operations.
This clause can be specified only if column_name is an identity column.
SPARSE
Indicates that the column is a sparse column. The storage of sparse columns is optimized for null values.
Sparse columns cannot be designated as NOT NULL. Converting a column from sparse to nonsparse or from
nonsparse to sparse locks the table for the duration of the command execution. You may need to use the
REBUILD clause to reclaim any space savings. For additional restrictions and more information about sparse
columns, see Use Sparse Columns.
ADD MASKED WITH ( FUNCTION = ' mask_function ')
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Specifies a dynamic data mask. mask_function is the name of the masking function with the appropriate
parameters. Four functions are available:
default()
email()
partial()
random()
To drop a mask, use DROP MASKED . For function parameters, see Dynamic Data Masking.
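For example, a sketch that masks and then unmasks a hypothetical Email column:

-- dbo.Customers and Email are placeholder names.
ALTER TABLE dbo.Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
GO
ALTER TABLE dbo.Customers
ALTER COLUMN Email DROP MASKED;
GO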

WITH ( ONLINE = ON | OFF ) <as applies to altering a column>


Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Allows many alter column actions to be performed while the table remains available. Default is OFF. Alter
column can be performed online for column changes related to data type, column length or precision,
nullability, sparseness, and collation.
Online alter column allows user created and auto statistics to reference the altered column for the duration of
the ALTER COLUMN operation. This allows queries to perform as usual. At the end of the operation, auto-
stats that reference the column are dropped and user-created stats are invalidated. The user must manually
update user-generated statistics after the operation is completed. If the column is part of a filter expression for
any statistics or indexes then you cannot perform an alter column operation.
While the online alter column operation is running, all operations that could take a dependency on the
column (index, views, etc.) will block or fail with an appropriate error. This guarantees that online alter
column will not fail because of dependencies introduced while the operation was running.
Altering a column from NOT NULL to NULL is not supported as an online operation when the altered
column is referenced by nonclustered indexes.
Online alter is not supported when the column is referenced by a check constraint and the alter
operation is restricting the precision of the column (numeric or datetime).
The WAIT_AT_LOW_PRIORITY option cannot be used with online alter column.
ALTER COLUMN … ADD/DROP PERSISTED is not supported for online alter column.
ALTER COLUMN … ADD/DROP ROWGUIDCOL/NOT FOR REPLICATION is not affected by online alter column.
Online alter column does not support altering a table where change tracking is enabled or that is a
publisher of merge replication.
Online alter column does not support altering from or to CLR data types.
Online alter column does not support altering to an XML data type that has a schema collection
different than the current schema collection.
Online alter column does not reduce the restrictions on when a column can be altered. References by
index/stats, etc. might cause the alter to fail.
Online alter column does not support altering more than one column concurrently.
Online alter column has no effect on a system-versioned temporal table; the ALTER COLUMN operation
is never performed online, regardless of the value specified for the ONLINE option.
Online alter column has similar requirements, restrictions, and functionality as online index rebuild. This
includes:
Online index rebuild is not supported when the table contains legacy LOB or filestream columns or
when the table has a columnstore index. The same limitations apply for online alter column.
Altering an existing column requires twice the space allocation: one allocation for the original
column and one for the newly created hidden column.
The locking strategy during an alter column online operation follows the same locking pattern used for
online index build.
WITH CHECK | WITH NOCHECK
Specifies whether the data in the table is or is not validated against a newly added or re-enabled FOREIGN
KEY or CHECK constraint. If not specified, WITH CHECK is assumed for new constraints, and WITH
NOCHECK is assumed for re-enabled constraints.
If you do not want to verify new CHECK or FOREIGN KEY constraints against existing data, use WITH
NOCHECK. We do not recommend doing this, except in rare cases. The new constraint will be evaluated in all
later data updates. Any constraint violations that are suppressed by WITH NOCHECK when the constraint is
added may cause future updates to fail if they update rows with data that does not comply with the constraint.
The query optimizer does not consider constraints that are defined WITH NOCHECK. Such constraints are
ignored until they are re-enabled by using ALTER TABLE table WITH CHECK CHECK CONSTRAINT ALL .
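As a sketch (assuming hypothetical dbo.Orders and dbo.Customers tables), the typical pattern is to add the constraint without validation and re-validate later:

-- Add a FOREIGN KEY without checking existing rows.
ALTER TABLE dbo.Orders WITH NOCHECK
    ADD CONSTRAINT FK_Orders_Customers FOREIGN KEY (CustomerID)
    REFERENCES dbo.Customers (CustomerID) ;
GO
-- Later, validate the data and re-enable all constraints so the
-- query optimizer can consider them again.
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT ALL ;
GO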
ADD
Specifies that one or more column definitions, computed column definitions, or table constraints are added,
or the columns that the system will use for system versioning.
PERIOD FOR SYSTEM_TIME ( system_start_time_column_name, system_end_time_column_name )
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Specifies the names of the columns that the system will use to record the period for which a record is valid.
You can specify existing columns or create new columns as part of the ADD PERIOD FOR SYSTEM_TIME
argument. The columns must be of the datetime2 data type and must be defined as NOT NULL. If a period
column is defined as NULL, an error will be thrown. You can define a column_constraint (Transact-SQL )
and/or Specify Default Values for Columns for the system_start_time and system_end_time columns. See
Example A in the System Versioning examples below demonstrating the use of a default value for the
system_end_time column.
Use this argument in conjunction with the SET SYSTEM_VERSIONING argument to enable system
versioning on an existing table. For more information, see Temporal Tables and Getting Started with Temporal
Tables in Azure SQL Database.
As of SQL Server 2017, users can mark one or both period columns with the HIDDEN flag to
implicitly hide these columns such that SELECT * FROM <table> does not return a value for those columns.
By default, period columns are not hidden. In order to be used, hidden columns must be explicitly included in
all queries that directly reference the temporal table.
DROP
Specifies that one or more column definitions, computed column definitions, or table constraints are dropped,
or to drop the specification for the columns that the system will use for system versioning.
CONSTRAINT constraint_name
Specifies that constraint_name is removed from the table. Multiple constraints can be listed.
The user-defined or system-supplied name of the constraint can be determined by querying the
sys.check_constraints, sys.default_constraints, sys.key_constraints, and sys.foreign_keys catalog views.
A PRIMARY KEY constraint cannot be dropped if an XML index exists on the table.
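One way to list the constraint names defined on a table (the table name here is hypothetical) is to query sys.objects, which covers the catalog views listed above:

-- List all constraints defined on a given table.
SELECT name, type_desc
FROM sys.objects
WHERE parent_object_id = OBJECT_ID(N'dbo.doc_exc')
    AND type_desc LIKE '%CONSTRAINT' ;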
COLUMN column_name
Specifies that constraint_name or column_name is removed from the table. Multiple columns can be listed.
A column cannot be dropped when it is:
Used in an index.
Used in a CHECK, FOREIGN KEY, UNIQUE, or PRIMARY KEY constraint.
Associated with a default that is defined with the DEFAULT keyword, or bound to a default object.
Bound to a rule.

NOTE
Dropping a column does not reclaim the disk space of the column. You may have to reclaim the disk space of a
dropped column when the row size of a table is near, or has exceeded, its limit. Reclaim space by creating a clustered
index on the table or rebuilding an existing clustered index by using ALTER INDEX. For information about the impact of
dropping LOB data types, see this CSS blog entry.

PERIOD FOR SYSTEM_TIME


Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Drops the specification for the columns that the system will use for system versioning.
WITH <drop_clustered_constraint_option>
Specifies that one or more drop clustered constraint options are set.
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Overrides the max degree of parallelism configuration option only for the duration of the operation. For
more information, see Configure the max degree of parallelism Server Configuration Option.
Use the MAXDOP option to limit the number of processors used in parallel plan execution. The maximum is
64 processors.
max_degree_of_parallelism can be one of the following values:
1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel index operation to the specified number.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.

NOTE
Parallel index operations are not available in every edition of SQL Server. For more information, see Editions and
Supported Features for SQL Server 2016.
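For example, a sketch (table and constraint names hypothetical) that limits the drop of a clustered PRIMARY KEY constraint to four processors:

ALTER TABLE dbo.TransactionHistory
    DROP CONSTRAINT PK_TransactionHistory
    WITH (MAXDOP = 4) ;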

ONLINE = { ON | OFF } <as applies to drop_clustered_constraint_option>


Specifies whether underlying tables and associated indexes are available for queries and data modification
during the index operation. The default is OFF. REBUILD can be performed as an ONLINE operation.
ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS ) lock is held on the source table. This enables queries or updates to the
underlying table and indexes to continue. At the start of the operation, a Shared (S ) lock is held on the source
object for a very short time. At the end of the operation, for a short time, an S (Shared) lock is acquired on the
source if a nonclustered index is being created; or an SCH-M (Schema Modification) lock is acquired when a
clustered index is created or dropped online and when a clustered or nonclustered index is being rebuilt.
ONLINE cannot be set to ON when an index is being created on a local temporary table. Only a single-
threaded heap rebuild operation is allowed.
To execute the DDL for SWITCH or online index rebuild, all active blocking transactions running on a
particular table must be completed. When executing, the SWITCH or rebuild operation prevents new
transactions from starting and might significantly affect the workload throughput and temporarily delay access
to the underlying table.
OFF
Table locks are applied for the duration of the index operation. An offline index operation that creates,
rebuilds, or drops a clustered index, or rebuilds or drops a nonclustered index, acquires a Schema
modification (Sch-M ) lock on the table. This prevents all user access to the underlying table for the duration of
the operation. An offline index operation that creates a nonclustered index acquires a Shared (S ) lock on the
table. This prevents updates to the underlying table but allows read operations, such as SELECT statements.
Multi-threaded heap rebuild operations are allowed.
For more information, see How Online Index Operations Work.

NOTE
Online index operations are not available in every edition of SQL Server. For more information, see Editions and
Supported Features for SQL Server 2016.

MOVE TO { partition_scheme_name(column_name [ 1, ... n] ) | filegroup | "default" }


Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies a location to move the data rows currently in the leaf level of the clustered index. The table is moved
to the new location. This option applies only to constraints that create a clustered index.

NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in MOVE
TO "default" or MOVE TO [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the
current session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).
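For example, a sketch (table, constraint, and filegroup names hypothetical) that drops a clustered PRIMARY KEY constraint and moves the resulting heap to another filegroup:

ALTER TABLE dbo.TransactionHistory
    DROP CONSTRAINT PK_TransactionHistory
    WITH (MOVE TO SecondaryFG) ;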
{ CHECK | NOCHECK } CONSTRAINT
Specifies that constraint_name is enabled or disabled. This option can only be used with FOREIGN KEY and
CHECK constraints. When NOCHECK is specified, the constraint is disabled and future inserts or updates to
the column are not validated against the constraint conditions. DEFAULT, PRIMARY KEY, and UNIQUE
constraints cannot be disabled.
ALL
Specifies that all constraints are either disabled with the NOCHECK option or enabled with the CHECK
option.
{ ENABLE | DISABLE } TRIGGER
Specifies that trigger_name is enabled or disabled. When a trigger is disabled it is still defined for the table;
however, when INSERT, UPDATE, or DELETE statements are executed against the table, the actions in the
trigger are not performed until the trigger is re-enabled.
ALL
Specifies that all triggers in the table are enabled or disabled.
trigger_name
Specifies the name of the trigger to disable or enable.
{ ENABLE | DISABLE } CHANGE_TRACKING
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies whether change tracking is enabled or disabled for the table. By default, change tracking is disabled.
This option is available only when change tracking is enabled for the database. For more information, see
ALTER DATABASE SET Options (Transact-SQL ).
To enable change tracking, the table must have a primary key.
WITH ( TRACK_COLUMNS_UPDATED = { ON | OFF } )
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies whether the Database Engine tracks which change tracked columns were updated. The default value
is OFF.
SWITCH [ PARTITION source_partition_number_expression ] TO [ schema_name. ] target_table [ PARTITION
target_partition_number_expression ]
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Switches a block of data in one of the following ways:
Reassigns all data of a table as a partition to an already-existing partitioned table.
Switches a partition from one partitioned table to another.
Reassigns all data in one partition of a partitioned table to an existing non-partitioned table.
If table is a partitioned table, source_partition_number_expression must be specified. If target_table is
partitioned, target_partition_number_expression must be specified. If reassigning a table's data as a partition
to an already-existing partitioned table, or switching a partition from one partitioned table to another, the
target partition must exist and it must be empty.
If reassigning one partition's data to form a single table, the target table must already be created and it must
be empty. Both the source table or partition, and the target table or partition, must reside in the same
filegroup. The corresponding indexes, or index partitions, must also reside in the same filegroup. Many
additional restrictions apply to switching partitions. table and target_table cannot be the same. target_table
can be a multi-part identifier.
source_partition_number_expression and target_partition_number_expression are constant expressions that
can reference variables and functions. These include user-defined type variables and user-defined functions.
They cannot reference Transact-SQL expressions.
A partitioned table with a clustered columnstore index behaves like a partitioned heap:
The primary key must include the partition key.
A unique index must include the partition key. Note that adding the partition key to an existing
unique index can change the uniqueness.
In order to switch partitions, all non-clustered indexes must include the partition key.
For SWITCH restriction when using replication, see Replicate Partitioned Tables and Indexes.
Nonclustered columnstore indexes built for SQL Server 2016 CTP1, and for SQL Database before version
V12 were in a read-only format. Nonclustered columnstore indexes must be rebuilt to the current format
(which is updatable) before any PARTITION operations can be performed.
SET ( FILESTREAM_ON = { partition_scheme_name | filestream_filegroup_name | "default" | "NULL" })
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies where FILESTREAM data is stored.
ALTER TABLE with the SET FILESTREAM_ON clause will succeed only if the table has no FILESTREAM
columns. The FILESTREAM columns can be added by using a second ALTER TABLE statement.
If partition_scheme_name is specified, the rules for CREATE TABLE apply. The table should already be
partitioned for row data, and its partition scheme must use the same partition function and columns as the
FILESTREAM partition scheme.
filestream_filegroup_name specifies the name of a FILESTREAM filegroup. The filegroup must have one file
that is defined for the filegroup by using a CREATE DATABASE or ALTER DATABASE statement, or an error
is raised.
"default" specifies the FILESTREAM filegroup with the DEFAULT property set. If there is no FILESTREAM
filegroup, an error is raised.
"NULL" specifies that all references to FILESTREAM filegroups for the table will be removed. All
FILESTREAM columns must be dropped first. You must use SET FILESTREAM_ON="NULL" to delete all
FILESTREAM data that is associated with a table.
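A minimal sketch of the two-step pattern (the dbo.Documents table is hypothetical; the database is assumed to already contain a FILESTREAM filegroup, and QUOTED_IDENTIFIER must be ON for "default"):

-- Step 1: assign the FILESTREAM filegroup while the table has no
-- FILESTREAM columns.
ALTER TABLE dbo.Documents
    SET (FILESTREAM_ON = "default") ;
GO
-- Step 2: add the ROWGUIDCOL column required by FILESTREAM and the
-- FILESTREAM column itself.
ALTER TABLE dbo.Documents
    ADD DocGUID uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
        DocContent varbinary(max) FILESTREAM NULL ;
GO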
SET ( SYSTEM_VERSIONING = { OFF | ON [ ( HISTORY_TABLE = schema_name . history_table_name [ ,
DATA_CONSISTENCY_CHECK = { ON | OFF } ] ) ] } )
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Either disables system versioning of a table or enables system versioning of a table. To enable system
versioning of a table, the system verifies that the datatype, nullability constraint, and primary key constraint
requirements for system versioning are met. If the HISTORY_TABLE argument is not used, the system
generates a new history table matching the schema of the current table, creating a link between the two tables
and enables the system to record the history of each record in the current table in the history table. The name
of this history table will be MSSQL_TemporalHistoryFor<primary_table_object_id> . If the HISTORY_TABLE
argument is used to create a link to and use an existing history table, the link is created between the current
table and the specified table. When creating a link to an existing history table, you can choose to perform a
data consistency check. This data consistency check ensures that existing records do not overlap. Performing
the data consistency check is the default. For more information, see Temporal Tables.
HISTORY_RETENTION_PERIOD = { INFINITE | number { DAY | DAYS | WEEK | WEEKS | MONTH |
MONTHS | YEAR | YEARS } }
Applies to: Azure SQL Database.
Specifies finite or infinite retention for historical data in a temporal table. If omitted, infinite retention is
assumed.
SET ( LOCK_ESCALATION = { AUTO | TABLE | DISABLE } )
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies the allowed methods of lock escalation for a table.
AUTO
This option allows SQL Server Database Engine to select the lock escalation granularity that is appropriate
for the table schema.
If the table is partitioned, lock escalation will be allowed to partition. After the lock is escalated to the
partition level, the lock will not be escalated later to TABLE granularity.
If the table is not partitioned, the lock escalation will be done to the TABLE granularity.
TABLE
Lock escalation will be done at table-level granularity regardless of whether the table is partitioned.
TABLE is the default value.
DISABLE
Prevents lock escalation in most cases. Table-level locks are not completely disallowed. For example, when
you are scanning a table that has no clustered index under the serializable isolation level, Database Engine
must take a table lock to protect data integrity.
REBUILD
Use the REBUILD WITH syntax to rebuild an entire table including all the partitions in a partitioned table. If
the table has a clustered index, the REBUILD option rebuilds the clustered index. REBUILD can be performed
as an ONLINE operation.
Use the REBUILD PARTITION syntax to rebuild a single partition in a partitioned table.
PARTITION = ALL
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Rebuilds all partitions when changing the partition compression settings.
REBUILD WITH ( <rebuild_option> )
All options apply to a table with a clustered index. If the table does not have a clustered index, only
some of the options affect the heap structure.
When a specific compression setting is not specified with the REBUILD operation, the current compression
setting for the partition is used. To return the current setting, query the data_compression column in the
sys.partitions catalog view.
For complete descriptions of the rebuild options, see index_option (Transact-SQL ).
DATA_COMPRESSION
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies the data compression option for the specified table, partition number, or range of partitions. The
options are as follows:
NONE
Table or specified partitions are not compressed. This does not apply to columnstore tables.
ROW
Table or specified partitions are compressed by using row compression. This does not apply to columnstore
tables.
PAGE
Table or specified partitions are compressed by using page compression. This does not apply to columnstore
tables.
COLUMNSTORE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Applies only to columnstore tables. COLUMNSTORE specifies to decompress a partition that was
compressed with the COLUMNSTORE_ARCHIVE option. When the data is restored, it will continue to be
compressed with the columnstore compression that is used for all columnstore tables.
COLUMNSTORE_ARCHIVE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Applies only to columnstore tables, which are tables stored with a clustered columnstore index.
COLUMNSTORE_ARCHIVE will further compress the specified partition to a smaller size. This can be used
for archival, or for other situations that require less storage and can afford more time for storage and retrieval.
To rebuild multiple partitions at the same time, see index_option (Transact-SQL ). If the table does not have a
clustered index, changing the data compression rebuilds the heap and the nonclustered indexes. For more
information about compression, see Data Compression.
ONLINE = { ON | OFF } <as applies to single_partition_rebuild_option>
Specifies whether a single partition of the underlying tables and associated indexes are available for queries
and data modification during the index operation. The default is OFF. REBUILD can be performed as an
ONLINE operation.
ON
Long-term table locks are not held for the duration of the index operation. An S lock on the table is required at
the beginning of the index rebuild and a Sch-M lock on the table at the end of the online index rebuild.
Although both locks are short metadata locks, the Sch-M lock in particular must wait for all blocking
transactions to be completed. During the wait time, the Sch-M lock blocks all other transactions that wait
behind this lock when accessing the same table.

NOTE
Online index rebuild can set the low_priority_lock_wait options described later in this section.

OFF
Table locks are applied for the duration of the index operation. This prevents all user access to the underlying
table for the duration of the operation.
column_set_name XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Is the name of the column set. A column set is an untyped XML representation that combines all of the sparse
columns of a table into a structured output. A column set cannot be added to a table that contains sparse
columns. For more information about column sets, see Use Column Sets.
{ ENABLE | DISABLE } FILETABLE_NAMESPACE
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Enables or disables the system-defined constraints on a FileTable. Can only be used with a FileTable.
SET ( FILETABLE_DIRECTORY = directory_name )
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the Windows-compatible FileTable directory name. This name should be unique among all the
FileTable directory names in the database. Uniqueness comparison is case-insensitive, regardless of SQL
collation settings. Can only be used with a FileTable.
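For example, a sketch renaming the directory of a hypothetical FileTable:

ALTER TABLE dbo.DocumentStore
    SET (FILETABLE_DIRECTORY = N'NewDocumentDirectory') ;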

SET (
    REMOTE_DATA_ARCHIVE
    {
        = ON ( <table_stretch_options> )
      | = OFF_WITHOUT_DATA_RECOVERY ( MIGRATION_STATE = PAUSED )
      | ( <table_stretch_options> [, ...n] )
    }
)

Applies to: SQL Server 2017.


Enables or disables Stretch Database for a table. For more info, see Stretch Database.
Enabling Stretch Database for a table
When you enable Stretch for a table by specifying ON , you also have to specify MIGRATION_STATE = OUTBOUND
to begin migrating data immediately, or MIGRATION_STATE = PAUSED to postpone data migration. The default
value is MIGRATION_STATE = OUTBOUND . For more info about enabling Stretch for a table, see Enable Stretch
Database for a table.
Prerequisites. Before you enable Stretch for a table, you have to enable Stretch on the server and on the
database. For more info, see Enable Stretch Database for a database.
Permissions. Enabling Stretch for a database or a table requires db_owner permissions. Enabling Stretch for
a table also requires ALTER permissions on the table.
Disabling Stretch Database for a table
When you disable Stretch for a table, you have two options for the remote data that has already been
migrated to Azure. For more info, see Disable Stretch Database and bring back remote data.
To disable Stretch for a table and copy the remote data for the table from Azure back to SQL Server,
run the following command. This command can't be canceled.

ALTER TABLE <table_name>
    SET ( REMOTE_DATA_ARCHIVE ( MIGRATION_STATE = INBOUND ) ) ;

This operation incurs data transfer costs, and it can't be canceled. For more info, see Data Transfers
Pricing Details.
After all the remote data has been copied from Azure back to SQL Server, Stretch is disabled for the
table.
To disable Stretch for a table and abandon the remote data, run the following command.

ALTER TABLE <table_name>
    SET ( REMOTE_DATA_ARCHIVE = OFF_WITHOUT_DATA_RECOVERY ( MIGRATION_STATE = PAUSED ) ) ;

After you disable Stretch Database for a table, data migration stops and query results no longer
include results from the remote table.
Disabling Stretch does not remove the remote table. If you want to delete the remote table, you have to
drop it by using the Azure management portal.
[ FILTER_PREDICATE = { null | predicate } ]
Applies to: SQL Server 2017.
Optionally specifies a filter predicate to select rows to migrate from a table that contains both historical and
current data. The predicate must call a deterministic inline table-valued function. For more info, see Enable
Stretch Database for a table and Select rows to migrate by using a filter function (Stretch Database).

IMPORTANT
If you provide a filter predicate that performs poorly, data migration also performs poorly. Stretch Database applies the
filter predicate to the table by using the CROSS APPLY operator.

If you don't specify a filter predicate, the entire table is migrated.


When you specify a filter predicate, you also have to specify MIGRATION_STATE.
MIGRATION_STATE = { OUTBOUND | INBOUND | PAUSED }
Applies to: SQL Server 2017.
Specify OUTBOUND to migrate data from SQL Server to Azure.
Specify INBOUND to copy the remote data for the table from Azure back to SQL Server and to disable
Stretch for the table. For more info, see Disable Stretch Database and bring back remote data.
This operation incurs data transfer costs, and it can't be canceled.
Specify PAUSED to pause or postpone data migration. For more info, see Pause and resume data
migration (Stretch Database).
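A minimal sketch combining FILTER_PREDICATE and MIGRATION_STATE (the table, column, and function names are hypothetical; the filter must be a deterministic, schema-bound inline table-valued function):

-- Only rows dated before 2016 are eligible for migration.
CREATE FUNCTION dbo.fn_stretchpredicate (@date datetime2)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
    WHERE @date < CONVERT(datetime2, '1/1/2016', 101) ;
GO
ALTER TABLE dbo.StretchTable
    SET (REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_stretchpredicate(EntryDate),
        MIGRATION_STATE = OUTBOUND)) ;
GO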
WAIT_AT_LOW_PRIORITY
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
An online index rebuild has to wait for blocking operations on this table. WAIT_AT_LOW_PRIORITY
indicates that the online index rebuild operation will wait for low priority locks, allowing other operations to
proceed while the online index build operation is waiting. Omitting the WAIT_AT_LOW_PRIORITY option is
equivalent to WAIT_AT_LOW_PRIORITY ( MAX_DURATION = 0 minutes, ABORT_AFTER_WAIT = NONE) .
MAX_DURATION = time [MINUTES ]
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
The wait time (an integer value specified in minutes) that the SWITCH or online index rebuild locks will wait
with low priority when executing the DDL command. If the operation is blocked for the MAX_DURATION
time, one of the ABORT_AFTER_WAIT actions will be executed. MAX_DURATION time is always in
minutes, and the word MINUTES can be omitted.
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS }
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
NONE
Continue waiting for the lock with normal (regular) priority.
SELF
Exit the SWITCH or online index rebuild DDL operation currently being executed without taking any action.
BLOCKERS
Kill all user transactions that currently block the SWITCH or online index rebuild DDL operation so that the
operation can continue.
Requires ALTER ANY CONNECTION permission.
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version) and Azure SQL Database.
Conditionally drops the column or constraint only if it already exists.
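For example, a sketch (column and constraint names hypothetical) that succeeds even when the objects have already been removed:

ALTER TABLE dbo.doc_exb DROP COLUMN IF EXISTS column_d ;
ALTER TABLE dbo.doc_exb DROP CONSTRAINT IF EXISTS my_constraint ;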

Remarks
To add new rows of data, use INSERT. To remove rows of data, use DELETE or TRUNCATE TABLE. To change
the values in existing rows, use UPDATE.
If there are any execution plans in the procedure cache that reference the table, ALTER TABLE marks them to
be recompiled on their next execution.

Changing the Size of a Column


You can change the length, precision, or scale of a column by specifying a new size for the column data type in
the ALTER COLUMN clause. If data exists in the column, the new size cannot be smaller than the maximum
size of the data. Also, the column cannot be defined in an index, unless the column is a varchar, nvarchar, or
varbinary data type and the index is not the result of a PRIMARY KEY constraint. See example B ("Changing
the size of a column") under Altering a Column Definition in the Examples section.

Locks and ALTER TABLE


The changes specified in ALTER TABLE are implemented immediately. If the changes require modifications of
the rows in the table, ALTER TABLE updates the rows. ALTER TABLE acquires a schema modify (SCH-M ) lock
on the table to make sure that no other connections reference even the metadata for the table during the
change, except online index operations that require a very short SCH-M lock at the end. In an
ALTER TABLE…SWITCH operation, the lock is acquired on both the source and target tables. The modifications
made to the table are logged and fully recoverable. Changes that affect all the rows in very large tables, such
as dropping a column or, on some editions of SQL Server, adding a NOT NULL column with a default value,
can take a long time to complete and generate many log records. These ALTER TABLE statements should be
executed with the same care as any INSERT, UPDATE, or DELETE statement that affects many rows.
Adding NOT NULL Columns as an Online Operation
Starting with SQL Server 2012 (11.x) Enterprise Edition, adding a NOT NULL column with a default value is
an online operation when the default value is a runtime constant. This means that the operation is completed
almost instantaneously regardless of the number of rows in the table. This is because the existing rows in the
table are not updated during the operation; instead, the default value is stored only in the metadata of the
table and the value is looked up as needed in queries that access these rows. This behavior is automatic; no
additional syntax is required to implement the online operation beyond the ADD COLUMN syntax. A runtime
constant is an expression that produces the same value at runtime for each row in the table regardless of its
determinism. For example, the constant expression "My temporary data", or the system function
GETUTCDATE(), are runtime constants. In contrast, the functions NEWID() or NEWSEQUENTIALID() are not
runtime constants because a unique value is produced for each row in the table. Adding a NOT NULL column
with a default value that is not a runtime constant is always performed offline and an exclusive (SCH-M ) lock
is acquired for the duration of the operation.
While the existing rows reference the value stored in metadata, the default value is stored on the row for any
new rows that are inserted and do not specify another value for the column. The default value stored in
metadata is moved to an existing row when the row is updated (even if the actual column is not specified in
the UPDATE statement), or if the table or clustered index is rebuilt.
Columns of type varchar(max), nvarchar(max), varbinary(max), xml, text, ntext, image, hierarchyid,
geometry, geography, or CLR UDTs cannot be added in an online operation. A column cannot be added
online if doing so causes the maximum possible row size to exceed the 8,060 byte limit. The column is added
as an offline operation in this case.
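As a sketch of the metadata-only behavior described above (the table name is hypothetical; Enterprise Edition is assumed), the following add completes almost instantly because the default is a runtime constant:

-- Existing rows are not touched; the default is recorded in metadata.
ALTER TABLE dbo.LargeTable
    ADD LoadDate datetime2 NOT NULL
    CONSTRAINT DF_LargeTable_LoadDate DEFAULT SYSUTCDATETIME() ;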

Parallel Plan Execution


In Microsoft SQL Server 2012 Enterprise and higher, the number of processors employed to run a single
ALTER TABLE ADD (index based) CONSTRAINT or DROP (clustered index) CONSTRAINT statement is
determined by the max degree of parallelism configuration option and the current workload. If the
Database Engine detects that the system is busy, the degree of parallelism of the operation is automatically
reduced before statement execution starts. You can manually configure the number of processors that are
used to run the statement by specifying the MAXDOP option. For more information, see Configure the max
degree of parallelism Server Configuration Option.

Partitioned Tables
In addition to performing SWITCH operations that involve partitioned tables, ALTER TABLE can be used to
change the state of the columns, constraints, and triggers of a partitioned table just like it is used for
nonpartitioned tables. However, this statement cannot be used to change the way the table itself is
partitioned. To repartition a partitioned table, use ALTER PARTITION SCHEME and ALTER PARTITION
FUNCTION. Additionally, you cannot change the data type of a column of a partitioned table.

Restrictions on Tables with Schema-Bound Views


The restrictions that apply to ALTER TABLE statements on tables with schema-bound views are the same as
the restrictions currently applied when modifying tables with a simple index. Adding a column is allowed.
However, removing or changing a column that participates in any schema-bound view is not allowed. If the
ALTER TABLE statement requires changing a column used in a schema-bound view, ALTER TABLE fails and
the Database Engine raises an error message. For more information about schema binding and indexed
views, see CREATE VIEW (Transact-SQL ).
Adding or removing triggers on base tables is not affected by creating a schema-bound view that references
the tables.

Indexes and ALTER TABLE


Indexes created as part of a constraint are dropped when the constraint is dropped. Indexes that were created
with CREATE INDEX must be dropped with DROP INDEX. The ALTER INDEX statement can be used to
rebuild an index part of a constraint definition; the constraint does not have to be dropped and added again
with ALTER TABLE.
All indexes and constraints based on a column must be removed before the column can be removed.
When a constraint that created a clustered index is deleted, the data rows that were stored in the leaf level of
the clustered index are stored in a nonclustered table. You can drop the clustered index and move the
resulting table to another filegroup or partition scheme in a single transaction by specifying the MOVE TO
option. The MOVE TO option has the following restrictions:
MOVE TO is not valid for indexed views or nonclustered indexes.
The partition scheme or filegroup must already exist.
If MOVE TO is not specified, the table will be located in the same partition scheme or filegroup as was
defined for the clustered index.
When you drop a clustered index, you can specify ONLINE = ON option so the DROP INDEX transaction
does not block queries and modifications to the underlying data and associated nonclustered indexes.
ONLINE = ON has the following restrictions:
ONLINE = ON is not valid for clustered indexes that are also disabled. Disabled indexes must be dropped
by using ONLINE = OFF.
Only one index at a time can be dropped.
ONLINE = ON is not valid for indexed views, nonclustered indexes or indexes on local temp tables.
ONLINE = ON is not valid for columnstore indexes.
Temporary disk space equal to the size of the existing clustered index is required to drop a clustered index.
This additional space is released as soon as the operation is completed.

NOTE
The options listed under <drop_clustered_constraint_option> apply to clustered indexes on tables and cannot be
applied to clustered indexes on views or nonclustered indexes.

Replicating Schema Changes


By default, when you run ALTER TABLE on a published table at a SQL Server Publisher, that change is
propagated to all SQL Server Subscribers. This functionality has some restrictions and can be disabled. For
more information, see Make Schema Changes on Publication Databases.

Data Compression
System tables cannot be enabled for compression. If the table is a heap, the rebuild operation for ONLINE
mode will be single threaded. Use OFFLINE mode for a multi-threaded heap rebuild operation. For more
information about data compression, see Data Compression.
To evaluate how changing the compression state will affect a table, an index, or a partition, use the
sp_estimate_data_compression_savings stored procedure.
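For example, estimating the savings of PAGE compression for a hypothetical table:

EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'T1',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE' ;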
The following restrictions apply to partitioned tables:
You cannot change the compression setting of a single partition if the table has nonaligned indexes.
The ALTER TABLE <table> REBUILD PARTITION ... syntax rebuilds the specified partition.
The ALTER TABLE <table> REBUILD WITH ... syntax rebuilds all partitions.

Dropping NTEXT Columns


When dropping NTEXT columns, the cleanup of the deleted data occurs as a serialized operation on all rows.
This can require a substantial amount of time. When dropping an NTEXT column from a table with a large
number of rows, update the NTEXT column to NULL first, and then drop the column. This can be performed with parallel
operations and can be much faster.
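A sketch of that pattern (table and column names hypothetical):

-- Null out the NTEXT data first; this can run as a parallel operation.
UPDATE dbo.LegacyDocs SET Notes = NULL ;
GO
-- Then drop the now-empty column.
ALTER TABLE dbo.LegacyDocs DROP COLUMN Notes ;
GO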

Online Index Rebuild


In order to execute the DDL statement for an online index rebuild, all active blocking transactions running on
a particular table must be completed. When the online index rebuild executes, it blocks all new transactions
that are ready to start execution on this table. Although the duration of the lock for the online index rebuild is
very short, waiting for all open transactions on a given table to complete and blocking new transactions from
starting might significantly affect throughput, causing a workload slowdown or timeout, and significantly
limiting access to the underlying table. The WAIT_AT_LOW_PRIORITY option allows DBAs to manage the S
lock and Sch-M locks required for online index rebuilds and lets them select one of three options. In all three
cases, if there are no blocking activities during the wait time (MAX_DURATION = n [MINUTES]), the online index
rebuild is executed immediately without waiting and the DDL statement is completed.
Compatibility Support
The ALTER TABLE statement allows only two-part (schema.object) table names. In SQL Server 2017,
specifying a table name using the following formats fails at compile time with error 117.
server.database.schema.table
.database.schema.table
..schema.table
In earlier versions specifying the format server.database.schema.table returned error 4902. Specifying the
format .database.schema.table or the format ..schema.table succeeded.
To resolve the problem, remove the use of a 4-part prefix.

Permissions
Requires ALTER permission on the table.
ALTER TABLE permissions apply to both tables involved in an ALTER TABLE SWITCH statement. Any data
that is switched inherits the security of the target table.
If any columns in the ALTER TABLE statement are defined to be of a common language runtime (CLR ) user-
defined type or alias data type, REFERENCES permission on the type is required.
Adding a column that updates the rows of the table requires UPDATE permission on the table. For example,
adding a NOT NULL column with a default value or adding an identity column when the table is not empty.

Examples
CATEGORY                                          FEATURED SYNTAX ELEMENTS

Adding columns and constraints                    ADD • PRIMARY KEY with index options • sparse columns and column sets

Dropping columns and constraints                  DROP

Altering a column definition                      change data type • change column size • collation

Altering a table definition                       DATA_COMPRESSION • SWITCH PARTITION • LOCK ESCALATION • change tracking

Disabling and enabling constraints and triggers   CHECK • NOCHECK • ENABLE TRIGGER • DISABLE TRIGGER

Adding Columns and Constraints


Examples in this section demonstrate adding columns and constraints to a table.
A. Adding a new column
The following example adds a column that allows null values and has no values provided through a DEFAULT
definition. In the new column, each row will have NULL .

CREATE TABLE dbo.doc_exa (column_a INT) ;


GO
ALTER TABLE dbo.doc_exa ADD column_b VARCHAR(20) NULL ;
GO
B. Adding a column with a constraint
The following example adds a new column with a UNIQUE constraint.

CREATE TABLE dbo.doc_exc (column_a INT) ;


GO
ALTER TABLE dbo.doc_exc ADD column_b VARCHAR(20) NULL
CONSTRAINT exb_unique UNIQUE ;
GO
EXEC sp_help doc_exc ;
GO
DROP TABLE dbo.doc_exc ;
GO

C. Adding an unverified CHECK constraint to an existing column


The following example adds a constraint to an existing column in the table. The column has a value that
violates the constraint. Therefore, WITH NOCHECK is used to prevent the constraint from being validated against
existing rows, and to allow for the constraint to be added.

CREATE TABLE dbo.doc_exd ( column_a INT) ;


GO
INSERT INTO dbo.doc_exd VALUES (-1) ;
GO
ALTER TABLE dbo.doc_exd WITH NOCHECK
ADD CONSTRAINT exd_check CHECK (column_a > 1) ;
GO
EXEC sp_help doc_exd ;
GO
DROP TABLE dbo.doc_exd ;
GO

D. Adding a DEFAULT constraint to an existing column


The following example creates a table with two columns and inserts a value into the first column, and the
other column remains NULL. A DEFAULT constraint is then added to the second column. To verify that the
default is applied, another value is inserted into the first column, and the table is queried.

CREATE TABLE dbo.doc_exz ( column_a INT, column_b INT) ;


GO
INSERT INTO dbo.doc_exz (column_a)VALUES ( 7 ) ;
GO
ALTER TABLE dbo.doc_exz
ADD CONSTRAINT col_b_def
DEFAULT 50 FOR column_b ;
GO
INSERT INTO dbo.doc_exz (column_a) VALUES ( 10 ) ;
GO
SELECT * FROM dbo.doc_exz ;
GO
DROP TABLE dbo.doc_exz ;
GO

E. Adding several columns with constraints


The following example adds several columns with constraints defined with the new column. The first new
column has an IDENTITY property. Each row in the table has new incremental values in the identity column.
CREATE TABLE dbo.doc_exe ( column_a INT CONSTRAINT column_a_un UNIQUE) ;
GO
ALTER TABLE dbo.doc_exe ADD
-- Add a PRIMARY KEY identity column.
column_b INT IDENTITY
    CONSTRAINT column_b_pk PRIMARY KEY,
-- Add a column that references another column in the same table.
column_c INT NULL
    CONSTRAINT column_c_fk
    REFERENCES doc_exe(column_a),
-- Add a column with a constraint to enforce that
-- nonnull data is in a valid telephone number format.
column_d VARCHAR(16) NULL
    CONSTRAINT column_d_chk
    CHECK
    (column_d LIKE '[0-9][0-9][0-9]-[0-9][0-9][0-9][0-9]' OR
    column_d LIKE
    '([0-9][0-9][0-9]) [0-9][0-9][0-9]-[0-9][0-9][0-9][0-9]'),
-- Add a nonnull column with a default.
column_e DECIMAL(3,3)
    CONSTRAINT column_e_default
    DEFAULT .081 ;
GO
EXEC sp_help doc_exe ;
GO
DROP TABLE dbo.doc_exe ;
GO

F. Adding a nullable column with default values


The following example adds a nullable column with a DEFAULT definition, and uses WITH VALUES to provide
values for each existing row in the table. If WITH VALUES is not used, each row has the value NULL in the
new column.

CREATE TABLE dbo.doc_exf ( column_a INT) ;


GO
INSERT INTO dbo.doc_exf VALUES (1) ;
GO
ALTER TABLE dbo.doc_exf
ADD AddDate smalldatetime NULL
CONSTRAINT AddDateDflt
DEFAULT GETDATE() WITH VALUES ;
GO
DROP TABLE dbo.doc_exf ;
GO

G. Creating a PRIMARY KEY constraint with index options


The following example creates the PRIMARY KEY constraint PK_TransactionHistoryArchive_TransactionID and
sets the options FILLFACTOR , ONLINE , and PAD_INDEX . The resulting clustered index will have the same name
as the constraint.
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
USE AdventureWorks2012;
GO
ALTER TABLE Production.TransactionHistoryArchive WITH NOCHECK
ADD CONSTRAINT PK_TransactionHistoryArchive_TransactionID PRIMARY KEY CLUSTERED (TransactionID)
WITH (FILLFACTOR = 75, ONLINE = ON, PAD_INDEX = ON);
GO

H. Adding a sparse column


The following examples show adding and modifying sparse columns in table T1. The code to create table T1
is as follows.

CREATE TABLE T1
(C1 int PRIMARY KEY,
C2 varchar(50) SPARSE NULL,
C3 int SPARSE NULL,
C4 int ) ;
GO

To add an additional sparse column C5 , execute the following statement.

ALTER TABLE T1
ADD C5 char(100) SPARSE NULL ;
GO

To convert the C4 non-sparse column to a sparse column, execute the following statement.

ALTER TABLE T1
ALTER COLUMN C4 ADD SPARSE ;
GO

To convert the C4 sparse column to a nonsparse column, execute the following statement.

ALTER TABLE T1
ALTER COLUMN C4 DROP SPARSE;
GO

I. Adding a column set


The following examples show adding a column to table T2 . A column set cannot be added to a table that
already contains sparse columns. The code to create table T2 is as follows.

CREATE TABLE T2
(C1 int PRIMARY KEY,
C2 varchar(50) NULL,
C3 int NULL,
C4 int ) ;
GO

The following three statements add a column set named CS , and then modify columns C2 and C3 to
SPARSE .
ALTER TABLE T2
ADD CS XML COLUMN_SET FOR ALL_SPARSE_COLUMNS ;
GO

ALTER TABLE T2
ALTER COLUMN C2 ADD SPARSE ;
GO

ALTER TABLE T2
ALTER COLUMN C3 ADD SPARSE ;
GO

J. Adding an encrypted column


The following statement adds an encrypted column named PromotionCode .

ALTER TABLE Customers ADD
    PromotionCode nvarchar(100)
    ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,
    ENCRYPTION_TYPE = RANDOMIZED,
    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') ;

Dropping Columns and Constraints


The examples in this section demonstrate dropping columns and constraints.
A. Dropping a column or columns
The first example modifies a table to remove a column. The second example removes multiple columns.

CREATE TABLE dbo.doc_exb


(column_a INT
,column_b VARCHAR(20) NULL
,column_c datetime
,column_d int) ;
GO
-- Remove a single column.
ALTER TABLE dbo.doc_exb DROP COLUMN column_b ;
GO
-- Remove multiple columns.
ALTER TABLE dbo.doc_exb DROP COLUMN column_c, column_d;

B. Dropping constraints and columns


The first example removes a UNIQUE constraint from a table. The second example removes two constraints
and a single column.
CREATE TABLE dbo.doc_exc ( column_a int NOT NULL CONSTRAINT my_constraint UNIQUE) ;
GO

-- Example 1. Remove a single constraint.


ALTER TABLE dbo.doc_exc DROP my_constraint ;
GO

DROP TABLE dbo.doc_exc;


GO

CREATE TABLE dbo.doc_exc ( column_a int
    NOT NULL CONSTRAINT my_constraint UNIQUE
    ,column_b int
    NOT NULL CONSTRAINT my_pk_constraint PRIMARY KEY) ;
GO

-- Example 2. Remove two constraints and one column
-- The keyword CONSTRAINT is optional. The keyword COLUMN is required.
ALTER TABLE dbo.doc_exc
DROP CONSTRAINT my_constraint, my_pk_constraint, COLUMN column_b ;


GO

C. Dropping a PRIMARY KEY constraint in the ONLINE mode


The following example deletes a PRIMARY KEY constraint with the ONLINE option set to ON .

ALTER TABLE Production.TransactionHistoryArchive


DROP CONSTRAINT PK_TransactionHistoryArchive_TransactionID
WITH (ONLINE = ON);
GO

D. Adding and dropping a FOREIGN KEY constraint


The following example creates the table ContactBackup , and then alters the table, first by adding a
FOREIGN KEY constraint that references the table Person.Person , then by dropping the FOREIGN KEY
constraint.

CREATE TABLE Person.ContactBackup


(ContactID int) ;
GO

ALTER TABLE Person.ContactBackup


ADD CONSTRAINT FK_ContactBacup_Contact FOREIGN KEY (ContactID)
REFERENCES Person.Person (BusinessEntityID) ;
GO

ALTER TABLE Person.ContactBackup


DROP CONSTRAINT FK_ContactBacup_Contact ;
GO

DROP TABLE Person.ContactBackup ;

Examples
Altering a Column Definition
A. Changing the data type of a column
The following example changes a column of a table from INT to DECIMAL .
CREATE TABLE dbo.doc_exy (column_a INT ) ;
GO
INSERT INTO dbo.doc_exy (column_a) VALUES (10) ;
GO
ALTER TABLE dbo.doc_exy ALTER COLUMN column_a DECIMAL (5, 2) ;
GO
DROP TABLE dbo.doc_exy ;
GO

B. Changing the size of a column


The following example increases the size of a varchar column and the precision and scale of a decimal
column. Because the columns contain data, the column size can only be increased. Also notice that col_a is
defined in a unique index. The size of col_a can still be increased because the data type is a varchar and the
index is not the result of a PRIMARY KEY constraint.

-- Create a two-column table with a unique index on the varchar column.


CREATE TABLE dbo.doc_exy ( col_a varchar(5) UNIQUE NOT NULL, col_b decimal (4,2));
GO
INSERT INTO dbo.doc_exy VALUES ('Test', 99.99);
GO
-- Verify the current column size.
SELECT name, TYPE_NAME(system_type_id), max_length, precision, scale
FROM sys.columns WHERE object_id = OBJECT_ID(N'dbo.doc_exy');
GO
-- Increase the size of the varchar column.
ALTER TABLE dbo.doc_exy ALTER COLUMN col_a varchar(25);
GO
-- Increase the scale and precision of the decimal column.
ALTER TABLE dbo.doc_exy ALTER COLUMN col_b decimal (10,4);
GO
-- Insert a new row.
INSERT INTO dbo.doc_exy VALUES ('MyNewColumnSize', 99999.9999) ;
GO
-- Verify the current column size.
SELECT name, TYPE_NAME(system_type_id), max_length, precision, scale
FROM sys.columns WHERE object_id = OBJECT_ID(N'dbo.doc_exy');

C. Changing column collation


The following example shows how to change the collation of a column. First, a table is created with the
default user collation.

CREATE TABLE T3
(C1 int PRIMARY KEY,
C2 varchar(50) NULL,
C3 int NULL,
C4 int ) ;
GO

Next, the collation of column C2 is changed to Latin1_General_BIN. Note that the data type must be specified, even
though it is not changed.

ALTER TABLE T3
ALTER COLUMN C2 varchar(50) COLLATE Latin1_General_BIN;
GO

Altering a Table Definition


The examples in this section demonstrate how to alter the definition of a table.
A. Modifying a table to change the compression
The following example changes the compression of a nonpartitioned table. The heap or clustered index will be
rebuilt. If the table is a heap, all nonclustered indexes will be rebuilt.

ALTER TABLE T1
REBUILD WITH (DATA_COMPRESSION = PAGE);

The following example changes the compression of a partitioned table. The REBUILD PARTITION = 1 syntax
causes only partition number 1 to be rebuilt.
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.

ALTER TABLE PartitionTable1


REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = NONE) ;
GO

The same operation using the following alternate syntax causes all partitions in the table to be rebuilt.
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.

ALTER TABLE PartitionTable1


REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS(1) ) ;

For additional data compression examples, see Data Compression.


B. Modifying a columnstore table to change archival compression
The following example further compresses a columnstore table partition by applying an additional
compression algorithm. This reduces the table to a smaller size, but also increases the time required for
storage and retrieval. This is useful for archiving or for situations that require less space and can afford more
time for storage and retrieval.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.

ALTER TABLE PartitionTable1


REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE) ;
GO

The following example decompresses a columnstore table partition that was compressed with
COLUMNSTORE_ARCHIVE option. When the data is restored, it will continue to be compressed with the
columnstore compression that is used for all columnstore tables.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.

ALTER TABLE PartitionTable1


REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = COLUMNSTORE) ;
GO

C. Switching partitions between tables


The following example creates a partitioned table, assuming that partition scheme myRangePS1 is already
created in the database. Next, a non-partitioned table is created with the same structure as the partitioned
table and on the same filegroup as PARTITION 2 of table PartitionTable . The data of PARTITION 2 of table
PartitionTable is then switched into table NonPartitionTable .
CREATE TABLE PartitionTable (col1 int, col2 char(10))
ON myRangePS1 (col1) ;
GO
CREATE TABLE NonPartitionTable (col1 int, col2 char(10))
ON test2fg ;
GO
ALTER TABLE PartitionTable SWITCH PARTITION 2 TO NonPartitionTable ;
GO

D. Allowing lock escalation on partitioned tables


The following example enables lock escalation to the partition level on a partitioned table. If the table is not
partitioned, lock escalation is set at the TABLE level.
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.

ALTER TABLE dbo.T1 SET (LOCK_ESCALATION = AUTO);


GO

E. Configuring change tracking on a table


The following example enables change tracking on the Person.Person table.
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.

USE AdventureWorks2012;
ALTER TABLE Person.Person
ENABLE CHANGE_TRACKING;

The following example enables change tracking and enables the tracking of the columns that are updated
during a change.
Applies to: SQL Server 2008 through SQL Server 2017.

USE AdventureWorks2012;
GO
ALTER TABLE Person.Person
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)

The following example disables change tracking on the Person.Person table.


Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.

USE AdventureWorks2012;
GO
ALTER TABLE Person.Person
DISABLE CHANGE_TRACKING;

Disabling and Enabling Constraints and Triggers


A. Disabling and re-enabling a constraint
The following example disables a constraint that limits the salaries accepted in the data. NOCHECK CONSTRAINT is
used with ALTER TABLE to disable the constraint and allow for an insert that would typically violate the
constraint. CHECK CONSTRAINT re-enables the constraint.
CREATE TABLE dbo.cnst_example
(id INT NOT NULL,
name VARCHAR(10) NOT NULL,
salary MONEY NOT NULL
CONSTRAINT salary_cap CHECK (salary < 100000)
);

-- Valid inserts
INSERT INTO dbo.cnst_example VALUES (1,'Joe Brown',65000);
INSERT INTO dbo.cnst_example VALUES (2,'Mary Smith',75000);

-- This insert violates the constraint.


INSERT INTO dbo.cnst_example VALUES (3,'Pat Jones',105000);

-- Disable the constraint and try again.


ALTER TABLE dbo.cnst_example NOCHECK CONSTRAINT salary_cap;
INSERT INTO dbo.cnst_example VALUES (3,'Pat Jones',105000);

-- Re-enable the constraint and try another insert; this will fail.
ALTER TABLE dbo.cnst_example CHECK CONSTRAINT salary_cap;
INSERT INTO dbo.cnst_example VALUES (4,'Eric James',110000) ;

B. Disabling and re-enabling a trigger


The following example uses the DISABLE TRIGGER option of ALTER TABLE to disable the trigger and allow for
an insert that would typically violate the trigger. ENABLE TRIGGER is then used to re-enable the trigger.

CREATE TABLE dbo.trig_example


(id INT,
name VARCHAR(12),
salary MONEY) ;
GO
-- Create the trigger.
CREATE TRIGGER dbo.trig1 ON dbo.trig_example FOR INSERT
AS
IF (SELECT COUNT(*) FROM INSERTED
WHERE salary > 100000) > 0
BEGIN
print 'TRIG1 Error: you attempted to insert a salary > $100,000'
ROLLBACK TRANSACTION
END ;
GO
-- Try an insert that violates the trigger.
INSERT INTO dbo.trig_example VALUES (1,'Pat Smith',100001) ;
GO
-- Disable the trigger.
ALTER TABLE dbo.trig_example DISABLE TRIGGER trig1 ;
GO
-- Try an insert that would typically violate the trigger.
INSERT INTO dbo.trig_example VALUES (2,'Chuck Jones',100001) ;
GO
-- Re-enable the trigger.
ALTER TABLE dbo.trig_example ENABLE TRIGGER trig1 ;
GO
-- Try an insert that violates the trigger.
INSERT INTO dbo.trig_example VALUES (3,'Mary Booth',100001) ;
GO

Online Operations
A. Online index rebuild using low priority wait options
The following example shows how to perform an online index rebuild specifying the low priority wait options.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
ALTER TABLE T1
REBUILD WITH
(
PAD_INDEX = ON,
ONLINE = ON ( WAIT_AT_LOW_PRIORITY ( MAX_DURATION = 4 MINUTES,
ABORT_AFTER_WAIT = BLOCKERS ) )
)
;

B. Online Alter Column


The following example shows how to perform an alter column operation with the ONLINE option.
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.

CREATE TABLE dbo.doc_exy (column_a INT ) ;


GO
INSERT INTO dbo.doc_exy (column_a) VALUES (10) ;
GO
ALTER TABLE dbo.doc_exy
ALTER COLUMN column_a DECIMAL (5, 2) WITH (ONLINE = ON);
GO
sp_help doc_exy;
DROP TABLE dbo.doc_exy ;
GO

System Versioning
The following four examples will help you become familiar with the syntax for using system versioning. For
additional assistance, see Getting Started with System-Versioned Temporal Tables.
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
A. Add System Versioning to Existing Tables
The following example shows how to add system versioning to an existing table and create a future history
table. This example assumes that there is an existing table called InsurancePolicy with a primary key defined.
This example populates the newly created period columns for system versioning using default values for the
start and end times because these values cannot be null. This example uses the HIDDEN clause to ensure no
impact on existing applications interacting with the current table. It also uses
HISTORY_RETENTION_PERIOD, which is available on SQL Database only.

--Alter non-temporal table to define periods for system versioning


ALTER TABLE InsurancePolicy
ADD PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime),
SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL
DEFAULT GETUTCDATE(),
SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL
DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999');
--Enable system versioning with 1 year retention for historical data
ALTER TABLE InsurancePolicy
SET (SYSTEM_VERSIONING = ON (HISTORY_RETENTION_PERIOD = 1 YEAR));

B. Migrate An Existing Solution to Use System Versioning


The following example shows how to migrate to system versioning from a solution that uses triggers to
mimic temporal support. The example assumes there is an existing solution that uses a ProjectTaskCurrent
table and a ProjectTaskHistory table, that it uses the Changed Date and Revised Date columns for its periods,
that these period columns do not use the datetime2 data type, and that the ProjectTaskCurrent table has a
primary key defined.
-- Drop existing trigger
DROP TRIGGER ProjectTaskCurrent_Trigger;
-- Adjust the schema for current and history table
-- Change data types for existing period columns
ALTER TABLE ProjectTaskCurrent ALTER COLUMN [Changed Date] datetime2 NOT NULL;
ALTER TABLE ProjectTaskCurrent ALTER COLUMN [Revised Date] datetime2 NOT NULL;

ALTER TABLE ProjectTaskHistory ALTER COLUMN [Changed Date] datetime2 NOT NULL;
ALTER TABLE ProjectTaskHistory ALTER COLUMN [Revised Date] datetime2 NOT NULL;

-- Add SYSTEM_TIME period and set system versioning with linking two existing tables
-- (a certain set of data checks happen in the background)
ALTER TABLE ProjectTaskCurrent
ADD PERIOD FOR SYSTEM_TIME ([Changed Date], [Revised Date]);

ALTER TABLE ProjectTaskCurrent
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProjectTaskHistory, DATA_CONSISTENCY_CHECK = ON));

C. Disabling and Re-Enabling System Versioning to Change Table Schema


This example shows how to disable system versioning on the Department table, add a column, and re-enable
system versioning. Disabling system versioning is required in order to modify the table schema. Perform
these steps within a transaction to prevent updates to both tables while updating the table schema, which
enables the DBA to skip the data consistency check when re-enabling system versioning and gain a
performance benefit. Note that tasks such as creating statistics, switching partitions, or applying compression
to one or both tables do not require disabling system versioning.

BEGIN TRAN
/* Takes schema lock on both tables */
ALTER TABLE Department
SET (SYSTEM_VERSIONING = OFF);
/* expand table schema for temporal table */
ALTER TABLE Department
ADD Col5 int NOT NULL DEFAULT 0;
/* Expand table schema for history table */
ALTER TABLE DepartmentHistory
ADD Col5 int NOT NULL DEFAULT 0;
/* Re-establish versioning again */
ALTER TABLE Department
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE=dbo.DepartmentHistory,
DATA_CONSISTENCY_CHECK = OFF));
COMMIT

D. Removing System Versioning


This example shows how to completely remove system versioning from the Department table and drop the
DepartmentHistory table. Optionally, you might also want to drop the period columns used by the system to
record system versioning information. Note that you cannot drop either the Department or the
DepartmentHistory tables while system versioning is enabled.

ALTER TABLE Department
SET (SYSTEM_VERSIONING = OFF);
ALTER TABLE Department
DROP PERIOD FOR SYSTEM_TIME;
DROP TABLE DepartmentHistory;

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


The following examples A through C use the FactResellerSales table in the AdventureWorksPDW2012
database.
A. Determining if a table is partitioned
The following query returns one or more rows if the table FactResellerSales is partitioned. If the table is not
partitioned, no rows are returned.

SELECT * FROM sys.partitions AS p
JOIN sys.tables AS t
ON p.object_id = t.object_id
WHERE p.partition_id IS NOT NULL
AND t.name = 'FactResellerSales';

B. Determining boundary values for a partitioned table


The following query returns the boundary values for each partition in the FactResellerSales table.

SELECT t.name AS TableName, i.name AS IndexName, p.partition_number,
p.partition_id, i.data_space_id, f.function_id, f.type_desc,
r.boundary_id, r.value AS BoundaryValue
FROM sys.tables AS t
JOIN sys.indexes AS i
ON t.object_id = i.object_id
JOIN sys.partitions AS p
ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.partition_schemes AS s
ON i.data_space_id = s.data_space_id
JOIN sys.partition_functions AS f
ON s.function_id = f.function_id
LEFT JOIN sys.partition_range_values AS r
ON f.function_id = r.function_id and r.boundary_id = p.partition_number
WHERE t.name = 'FactResellerSales' AND i.type <= 1
ORDER BY p.partition_number;

C. Determining the partition column for a partitioned table


The following query returns the name of the partitioning column for the partitioned table FactResellerSales.

SELECT t.object_id AS Object_ID, t.name AS TableName,
ic.column_id as PartitioningColumnID, c.name AS PartitioningColumnName
FROM sys.tables AS t
JOIN sys.indexes AS i
ON t.object_id = i.object_id
JOIN sys.columns AS c
ON t.object_id = c.object_id
JOIN sys.partition_schemes AS ps
ON ps.data_space_id = i.data_space_id
JOIN sys.index_columns AS ic
ON ic.object_id = i.object_id
AND ic.index_id = i.index_id AND ic.partition_ordinal > 0
WHERE t.name = 'FactResellerSales'
AND i.type <= 1
AND c.column_id = ic.column_id;

D. Merging two partitions


The following example merges two partitions on a table.
The Customer table has the following definition:
CREATE TABLE Customer (
id int NOT NULL,
lastName varchar(20),
orderCount int,
orderDate date)
WITH
( DISTRIBUTION = HASH(id),
PARTITION ( orderCount RANGE LEFT
FOR VALUES (1, 5, 10, 25, 50, 100)));

The following command removes the boundary value 10, merging the two partitions it separated.

ALTER TABLE Customer MERGE RANGE (10);

The new DDL for the table is:

CREATE TABLE Customer (
id int NOT NULL,
lastName varchar(20),
orderCount int,
orderDate date)
WITH
( DISTRIBUTION = HASH(id),
PARTITION ( orderCount RANGE LEFT
FOR VALUES (1, 5, 25, 50, 100)));

E. Splitting a partition
The following example splits a partition on a table.
The Customer table has the following DDL:

DROP TABLE Customer;

CREATE TABLE Customer (
id int NOT NULL,
lastName varchar(20),
orderCount int,
orderDate date)
WITH
( DISTRIBUTION = HASH(id),
PARTITION ( orderCount RANGE LEFT
FOR VALUES (1, 5, 10, 25, 50, 100 )));

The following command creates a new partition bound by the value 75, between 50 and 100.

ALTER TABLE Customer SPLIT RANGE (75);

The new DDL for the table is:

CREATE TABLE Customer (
id int NOT NULL,
lastName varchar(20),
orderCount int,
orderDate date)
WITH
( DISTRIBUTION = HASH(id),
PARTITION ( orderCount RANGE LEFT
FOR VALUES (1, 5, 10, 25, 50, 75, 100 )));

F. Using SWITCH to move a partition to a history table
The following example moves the data in a partition of the Orders table to a partition in the OrdersHistory
table.
The Orders table has the following DDL:

CREATE TABLE Orders (
id INT,
city VARCHAR (25),
lastUpdateDate DATE,
orderDate DATE )
WITH
(DISTRIBUTION = HASH ( id ),
PARTITION ( orderDate RANGE RIGHT
FOR VALUES ('2004-01-01', '2005-01-01', '2006-01-01', '2007-01-01' )));

In this example, the Orders table has the following partitions. Each partition contains data.

Partition 1 (has data): OrderDate < '2004-01-01'
Partition 2 (has data): '2004-01-01' <= OrderDate < '2005-01-01'
Partition 3 (has data): '2005-01-01' <= OrderDate < '2006-01-01'
Partition 4 (has data): '2006-01-01' <= OrderDate < '2007-01-01'
Partition 5 (has data): '2007-01-01' <= OrderDate

The OrdersHistory table has the following DDL, with columns and column names identical to those of the
Orders table. Both are hash-distributed on the id column.

CREATE TABLE OrdersHistory (
id INT,
city VARCHAR (25),
lastUpdateDate DATE,
orderDate DATE )
WITH
(DISTRIBUTION = HASH ( id ),
PARTITION ( orderDate RANGE RIGHT
FOR VALUES ( '2004-01-01' )));

Although the columns and column names must be the same, the partition boundaries do not need to be the
same. In this example, the OrdersHistory table has the following two partitions and both partitions are
empty:
Partition 1 (empty): OrderDate < '2004-01-01'
Partition 2 (empty): '2004-01-01' <= OrderDate
For the previous two tables, the following command moves all rows with OrderDate < '2004-01-01' from the
Orders table to the OrdersHistory table.

ALTER TABLE Orders SWITCH PARTITION 1 TO OrdersHistory PARTITION 1;

As a result, the first partition in Orders is empty and the first partition in OrdersHistory contains data. The
tables now appear as follows:
Orders table
Partition 1 (empty): OrderDate < '2004-01-01'
Partition 2 (has data): '2004-01-01' <= OrderDate < '2005-01-01'
Partition 3 (has data): '2005-01-01' <= OrderDate < '2006-01-01'
Partition 4 (has data): '2006-01-01' <= OrderDate < '2007-01-01'
Partition 5 (has data): '2007-01-01' <= OrderDate
OrdersHistory table
Partition 1 (has data): OrderDate < '2004-01-01'
Partition 2 (empty): '2004-01-01' <= OrderDate
To clean up the Orders table, you can remove the empty partition by merging partitions 1 and 2 as follows:

ALTER TABLE Orders MERGE RANGE ('2004-01-01');

After the merge, the Orders table has the following partitions:
Orders table
Partition 1 (has data): OrderDate < '2005-01-01'
Partition 2 (has data): '2005-01-01' <= OrderDate < '2006-01-01'
Partition 3 (has data): '2006-01-01' <= OrderDate < '2007-01-01'
Partition 4 (has data): '2007-01-01' <= OrderDate
Suppose another year passes and you are ready to archive the year 2005. You can allocate an empty partition
for the year 2005 in the OrdersHistory table by splitting the empty partition as follows:

ALTER TABLE OrdersHistory SPLIT RANGE ('2005-01-01');

After the split, the OrdersHistory table has the following partitions:
OrdersHistory table
Partition 1 (has data): OrderDate < '2004-01-01'
Partition 2 (empty): '2004-01-01' <= OrderDate < '2005-01-01'
Partition 3 (empty): '2005-01-01' <= OrderDate

See Also
sys.tables (Transact-SQL)
sp_rename (Transact-SQL)
CREATE TABLE (Transact-SQL)
DROP TABLE (Transact-SQL)
sp_help (Transact-SQL)
ALTER PARTITION SCHEME (Transact-SQL)
ALTER PARTITION FUNCTION (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER TABLE column_constraint (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the properties of a PRIMARY KEY, FOREIGN KEY, UNIQUE, or CHECK constraint that is part of a new
column definition added to a table by using ALTER TABLE.
Transact-SQL Syntax Conventions

Syntax
[ CONSTRAINT constraint_name ]
{
[ NULL | NOT NULL ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[ WITH FILLFACTOR = fillfactor ]
[ WITH ( index_option [, ...n ] ) ]
[ ON { partition_scheme_name (partition_column_name)
| filegroup | "default" } ]
| [ FOREIGN KEY ]
REFERENCES [ schema_name . ] referenced_table_name
[ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ NOT FOR REPLICATION ]
| CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}

Arguments
CONSTRAINT
Specifies the start of the definition for a PRIMARY KEY, UNIQUE, FOREIGN KEY, or CHECK constraint.
constraint_name
Is the name of the constraint. Constraint names must follow the rules for identifiers, except that the name cannot
start with a number sign (#). If constraint_name is not supplied, a system-generated name is assigned to the
constraint.
NULL | NOT NULL
Specifies whether the column can accept null values. Columns that do not allow null values can be added only if
they have a default specified. If the new column allows null values and no default is specified, the new column
contains NULL for each row in the table. If the new column allows null values and a default definition is added
with the new column, the WITH VALUES option can be used to store the default value in the new column for each
existing row in the table.
If the new column does not allow null values, a DEFAULT definition must be added with the new column. The new
column is automatically loaded with the default value in each existing row.
When the addition of a column requires physical changes to the data rows of a table, such as adding DEFAULT
values to each row, locks are held on the table while ALTER TABLE runs. This affects the ability to change the
content of the table while the lock is in place. In contrast, adding a column that allows null values and does not
specify a default value is a metadata operation only, and involves no locks.
When you use CREATE TABLE or ALTER TABLE, database and session settings influence and possibly override the
nullability of the data type that is used in a column definition. We recommend that you always explicitly define
noncomputed columns as NULL or NOT NULL or, if you use a user-defined data type, that you allow the column
to use the default nullability of the data type. For more information, see CREATE TABLE (Transact-SQL).
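For example, the following minimal sketch (the dbo.doc_exc table is hypothetical and used only for illustration) contrasts the two behaviors described above:

CREATE TABLE dbo.doc_exc (column_a INT) ;
GO
INSERT INTO dbo.doc_exc (column_a) VALUES (1) ;
GO
-- Nullable column with a default: WITH VALUES stores 50 in the existing row.
ALTER TABLE dbo.doc_exc
ADD column_b INT NULL
CONSTRAINT column_b_df DEFAULT 50 WITH VALUES ;
-- NOT NULL column: a DEFAULT definition is required, and the default (0)
-- is loaded into every existing row automatically.
ALTER TABLE dbo.doc_exc
ADD column_c INT NOT NULL
CONSTRAINT column_c_df DEFAULT 0 ;
GO
DROP TABLE dbo.doc_exc ;
GO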
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column or columns by using a unique index. Only one
PRIMARY KEY constraint can be created for each table.
UNIQUE
Is a constraint that provides entity integrity for a specified column or columns by using a unique index.
CLUSTERED | NONCLUSTERED
Specifies that a clustered or nonclustered index is created for the PRIMARY KEY or UNIQUE constraint.
PRIMARY KEY constraints default to CLUSTERED. UNIQUE constraints default to NONCLUSTERED.
If a clustered constraint or index already exists on a table, CLUSTERED cannot be specified. If a clustered
constraint or index already exists on a table, PRIMARY KEY constraints default to NONCLUSTERED.
Columns that are of the ntext, text, varchar(max), nvarchar(max), varbinary(max), xml, or image data types
cannot be specified as columns for an index.
WITH FILLFACTOR =fillfactor
Specifies how full the Database Engine should make each index page used to store the index data. User-specified
fill factor values can be from 1 through 100. If a value is not specified, the default is 0.

IMPORTANT
Documenting WITH FILLFACTOR = fillfactor as the only index option that applies to PRIMARY KEY or UNIQUE constraints is
maintained for backward compatibility, but will not be documented in this manner in future releases. Other index options can
be specified in the index_option clause of ALTER TABLE.

ON { partition_scheme_name(partition_column_name) | filegroup | "default" }


Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the storage location of the index created for the constraint. If partition_scheme_name is specified, the
index is partitioned and the partitions are mapped to the filegroups that are specified by partition_scheme_name.
If filegroup is specified, the index is created in the named filegroup. If "default" is specified or if ON is not specified
at all, the index is created in the same filegroup as the table. If ON is specified when a clustered index is added for
a PRIMARY KEY or UNIQUE constraint, the whole table is moved to the specified filegroup when the clustered
index is created.
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in ON
"default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current
session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).
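As a brief sketch of the delimited identifier (the dbo.doc_loc table and column are hypothetical), the index created for a constraint can be placed on the default filegroup as follows:

SET QUOTED_IDENTIFIER ON ;
GO
ALTER TABLE dbo.doc_loc
ADD column_d INT NULL
CONSTRAINT UQ_doc_loc_column_d UNIQUE NONCLUSTERED
ON "default" ;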
FOREIGN KEY REFERENCES
Is a constraint that provides referential integrity for the data in the column. FOREIGN KEY constraints require that
each value in the column exist in the specified column in the referenced table.
schema_name
Is the name of the schema to which the table referenced by the FOREIGN KEY constraint belongs.
referenced_table_name
Is the table referenced by the FOREIGN KEY constraint.
ref_column
Is a column in parentheses referenced by the new FOREIGN KEY constraint.
ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }
Specifies what action happens to rows in the table that is altered, if those rows have a referential relationship and
the referenced row is deleted from the parent table. The default is NO ACTION.
NO ACTION
The SQL Server Database Engine raises an error and the delete action on the row in the parent table is rolled
back.
CASCADE
Corresponding rows are deleted from the referencing table if that row is deleted from the parent table.
SET NULL
All the values that make up the foreign key are set to NULL when the corresponding row in the parent table is
deleted. For this constraint to execute, the foreign key columns must be nullable.
SET DEFAULT
All the values that comprise the foreign key are set to their default values when the corresponding row in the
parent table is deleted. For this constraint to execute, all foreign key columns must have default definitions. If a
column is nullable and there is no explicit default value set, NULL becomes the implicit default value of the
column.
Do not specify CASCADE if the table will be included in a merge publication that uses logical records. For more
information about logical records, see Group Changes to Related Rows with Logical Records.
The ON DELETE CASCADE cannot be defined if an INSTEAD OF trigger ON DELETE already exists on the table
that is being altered.
For example, in the AdventureWorks2012 database, the ProductVendor table has a referential relationship with
the Vendor table. The ProductVendor.VendorID foreign key references the Vendor.VendorID primary key.
If a DELETE statement is executed on a row in the Vendor table, and an ON DELETE CASCADE action is specified
for ProductVendor.VendorID, the Database Engine checks for one or more dependent rows in the
ProductVendor table. If any exist, the dependent rows in the ProductVendor table will be deleted, in addition to
the row referenced in the Vendor table.
Conversely, if NO ACTION is specified, the Database Engine raises an error and rolls back the delete action on the
Vendor row when there is at least one row in the ProductVendor table that references it.
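The following minimal sketch illustrates the cascade behavior with hypothetical stand-in tables (not the AdventureWorks2012 schema):

CREATE TABLE dbo.Vendor_demo (VendorID INT PRIMARY KEY) ;
CREATE TABLE dbo.ProductVendor_demo (ProductID INT PRIMARY KEY) ;
GO
-- Add a vendor reference; deleting a Vendor_demo row also deletes any
-- dependent ProductVendor_demo rows.
ALTER TABLE dbo.ProductVendor_demo
ADD VendorID INT NULL
CONSTRAINT FK_ProductVendor_demo_Vendor_demo
REFERENCES dbo.Vendor_demo (VendorID)
ON DELETE CASCADE ;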
ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }
Specifies what action happens to rows in the table altered when those rows have a referential relationship and the
referenced row is updated in the parent table. The default is NO ACTION.
NO ACTION
The Database Engine raises an error, and the update action on the row in the parent table is rolled back.
CASCADE
Corresponding rows are updated in the referencing table when that row is updated in the parent table.
SET NULL
All the values that make up the foreign key are set to NULL when the corresponding row in the parent table is
updated. For this constraint to execute, the foreign key columns must be nullable.
SET DEFAULT
All the values that make up the foreign key are set to their default values when the corresponding row in the
parent table is updated. For this constraint to execute, all foreign key columns must have default definitions. If a
column is nullable and there is no explicit default value set, NULL becomes the implicit default value of the
column.
Do not specify CASCADE if the table will be included in a merge publication that uses logical records. For more
information about logical records, see Group Changes to Related Rows with Logical Records.
ON UPDATE CASCADE, SET NULL, or SET DEFAULT cannot be defined if an INSTEAD OF trigger ON UPDATE
already exists on the table that is being altered.
For example, in the AdventureWorks2012 database, the ProductVendor table has a referential relationship with
the Vendor table. The ProductVendor.VendorID foreign key references the Vendor.VendorID primary key.
If an UPDATE statement is executed on a row in the Vendor table and an ON UPDATE CASCADE action is
specified for ProductVendor.VendorID, the Database Engine checks for one or more dependent rows in the
ProductVendor table. If any exist, the dependent row in the ProductVendor table will be updated, in addition to
the row referenced in the Vendor table.
Conversely, if NO ACTION is specified, the Database Engine raises an error and rolls back the update action on
the Vendor row when there is at least one row in the ProductVendor table that references it.
NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017.
Can be specified for FOREIGN KEY constraints and CHECK constraints. If this clause is specified for a constraint,
the constraint is not enforced when replication agents perform insert, update, or delete operations.
CHECK
Is a constraint that enforces domain integrity by limiting the possible values that can be entered into a column or
columns.
logical_expression
Is a logical expression used in a CHECK constraint and returns TRUE or FALSE. logical_expression used with
CHECK constraints cannot reference another table but can reference other columns in the same table for the same
row. The expression cannot reference an alias data type.
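As a short sketch (the dbo.doc_exd table is hypothetical), a CHECK constraint can accompany a new column like this:

-- The logical expression may reference other columns in the same row,
-- but not another table or an alias data type.
ALTER TABLE dbo.doc_exd
ADD discount MONEY NULL
CONSTRAINT CK_doc_exd_discount CHECK (discount >= 0) ;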

Remarks
When FOREIGN KEY or CHECK constraints are added, all existing data is verified for constraint violations unless
the WITH NOCHECK option is specified. If any violations occur, ALTER TABLE fails and an error is returned. When
a new PRIMARY KEY or UNIQUE constraint is added to an existing column, the data in the column or columns
must be unique. If duplicate values are found, ALTER TABLE fails. The WITH NOCHECK option has no effect when
PRIMARY KEY or UNIQUE constraints are added.
Each PRIMARY KEY and UNIQUE constraint generates an index. The number of UNIQUE and PRIMARY KEY
constraints cannot cause the number of indexes on the table to exceed 999 nonclustered indexes and 1 clustered
index. Foreign key constraints do not automatically generate an index. However, foreign key columns are
frequently used in join criteria in queries by matching the column or columns in the foreign key constraint of one
table with the primary or unique key column or columns in the other table. An index on the foreign key columns
enables the Database Engine to quickly find related data in the foreign key table.
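Continuing the hypothetical tables from the earlier sketch, such an index could be created explicitly, since the foreign key constraint itself does not create one:

-- Index the foreign key column so joins to the parent table can seek
-- instead of scan.
CREATE NONCLUSTERED INDEX IX_ProductVendor_demo_VendorID
ON dbo.ProductVendor_demo (VendorID) ;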

Examples
For examples, see ALTER TABLE (Transact-SQL).

See Also
ALTER TABLE (Transact-SQL)
column_definition (Transact-SQL)
ALTER TABLE column_definition (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the properties of a column that are added to a table by using ALTER TABLE.
Transact-SQL Syntax Conventions

Syntax
column_name <data_type>
[ FILESTREAM ]
[ COLLATE collation_name ]
[ NULL | NOT NULL ]
[
[ CONSTRAINT constraint_name ] DEFAULT constant_expression [ WITH VALUES ]
| IDENTITY [ ( seed , increment ) ] [ NOT FOR REPLICATION ]
]
[ ROWGUIDCOL ]
[ SPARSE ]
[ ENCRYPTED WITH
( COLUMN_ENCRYPTION_KEY = key_name ,
ENCRYPTION_TYPE = { DETERMINISTIC | RANDOMIZED } ,
ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
) ]
[ MASKED WITH ( FUNCTION = ' mask_function ') ]
[ <column_constraint> [ ...n ] ]

<data type> ::=
[ type_schema_name . ] type_name
[ ( precision [ , scale ] | max |
[ { CONTENT | DOCUMENT } ] xml_schema_collection ) ]

<column_constraint> ::=
[ CONSTRAINT constraint_name ]
{ { PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH FILLFACTOR = fillfactor
| WITH ( < index_option > [ , ...n ] )
]
[ ON { partition_scheme_name ( partition_column_name )
| filegroup | "default" } ]
| [ FOREIGN KEY ]
REFERENCES [ schema_name . ] referenced_table_name [ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ NOT FOR REPLICATION ]
| CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}

Arguments
column_name
Is the name of the column to be altered, added, or dropped. column_name can consist of 1 through 128 characters.
For new columns created with a timestamp data type, column_name can be omitted. If no column_name is
specified for a timestamp data type column, the name timestamp is used.
[ type_schema_name. ] type_name
Is the data type for the column that is added and the schema to which it belongs.
type_name can be:
A Microsoft SQL Server system data type.
An alias data type based on a SQL Server system data type. Alias data types must be created by using
CREATE TYPE before they can be used in a table definition.
A Microsoft .NET Framework user-defined type and the schema to which it belongs. A .NET Framework
user-defined type must be created by using CREATE TYPE before it can be used in a table definition.
If type_schema_name is not specified, the Microsoft Database Engine references type_name in the following
order:
The SQL Server system data type.
The default schema of the current user in the current database.
The dbo schema in the current database.
precision
Is the precision for the specified data type. For more information about valid precision values, see Precision, Scale,
and Length (Transact-SQL).
scale
Is the scale for the specified data type. For more information about valid scale values, see Precision, Scale, and
Length (Transact-SQL).
max
Applies only to the varchar, nvarchar, and varbinary data types. These are used for storing up to 2^31-1 bytes of
character and binary data, and up to 2^30-1 characters of Unicode data.
CONTENT
Specifies that each instance of the xml data type in column_name can comprise multiple top-level elements.
CONTENT applies only to the xml data type and can be specified only if xml_schema_collection is also specified. If
this is not specified, CONTENT is the default behavior.
DOCUMENT
Specifies that each instance of the xml data type in column_name can comprise only one top-level element.
DOCUMENT applies only to the xml data type and can be specified only if xml_schema_collection is also specified.
xml_schema_collection
Applies to: SQL Server 2008 through SQL Server 2017.
Applies only to the xml data type for associating an XML schema collection with the type. Before typing an xml
column to a schema, the schema must first be created in the database by using CREATE XML SCHEMA
COLLECTION.
FILESTREAM
Optionally specifies the FILESTREAM storage attribute for a column that has a type_name of varbinary(max).
When FILESTREAM is specified for a column, the table must also have a column of the uniqueidentifier data
type that has the ROWGUIDCOL attribute. This column must not allow null values and must have either a
UNIQUE or PRIMARY KEY single-column constraint. The GUID value for the column must be supplied either by
an application when data is being inserted, or by a DEFAULT constraint that uses the NEWID() function.
The ROWGUIDCOL column cannot be dropped and the related constraints cannot be changed while there is a
FILESTREAM column defined for the table. The ROWGUIDCOL column can be dropped only after the last
FILESTREAM column is dropped.
When the FILESTREAM storage attribute is specified for a column, all values for that column are stored in a
FILESTREAM data container on the file system.
For an example that shows how to use column definition, see FILESTREAM (SQL Server).
COLLATE collation_name
Specifies the collation of the column. If not specified, the column is assigned the default collation of the database.
Collation name can be either a Windows collation name or an SQL collation name. For a list and more
information, see Windows Collation Name (Transact-SQL) and SQL Server Collation Name (Transact-SQL).
The COLLATE clause can be used to specify the collations only of columns of the char, varchar, nchar, and
nvarchar data types.
For more information about the COLLATE clause, see COLLATE (Transact-SQL).
NULL | NOT NULL
Determines whether null values are allowed in the column. NULL is not strictly a constraint but can be specified
just like NOT NULL.
[ CONSTRAINT constraint_name ]
Specifies the start of a DEFAULT value definition. To maintain compatibility with earlier versions of SQL Server, a
constraint name can be assigned to a DEFAULT. constraint_name must follow the rules for identifiers, except that
the name cannot start with a number sign (#). If constraint_name is not specified, a system-generated name is
assigned to the DEFAULT definition.
DEFAULT
Is a keyword that specifies the default value for the column. DEFAULT definitions can be used to provide values for
a new column in the existing rows of data. DEFAULT definitions cannot be applied to timestamp columns, or
columns with an IDENTITY property. If a default value is specified for a user-defined type column, the type must
support an implicit conversion from constant_expression to the user-defined type.
constant_expression
Is a literal value, a NULL, or a system function used as the default column value. If used in conjunction with a
column defined to be of a .NET Framework user-defined type, the implementation of the type must support an
implicit conversion from the constant_expression to the user-defined type.
WITH VALUES
Specifies that the value given in DEFAULT constant_expression is stored in a new column added to existing rows. If
the added column allows null values and WITH VALUES is specified, the default value is stored in the new column,
added to existing rows. If WITH VALUES is not specified for columns that allow nulls, the value NULL is stored in
the new column in existing rows. If the new column does not allow nulls, the default value is stored in new rows
regardless of whether WITH VALUES is specified.
IDENTITY
Specifies that the new column is an identity column. The SQL Server Database Engine provides a unique,
incremental value for the column. When you add identifier columns to existing tables, the identity numbers are
added to the existing rows of the table with the seed and increment values. The order in which the rows are
updated is not guaranteed. Identity numbers are also generated for any new rows that are added.
Identity columns are commonly used in conjunction with PRIMARY KEY constraints to serve as the unique row
identifier for the table. The IDENTITY property can be assigned to a tinyint, smallint, int, bigint, decimal(p,0),
or numeric(p,0) column. Only one identity column can be created per table. The DEFAULT keyword and bound
defaults cannot be used with an identity column. Either both the seed and increment must be specified, or neither.
If neither are specified, the default is (1,1).

NOTE
You cannot modify an existing table column to add the IDENTITY property.

Adding an identity column to a published table is not supported because it can result in nonconvergence when the
column is replicated to the Subscriber. The values in the identity column at the Publisher depend on the order in
which the rows for the affected table are physically stored. The rows might be stored differently at the Subscriber;
therefore, the value for the identity column can be different for the same rows.
To disable the IDENTITY property of a column by allowing values to be explicitly inserted, use SET
IDENTITY_INSERT.
seed
Is the value used for the first row loaded into the table.
increment
Is the incremental value added to the identity value of the previous row that is loaded.
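A minimal sketch (the dbo.NewProducts_demo table is hypothetical) of adding an identity column to a table that already contains rows:

CREATE TABLE dbo.NewProducts_demo (ProductName NVARCHAR(40)) ;
INSERT INTO dbo.NewProducts_demo VALUES (N'Widget'), (N'Gadget') ;
GO
-- Existing rows receive identity values starting at the seed (100) with
-- the increment (5); the numbering order is not guaranteed.
ALTER TABLE dbo.NewProducts_demo
ADD ProductID INT IDENTITY (100, 5) ;
GO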
NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017.
Can be specified for the IDENTITY property. If this clause is specified for the IDENTITY property, values are not
incremented in identity columns when replication agents perform insert operations.
ROWGUIDCOL
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies that the column is a row globally unique identifier column. ROWGUIDCOL can only be assigned to a
uniqueidentifier column, and only one uniqueidentifier column per table can be designated as the
ROWGUIDCOL column. ROWGUIDCOL cannot be assigned to columns of user-defined data types.
ROWGUIDCOL does not enforce uniqueness of the values stored in the column. Also, ROWGUIDCOL does not
automatically generate values for new rows that are inserted into the table. To generate unique values for each
column, either use the NEWID function on INSERT statements or specify the NEWID function as the default for
the column. For more information, see NEWID (Transact-SQL) and INSERT (Transact-SQL).
SPARSE
Indicates that the column is a sparse column. The storage of sparse columns is optimized for null values. Sparse
columns cannot be designated as NOT NULL. For additional restrictions and more information about sparse
columns, see Use Sparse Columns.
<column_constraint>
For the definitions of the column constraint arguments, see column_constraint (Transact-SQL).
ENCRYPTED WITH
Specifies encrypting columns by using the Always Encrypted feature.
COLUMN_ENCRYPTION_KEY = key_name
Specifies the column encryption key. For more information, see CREATE COLUMN ENCRYPTION KEY (Transact-SQL).
ENCRYPTION_TYPE = { DETERMINISTIC | RANDOMIZED }
Deterministic encryption uses a method which always generates the same encrypted value for any given plain
text value. Using deterministic encryption allows searching using equality comparison, grouping, and joining tables
using equality joins based on encrypted values, but can also allow unauthorized users to guess information about
encrypted values by examining patterns in the encrypted column. Joining two tables on columns encrypted
deterministically is only possible if both columns are encrypted using the same column encryption key.
Deterministic encryption must use a column collation with a binary2 sort order for character columns.
Randomized encryption uses a method that encrypts data in a less predictable manner. Randomized encryption
is more secure, but prevents equality searches, grouping, and joining on encrypted columns. Columns using
randomized encryption cannot be indexed.
Use deterministic encryption for columns that will be search parameters or grouping parameters, for example a
government ID number. Use randomized encryption, for data such as a credit card number, which is not grouped
with other records, or used to join tables, and which is not searched for because you use other columns (such as a
transaction number) to find the row which contains the encrypted column of interest.
Columns must be of a qualifying data type.
ALGORITHM
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.
Must be 'AEAD_AES_256_CBC_HMAC_SHA_256'.
For more information including feature constraints, see Always Encrypted (Database Engine).
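The following hedged sketch adds a deterministically encrypted column; the dbo.Customers_demo table and the CEK_Auto1 column encryption key are assumptions and must already exist. Note the BIN2 collation required for deterministic encryption of character data:

-- Assumes CEK_Auto1 was created earlier with CREATE COLUMN ENCRYPTION KEY.
ALTER TABLE dbo.Customers_demo
ADD SSN CHAR(11) COLLATE Latin1_General_BIN2 NULL
ENCRYPTED WITH
( COLUMN_ENCRYPTION_KEY = CEK_Auto1,
ENCRYPTION_TYPE = DETERMINISTIC,
ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256' ) ;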
MASKED WITH ( FUNCTION = 'mask_function' )
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.
Specifies a dynamic data mask. mask_function is the name of the masking function with the appropriate
parameters. The following functions are available:
default()
email()
partial()
random()
For function parameters, see Dynamic Data Masking.
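For example (table and column names are hypothetical), a masked column can be added as follows:

-- email() exposes the first letter and masks the remainder, for example
-- aXXX@XXXX.com.
ALTER TABLE dbo.Customers_demo
ADD Email VARCHAR(100) NULL
MASKED WITH ( FUNCTION = 'email()' ) ;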

Remarks
If a column is added having a uniqueidentifier data type, it can be defined with a default that uses the NEWID()
function to supply the unique identifier values in the new column for each existing row in the table.
The Database Engine does not enforce an order for specifying DEFAULT, IDENTITY, ROWGUIDCOL, or column
constraints in a column definition.
The ALTER TABLE statement fails if adding the column causes the data row size to exceed 8060 bytes.

Examples
For examples, see ALTER TABLE (Transact-SQL).

See Also
ALTER TABLE (Transact-SQL)
ALTER TABLE computed_column_definition (Transact-
SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the properties of a computed column that is added to a table by using ALTER TABLE.
Transact-SQL Syntax Conventions

Syntax
column_name AS computed_column_expression
[ PERSISTED [ NOT NULL ] ]
[
[ CONSTRAINT constraint_name ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[ WITH FILLFACTOR = fillfactor ]
[ WITH ( <index_option> [, ...n ] ) ]
[ ON { partition_scheme_name ( partition_column_name ) | filegroup
| "default" } ]
| [ FOREIGN KEY ]
REFERENCES ref_table [ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE } ]
[ ON UPDATE { NO ACTION } ]
[ NOT FOR REPLICATION ]
| CHECK [ NOT FOR REPLICATION ] ( logical_expression )
]

Arguments
column_name
Is the name of the column to be altered, added, or dropped. column_name can be 1 through 128 characters. For
new columns, column_name can be omitted for columns created with a timestamp data type. If no column_name
is specified for a timestamp data type column, the name timestamp is used.
computed_column_expression
Is an expression that defines the value of a computed column. A computed column is a virtual column that is not
physically stored in the table but is computed from an expression that uses other columns in the same table. For
example, a computed column could have the definition: cost AS price * qty. The expression can be a noncomputed
column name, constant, function, variable, and any combination of these connected by one or more operators. The
expression cannot be a subquery or include an alias data type.
Computed columns can be used in select lists, WHERE clauses, ORDER BY clauses, or any other locations in which
ordinary expressions can be used, with the following exceptions:
A computed column cannot be used as a DEFAULT or FOREIGN KEY constraint definition or with a NOT
NULL constraint definition. However, if the computed column value is defined by a deterministic expression
and the data type of the result is allowed in index columns, a computed column can be used as a key column
in an index or as part of any PRIMARY KEY or UNIQUE constraint.
For example, if the table has integer columns a and b, the computed column a + b may be indexed, but
computed column a + DATEPART(dd, GETDATE()) cannot be indexed, because the value might change in
subsequent invocations.
A computed column cannot be the target of an INSERT or UPDATE statement.

NOTE
Because each row in a table can have different values for columns involved in a computed column, the computed
column may not have the same result for each row.
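A brief sketch (the dbo.Sales_demo table is hypothetical) of the cost AS price * qty definition mentioned above, together with a PERSISTED variant:

CREATE TABLE dbo.Sales_demo (price MONEY, qty INT) ;
GO
-- Virtual computed column: evaluated on read, not physically stored.
ALTER TABLE dbo.Sales_demo
ADD cost AS price * qty ;
-- Deterministic expression stored physically and kept up to date.
ALTER TABLE dbo.Sales_demo
ADD cost_persisted AS price * qty PERSISTED ;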

PERSISTED
Specifies that the Database Engine will physically store the computed values in the table, and update the values
when any other columns on which the computed column depends are updated. Marking a computed column as
PERSISTED allows an index to be created on a computed column that is deterministic, but not precise. For more
information, see Indexes on Computed Columns. Any computed columns used as partitioning columns of a
partitioned table must be explicitly marked PERSISTED. computed_column_expression must be deterministic when
PERSISTED is specified.
NULL | NOT NULL
Specifies whether null values are allowed in the column. NULL is not strictly a constraint but can be specified like
NOT NULL. NOT NULL can be specified for computed columns only if PERSISTED is also specified.
CONSTRAINT
Specifies the start of the definition for a PRIMARY KEY or UNIQUE constraint.
constraint_name
Is the name of the new constraint. Constraint names must follow the rules for identifiers, except that the name
cannot start with a number sign (#). If constraint_name is not supplied, a system-generated name is assigned to the constraint.
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column or columns by using a unique index. Only one
PRIMARY KEY constraint can be created for each table.
UNIQUE
Is a constraint that provides entity integrity for a specific column or columns by using a unique index.
CLUSTERED | NONCLUSTERED
Specifies that a clustered or nonclustered index is created for the PRIMARY KEY or UNIQUE constraint. PRIMARY
KEY constraints default to CLUSTERED. UNIQUE constraints default to NONCLUSTERED.
If a clustered constraint or index already exists on a table, CLUSTERED cannot be specified. If a clustered constraint
or index already exists on a table, PRIMARY KEY constraints default to NONCLUSTERED.
WITH FILLFACTOR =fillfactor
Specifies how full the SQL Server Database Engine should make each index page used to store the index data.
User-specified fillfactor values can be from 1 through 100. If a value is not specified, the default is 0.

IMPORTANT
Documenting WITH FILLFACTOR = fillfactor as the only index option that applies to PRIMARY KEY or UNIQUE constraints is
maintained for backward compatibility, but will not be documented in this manner in future releases. Other index options can
be specified in the index_option (Transact-SQL) clause of ALTER TABLE.

FOREIGN KEY REFERENCES


Is a constraint that provides referential integrity for the data in the column or columns. FOREIGN KEY constraints
require that each value in the column exists in the corresponding referenced column or columns in the referenced
table. FOREIGN KEY constraints can reference only columns that are PRIMARY KEY or UNIQUE constraints in
the referenced table or columns referenced in a UNIQUE INDEX on the referenced table. Foreign keys on
computed columns must also be marked PERSISTED.
ref_table
Is the name of the table referenced by the FOREIGN KEY constraint.
(ref_column )
Is a column from the table referenced by the FOREIGN KEY constraint.
ON DELETE { NO ACTION | CASCADE }
Specifies what action happens to rows in the table if those rows have a referential relationship and the referenced
row is deleted from the parent table. The default is NO ACTION.
NO ACTION
The Database Engine raises an error and the delete action on the row in the parent table is rolled back.
CASCADE
Corresponding rows are deleted from the referencing table if that row is deleted from the parent table.
For example, in the AdventureWorks2012 database, the ProductVendor table has a referential relationship with
the Vendor table. The ProductVendor.BusinessEntityID foreign key references the Vendor.BusinessEntityID primary
key.
If a DELETE statement is executed on a row in the Vendor table, and an ON DELETE CASCADE action is specified
for ProductVendor.BusinessEntityID, the Database Engine checks for one or more dependent rows in the
ProductVendor table. If any exist, the dependent rows in the ProductVendor table are deleted, in addition to the
row referenced in the Vendor table.
Conversely, if NO ACTION is specified, the Database Engine raises an error and rolls back the delete action on the
Vendor row when there is at least one row in the ProductVendor table that references it.
Do not specify CASCADE if the table will be included in a merge publication that uses logical records. For more
information about logical records, see Group Changes to Related Rows with Logical Records.
ON UPDATE { NO ACTION }
Specifies what action happens to rows in the table created when those rows have a referential relationship and the
referenced row is updated in the parent table. When NO ACTION is specified, the Database Engine raises an error
and rolls back the update action on the Vendor row if there is at least one row in the ProductVendor table that
references it.
NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017.
Can be specified for FOREIGN KEY constraints and CHECK constraints. If this clause is specified for a constraint,
the constraint is not enforced when replication agents perform insert, update, or delete operations.
CHECK
Is a constraint that enforces domain integrity by limiting the possible values that can be entered into a column or
columns. CHECK constraints on computed columns must also be marked PERSISTED.
logical_expression
Is a logical expression that returns TRUE or FALSE. The expression cannot contain a reference to an alias data type.
ON { partition_scheme_name(partition_column_name) | filegroup| "default"}
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the storage location of the index created for the constraint. If partition_scheme_name is specified, the
index is partitioned and the partitions are mapped to the filegroups that are specified by partition_scheme_name. If
filegroup is specified, the index is created in the named filegroup. If "default" is specified or if ON is not specified at
all, the index is created in the same filegroup as the table. If ON is specified when a clustered index is added for a
PRIMARY KEY or UNIQUE constraint, the whole table is moved to the specified filegroup when the clustered
index is created.

NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in ON "default"
or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current session. This is the
default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).

Remarks
Each PRIMARY KEY and UNIQUE constraint generates an index. The number of UNIQUE and PRIMARY KEY
constraints cannot cause the number of indexes on the table to exceed 999 nonclustered indexes and 1 clustered
index.

See Also
ALTER TABLE (Transact-SQL)
ALTER TABLE index_option (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies a set of options that can be applied to an index that is part of a constraint definition that is created by
using ALTER TABLE.
Transact-SQL Syntax Conventions

Syntax
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| SORT_IN_TEMPDB = { ON | OFF }
| ONLINE = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE |ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ]
| ONLINE = { ON [ ( <low_priority_lock_wait> ) ] | OFF }
}

<range> ::=
<partition_number_expression> TO <partition_number_expression>

<single_partition_rebuild_option> ::=
{
SORT_IN_TEMPDB = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
| ONLINE = { ON [ ( <low_priority_lock_wait> ) ] | OFF }
}

<low_priority_lock_wait>::=
{
WAIT_AT_LOW_PRIORITY ( MAX_DURATION = <time> [ MINUTES ] ,
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS } )
}

Arguments
PAD_INDEX = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies index padding. The default is OFF.
ON
The percentage of free space that is specified by FILLFACTOR is applied to the intermediate-level pages of the
index.
OFF or fillfactor is not specified
The intermediate-level pages are filled to near capacity, leaving enough space for at least one row of the
maximum size the index can have, given the set of keys on the intermediate pages.
FILLFACTOR =fillfactor
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or alteration. The value specified must be an integer value from 1 to 100. The default is 0.

NOTE
Fill factor values 0 and 100 are identical in all respects.

IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique index.
The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The option
has no effect when executing CREATE INDEX, ALTER INDEX, or UPDATE. The default is OFF.
ON
A warning message occurs when duplicate key values are inserted into a unique index. Only the rows violating the
uniqueness constraint fail.
OFF
An error message occurs when duplicate key values are inserted into a unique index. The entire INSERT operation
is rolled back.
IGNORE_DUP_KEY cannot be set to ON for indexes created on a view, non-unique indexes, XML indexes, spatial
indexes, and filtered indexes.
To view IGNORE_DUP_KEY, use sys.indexes.
In backward compatible syntax, WITH IGNORE_DUP_KEY is equivalent to WITH IGNORE_DUP_KEY = ON.
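As a sketch (assuming a hypothetical dbo.Orders_demo table with an OrderNumber column), the option is set when the constraint is added:

-- Duplicate inserts raise a warning and only the offending rows fail,
-- rather than rolling back the entire INSERT.
ALTER TABLE dbo.Orders_demo
ADD CONSTRAINT UQ_Orders_demo_OrderNumber
UNIQUE (OrderNumber)
WITH (IGNORE_DUP_KEY = ON) ;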
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether statistics are recomputed. The default is OFF.
ON
Out-of-date statistics are not automatically recomputed.
OFF
Automatic statistics updating is enabled.
ALLOW_ROW_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when accessing the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
SORT_IN_TEMPDB = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies whether to store sort results in tempdb. The default is OFF.
ON
The intermediate sort results that are used to build the index are stored in tempdb. This may reduce the time
required to create an index if tempdb is on a different set of disks than the user database. However, this increases
the amount of disk space that is used during the index build.
OFF
The intermediate sort results are stored in the same database as the index.
ONLINE = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies whether underlying tables and associated indexes are available for queries and data modification during
the index operation. The default is OFF. REBUILD can be performed as an ONLINE operation.

NOTE
Unique nonclustered indexes cannot be created online. This includes indexes that are created due to a UNIQUE or PRIMARY
KEY constraint.

ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS) lock is held on the source table. This enables queries or updates to the
underlying table and indexes to proceed. At the start of the operation, a Shared (S) lock is held on the source
object for a very short period of time. At the end of the operation, for a short period of time, an S (Shared) lock is
acquired on the source if a nonclustered index is being created; or a Sch-M (Schema Modification) lock is
acquired when a clustered index is created or dropped online and when a clustered or nonclustered index is being
rebuilt. Although the online index locks are short metadata locks, the Sch-M lock in particular must wait for all
blocking transactions to be completed on this table. During the wait time, the Sch-M lock blocks all other
transactions that wait behind this lock when accessing the same table. ONLINE cannot be set to ON when an
index is being created on a local temporary table.

NOTE
Online index rebuild can set the low_priority_lock_wait options described later in this section. low_priority_lock_wait
manages S and Sch-M lock priority during online index rebuild.

OFF
Table locks are applied for the duration of the index operation. This prevents all user access to the underlying table
for the duration of the operation. An offline index operation that creates, rebuilds, or drops a clustered index, or
rebuilds or drops a nonclustered index, acquires a Schema modification (Sch-M) lock on the table. This prevents
all user access to the underlying table for the duration of the operation. An offline index operation that creates a
nonclustered index acquires a Shared (S) lock on the table. This prevents updates to the underlying table but
allows read operations, such as SELECT statements.
For more information, see How Online Index Operations Work.

NOTE
Online index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.

MAXDOP =max_degree_of_parallelism
Applies to: SQL Server 2008 through SQL Server 2017.
Overrides the max degree of parallelism configuration option for the duration of the index operation. For more
information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to limit the
number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism can be:
1 - Suppresses parallel plan generation.
>1 - Restricts the maximum number of processors used in a parallel index operation to the specified number.
0 (default) - Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.

NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.
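For instance, a rebuild of a hypothetical table might cap parallelism and sort in tempdb:

ALTER TABLE dbo.Orders_demo
REBUILD WITH (SORT_IN_TEMPDB = ON, MAXDOP = 4) ;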

DATA_COMPRESSION
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the data compression option for the specified table, partition number, or range of partitions. The options
are as follows:
NONE
Table or specified partitions are not compressed. Applies only to rowstore tables; does not apply to columnstore
tables.
ROW
Table or specified partitions are compressed by using row compression. Applies only to rowstore tables; does not
apply to columnstore tables.
PAGE
Table or specified partitions are compressed by using page compression. Applies only to rowstore tables; does not
apply to columnstore tables.
COLUMNSTORE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Applies only to columnstore tables. COLUMNSTORE specifies to decompress a partition that was compressed
with the COLUMNSTORE_ARCHIVE option. When the data is restored, the COLUMNSTORE index continues to
be compressed with the columnstore compression that is used for all columnstore tables.
COLUMNSTORE_ARCHIVE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Applies only to columnstore tables, which are tables stored with a clustered columnstore index.
COLUMNSTORE_ARCHIVE further compresses the specified partition to a smaller size. This can be used for
archival, or for other situations that require less storage and can afford more time for storage and retrieval.
For more information about compression, see Data Compression.
ON PARTITIONS ( { <partition_number_expression> | <range> } [ ,...n ] )
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the partitions to which the DATA_COMPRESSION setting applies. If the table is not partitioned, the ON
PARTITIONS argument generates an error. If the ON PARTITIONS clause is not provided, the
DATA_COMPRESSION option applies to all partitions of a partitioned table.
<partition_number_expression> can be specified in the following ways:
Provide the number of a partition, for example: ON PARTITIONS (2).
Provide the partition numbers for several individual partitions separated by commas, for example: ON
PARTITIONS (1, 5).
Provide both ranges and individual partitions, for example: ON PARTITIONS (2, 4, 6 TO 8).
<range> can be specified as partition numbers separated by the word TO, for example: ON PARTITIONS (6 TO 8).
To set different types of data compression for different partitions, specify the DATA_COMPRESSION option more
than once, for example:

--For rowstore tables
REBUILD WITH
(
DATA_COMPRESSION = NONE ON PARTITIONS (1),
DATA_COMPRESSION = ROW ON PARTITIONS (2, 4, 6 TO 8),
DATA_COMPRESSION = PAGE ON PARTITIONS (3, 5)
)

--For columnstore tables
REBUILD WITH
(
DATA_COMPRESSION = COLUMNSTORE ON PARTITIONS (1, 3, 5),
DATA_COMPRESSION = COLUMNSTORE_ARCHIVE ON PARTITIONS (2, 4, 6 TO 8)
)

<single_partition_rebuild_option>
In most cases, rebuilding an index rebuilds all partitions of a partitioned index. The following options, when
applied to a single partition, do not rebuild all partitions.
SORT_IN_TEMPDB
MAXDOP
DATA_COMPRESSION
low_priority_lock_wait
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
A SWITCH or online index rebuild completes as soon as there are no blocking operations for this table.
WAIT_AT_LOW_PRIORITY indicates that if the SWITCH or online index rebuild operation cannot be completed
immediately, it waits. The operation holds low priority locks, allowing other operations that hold locks conflicting
with the DDL statement to proceed. Omitting the WAIT_AT_LOW_PRIORITY option is equivalent to
WAIT_AT_LOW_PRIORITY ( MAX_DURATION = 0 minutes, ABORT_AFTER_WAIT = NONE ).

MAX_DURATION = time [ MINUTES ]

The wait time (an integer value specified in minutes) that the SWITCH or online index rebuild operation waits for
the required lock when executing the DDL command. The SWITCH or online index rebuild operation attempts to
complete immediately. If the operation is blocked for the MAX_DURATION time, one of the
ABORT_AFTER_WAIT actions is executed. MAX_DURATION time is always in minutes, and the word
MINUTES can be omitted.
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS }
NONE
Continues the SWITCH or online index rebuild operation without changing the lock priority (using regular
priority).
SELF
Exits the SWITCH or online index rebuild DDL operation currently being executed without taking any action.
BLOCKERS
Kills all user transactions that currently block the SWITCH or online index rebuild DDL operation so that the
operation can continue.
BLOCKERS requires the ALTER ANY CONNECTION permission.
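For illustration, the following is a minimal sketch of the option in context, assuming a hypothetical partitioned table dbo.Orders and a staging table dbo.OrdersStaging with a matching schema:

-- Wait up to 5 minutes at low priority; if still blocked, kill the
-- blocking user transactions so that the switch can proceed.
ALTER TABLE dbo.Orders
SWITCH PARTITION 1 TO dbo.OrdersStaging
WITH ( WAIT_AT_LOW_PRIORITY
( MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = BLOCKERS ) );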

Remarks
For a complete description of index options, see CREATE INDEX (Transact-SQL ).

See Also
ALTER TABLE (Transact-SQL )
column_constraint (Transact-SQL )
computed_column_definition (Transact-SQL )
table_constraint (Transact-SQL )
ALTER TABLE table_constraint (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the properties of a PRIMARY KEY, UNIQUE, FOREIGN KEY, a CHECK constraint, or a DEFAULT
definition added to a table by using ALTER TABLE.
Transact-SQL Syntax Conventions

Syntax
[ CONSTRAINT constraint_name ]
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
(column [ ASC | DESC ] [ ,...n ] )
[ WITH FILLFACTOR = fillfactor ]
[ WITH ( <index_option>[ , ...n ] ) ]
[ ON { partition_scheme_name ( partition_column_name ... )
| filegroup | "default" } ]
| FOREIGN KEY
( column [ ,...n ] )
REFERENCES referenced_table_name [ ( ref_column [ ,...n ] ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ NOT FOR REPLICATION ]
| DEFAULT constant_expression FOR column [ WITH VALUES ]
| CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}

Arguments
CONSTRAINT
Specifies the start of a definition for a PRIMARY KEY, UNIQUE, FOREIGN KEY, or CHECK constraint, or a
DEFAULT.
constraint_name
Is the name of the constraint. Constraint names must follow the rules for identifiers, except that the name cannot
start with a number sign (#). If constraint_name is not supplied, a system-generated name is assigned to the
constraint.
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column or columns by using a unique index. Only one
PRIMARY KEY constraint can be created for each table.
UNIQUE
Is a constraint that provides entity integrity for a specified column or columns by using a unique index.
CLUSTERED | NONCLUSTERED
Specifies that a clustered or nonclustered index is created for the PRIMARY KEY or UNIQUE constraint.
PRIMARY KEY constraints default to CLUSTERED. UNIQUE constraints default to NONCLUSTERED.
If a clustered constraint or index already exists on a table, CLUSTERED cannot be specified. If a clustered
constraint or index already exists on a table, PRIMARY KEY constraints default to NONCLUSTERED.
Columns that are of the ntext, text, varchar(max), nvarchar(max), varbinary(max), xml, or image data types
cannot be specified as columns for an index.
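For example, the following minimal sketch adds a clustered PRIMARY KEY constraint; the table, column, and constraint names are hypothetical:

-- Assumes dbo.Parts has a non-nullable PartID column with unique values.
ALTER TABLE dbo.Parts
ADD CONSTRAINT PK_Parts_PartID PRIMARY KEY CLUSTERED (PartID);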
column
Is a column or list of columns specified in parentheses that are used in a new constraint.
[ ASC | DESC ]
Specifies the order in which the column or columns participating in table constraints are sorted. The default is
ASC.
WITH FILLFACTOR = fillfactor
Specifies how full the Database Engine should make each index page used to store the index data. User-specified
fillfactor values can be from 1 through 100. If a value is not specified, the default is 0.

IMPORTANT
The ability to specify WITH FILLFACTOR = fillfactor as the only index option that applies to PRIMARY KEY or
UNIQUE constraints is maintained for backward compatibility, but will not be documented in this manner in future
releases. Other index options can be specified in the index_option clause of ALTER TABLE.

ON { partition_scheme_name(partition_column_name) | filegroup | "default" }


Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the storage location of the index created for the constraint. If partition_scheme_name is specified, the
index is partitioned and the partitions are mapped to the filegroups that are specified by partition_scheme_name. If
filegroup is specified, the index is created in the named filegroup. If "default" is specified or if ON is not specified
at all, the index is created in the same filegroup as the table. If ON is specified when a clustered index is added for a
PRIMARY KEY or UNIQUE constraint, the whole table is moved to the specified filegroup when the clustered
index is created.
In this context, default is not a keyword; it is an identifier for the default filegroup and must be delimited, as in ON
"default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current
session. This is the default setting.
FOREIGN KEY REFERENCES
Is a constraint that provides referential integrity for the data in the column. FOREIGN KEY constraints require that
each value in the column exist in the specified column in the referenced table.
referenced_table_name
Is the table referenced by the FOREIGN KEY constraint.
ref_column
Is a column or list of columns in parentheses referenced by the new FOREIGN KEY constraint.
ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }
Specifies what action happens to rows in the table that is altered, if those rows have a referential relationship and
the referenced row is deleted from the parent table. The default is NO ACTION.
NO ACTION
The SQL Server Database Engine raises an error and the delete action on the row in the parent table is rolled back.
CASCADE
Corresponding rows are deleted from the referencing table if that row is deleted from the parent table.
SET NULL
All the values that make up the foreign key are set to NULL when the corresponding row in the parent table is
deleted. For this constraint to execute, the foreign key columns must be nullable.
SET DEFAULT
All the values that comprise the foreign key are set to their default values when the corresponding row in the
parent table is deleted. For this constraint to execute, all foreign key columns must have default definitions. If a
column is nullable and there is no explicit default value set, NULL becomes the implicit default value of the column.
Do not specify CASCADE if the table will be included in a merge publication that uses logical records. For more
information about logical records, see Group Changes to Related Rows with Logical Records.
ON DELETE CASCADE cannot be defined if an INSTEAD OF trigger ON DELETE already exists on the table that
is being altered.
For example, in the AdventureWorks2012 database, the ProductVendor table has a referential relationship with
the Vendor table. The ProductVendor.VendorID foreign key references the Vendor.VendorID primary key.
If a DELETE statement is executed on a row in the Vendor table and an ON DELETE CASCADE action is specified
for ProductVendor.VendorID, the Database Engine checks for one or more dependent rows in the
ProductVendor table. If any exist, the dependent rows in the ProductVendor table will be deleted, in addition to
the row referenced in the Vendor table.
Conversely, if NO ACTION is specified, the Database Engine raises an error and rolls back the delete action on the
Vendor row when there is at least one row in the ProductVendor table that references it.
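The following minimal sketch defines such a cascading relationship on hypothetical dbo.Vendor (parent) and dbo.ProductVendor (child) tables:

-- Deleting a dbo.Vendor row also deletes the matching child rows.
ALTER TABLE dbo.ProductVendor
ADD CONSTRAINT FK_ProductVendor_Vendor FOREIGN KEY (VendorID)
REFERENCES dbo.Vendor (VendorID)
ON DELETE CASCADE;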
ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }
Specifies what action happens to rows in the table altered when those rows have a referential relationship and the
referenced row is updated in the parent table. The default is NO ACTION.
NO ACTION
The Database Engine raises an error, and the update action on the row in the parent table is rolled back.
CASCADE
Corresponding rows are updated in the referencing table when that row is updated in the parent table.
SET NULL
All the values that make up the foreign key are set to NULL when the corresponding row in the parent table is
updated. For this constraint to execute, the foreign key columns must be nullable.
SET DEFAULT
All the values that make up the foreign key are set to their default values when the corresponding row in the
parent table is updated. For this constraint to execute, all foreign key columns must have default definitions. If a
column is nullable, and there is no explicit default value set, NULL becomes the implicit default value of the
column.
Do not specify CASCADE if the table will be included in a merge publication that uses logical records. For more
information about logical records, see Group Changes to Related Rows with Logical Records.
ON UPDATE CASCADE, SET NULL, or SET DEFAULT cannot be defined if an INSTEAD OF trigger ON UPDATE
already exists on the table that is being altered.
For example, in the AdventureWorks2012 database, the ProductVendor table has a referential relationship with
the Vendor table. The ProductVendor.VendorID foreign key references the Vendor.VendorID primary key.
If an UPDATE statement is executed on a row in the Vendor table and an ON UPDATE CASCADE action is
specified for ProductVendor.VendorID, the Database Engine checks for one or more dependent rows in the
ProductVendor table. If any exist, the dependent row in the ProductVendor table will be updated, as well as the
row referenced in the Vendor table.
Conversely, if NO ACTION is specified, the Database Engine raises an error and rolls back the update action on the
Vendor row when there is at least one row in the ProductVendor table that references it.
NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017.
Can be specified for FOREIGN KEY constraints and CHECK constraints. If this clause is specified for a constraint,
the constraint is not enforced when replication agents perform insert, update, or delete operations.
DEFAULT
Specifies the default value for the column. DEFAULT definitions can be used to provide values for a new column in
the existing rows of data. DEFAULT definitions cannot be added to columns that have a timestamp data type, an
IDENTITY property, an existing DEFAULT definition, or a bound default. If the column has an existing default, the
default must be dropped before the new default can be added. If a default value is specified for a user-defined type
column, the type should support an implicit conversion from constant_expression to the user-defined type. To
maintain compatibility with earlier versions of SQL Server, a constraint name can be assigned to a DEFAULT.
constant_expression
Is a literal value, a NULL, or a system function that is used as the default column value. If constant_expression is
used in conjunction with a column defined to be of a Microsoft .NET Framework user-defined type, the
implementation of the type must support an implicit conversion from the constant_expression to the user-defined
type.
FOR column
Specifies the column associated with a table-level DEFAULT definition.
WITH VALUES
Specifies that the value given in DEFAULT constant_expression is stored in a new column that is added to existing
rows. WITH VALUES can be specified only when DEFAULT is specified in an ADD column clause. If the added
column allows null values and WITH VALUES is specified, the default value is stored in the new column that is
added to existing rows. If WITH VALUES is not specified for columns that allow nulls, NULL is stored in the new
column in existing rows. If the new column does not allow nulls, the default value is stored in new rows regardless
of whether WITH VALUES is specified.
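The following minimal sketch shows the ADD column form in which WITH VALUES is allowed; the table, column, and constraint names are hypothetical:

-- Because the new column allows nulls and WITH VALUES is specified,
-- existing rows receive the default 'Open' instead of NULL.
ALTER TABLE dbo.Orders
ADD Status varchar(10) NULL
CONSTRAINT DF_Orders_Status DEFAULT ('Open') WITH VALUES;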
CHECK
Is a constraint that enforces domain integrity by limiting the possible values that can be entered into a column or
columns.
logical_expression
Is a logical expression used in a CHECK constraint and returns TRUE or FALSE. logical_expression used with
CHECK constraints cannot reference another table but can reference other columns in the same table for the same
row. The expression cannot reference an alias data type.

Remarks
When FOREIGN KEY or CHECK constraints are added, all existing data is verified for constraint violations unless
the WITH NOCHECK option is specified. If any violations occur, ALTER TABLE fails and an error is returned. When
a new PRIMARY KEY or UNIQUE constraint is added to an existing column, the data in the column or columns
must be unique. If duplicate values are found, ALTER TABLE fails. The WITH NOCHECK option has no effect when
PRIMARY KEY or UNIQUE constraints are added.
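The following minimal sketch adds a CHECK constraint without validating existing rows; the table, column, and constraint names are hypothetical:

-- WITH NOCHECK skips verification of existing data; inserts and updates
-- are still checked against the constraint.
ALTER TABLE dbo.Orders WITH NOCHECK
ADD CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0);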
Each PRIMARY KEY and UNIQUE constraint generates an index. The number of UNIQUE and PRIMARY KEY
constraints cannot cause the number of indexes on the table to exceed 999 nonclustered indexes and 1 clustered
index. Foreign key constraints do not automatically generate an index. However, foreign key columns are
frequently used in join criteria in queries by matching the column or columns in the foreign key constraint of one
table with the primary or unique key column or columns in the other table. An index on the foreign key columns
enables the Database Engine to quickly find related data in the foreign key table.
Examples
For examples, see ALTER TABLE (Transact-SQL ).

See Also
ALTER TABLE (Transact-SQL )
ALTER TRIGGER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the definition of a DML, DDL, or logon trigger that was previously created by the CREATE TRIGGER
statement. Triggers are created by using CREATE TRIGGER. They can be created directly from Transact-SQL
statements or from methods of assemblies that are created in the Microsoft .NET Framework common language
runtime (CLR ) and uploaded to an instance of SQL Server. For more information about the parameters that are
used in the ALTER TRIGGER statement, see CREATE TRIGGER (Transact-SQL ).
Transact-SQL Syntax Conventions

Syntax
-- SQL Server Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger)

ALTER TRIGGER schema_name.trigger_name


ON ( table | view )
[ WITH <dml_trigger_option> [ ,...n ] ]
( FOR | AFTER | INSTEAD OF )
{ [ DELETE ] [ , ] [ INSERT ] [ , ] [ UPDATE ] }
[ NOT FOR REPLICATION ]
AS { sql_statement [ ; ] [ ...n ] | EXTERNAL NAME <method specifier>
[ ; ] }

<dml_trigger_option> ::=
[ ENCRYPTION ]
[ <EXECUTE AS Clause> ]

<method_specifier> ::=
assembly_name.class_name.method_name

-- Trigger on an INSERT, UPDATE, or DELETE statement to a table


-- (DML Trigger on memory-optimized tables)

ALTER TRIGGER schema_name.trigger_name


ON ( table )
[ WITH <dml_trigger_option> [ ,...n ] ]
( FOR | AFTER )
{ [ DELETE ] [ , ] [ INSERT ] [ , ] [ UPDATE ] }
AS { sql_statement [ ; ] [ ...n ] }

<dml_trigger_option> ::=
[ NATIVE_COMPILATION ]
[ SCHEMABINDING ]
[ <EXECUTE AS Clause> ]

-- Trigger on a CREATE, ALTER, DROP, GRANT, DENY, REVOKE,


-- or UPDATE statement (DDL Trigger)

ALTER TRIGGER trigger_name


ON { DATABASE | ALL SERVER }
[ WITH <ddl_trigger_option> [ ,...n ] ]
{ FOR | AFTER } { event_type [ ,...n ] | event_group }
AS { sql_statement [ ; ] | EXTERNAL NAME <method specifier>
[ ; ] }
<ddl_trigger_option> ::=
[ ENCRYPTION ]
[ <EXECUTE AS Clause> ]

<method_specifier> ::=
assembly_name.class_name.method_name

-- Trigger on a LOGON event (Logon Trigger)

ALTER TRIGGER trigger_name


ON ALL SERVER
[ WITH <logon_trigger_option> [ ,...n ] ]
{ FOR| AFTER } LOGON
AS { sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME < method specifier >
[ ; ] }

<logon_trigger_option> ::=
[ ENCRYPTION ]
[ EXECUTE AS Clause ]

<method_specifier> ::=
assembly_name.class_name.method_name

-- Windows Azure SQL Database Syntax


-- Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger)

ALTER TRIGGER schema_name.trigger_name


ON (table | view )
[ WITH <dml_trigger_option> [ ,...n ] ]
( FOR | AFTER | INSTEAD OF )
{ [ DELETE ] [ , ] [ INSERT ] [ , ] [ UPDATE ] }
AS { sql_statement [ ; ] [...n ] }

<dml_trigger_option> ::=
[ <EXECUTE AS Clause> ]

-- Trigger on a CREATE, ALTER, DROP, GRANT, DENY, REVOKE, or UPDATE statement (DDL Trigger)

ALTER TRIGGER trigger_name


ON { DATABASE }
[ WITH <ddl_trigger_option> [ ,...n ] ]
{ FOR | AFTER } { event_type [ ,...n ] | event_group }
AS { sql_statement
[ ; ] }

<ddl_trigger_option> ::=
[ <EXECUTE AS Clause> ]

Arguments
schema_name
Is the name of the schema to which a DML trigger belongs. DML triggers are scoped to the schema of the table or
view on which they are created. schema_name is optional only if the DML trigger and its corresponding table or
view belong to the default schema. schema_name cannot be specified for DDL or logon triggers.
trigger_name
Is the existing trigger to modify.
table | view
Is the table or view on which the DML trigger is executed. Specifying the fully-qualified name of the table or view
is optional.
DATABASE
Applies the scope of a DDL trigger to the current database. If specified, the trigger fires whenever event_type or
event_group occurs in the current database.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
Applies the scope of a DDL or logon trigger to the current server. If specified, the trigger fires whenever
event_type or event_group occurs anywhere in the current server.
WITH ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017.
Encrypts the sys.syscomments or sys.sql_modules entries that contain the text of the ALTER TRIGGER statement.
Using WITH ENCRYPTION prevents the trigger from being published as part of SQL Server replication. WITH
ENCRYPTION cannot be specified for CLR triggers.

NOTE
If a trigger is created by using WITH ENCRYPTION, it must be specified again in the ALTER TRIGGER statement for this option
to remain enabled.

EXECUTE AS
Specifies the security context under which the trigger is executed. Enables you to control the user account the
instance of SQL Server uses to validate permissions on any database objects that are referenced by the trigger.
For more information, see EXECUTE AS Clause (Transact-SQL ).
NATIVE_COMPILATION
Indicates that the trigger is natively compiled.
This option is required for triggers on memory-optimized tables.
SCHEMABINDING
Ensures that tables that are referenced by a trigger cannot be dropped or altered.
This option is required for triggers on memory-optimized tables and is not supported for triggers on traditional
tables.
AFTER
Specifies that the trigger is fired only after the triggering SQL statement is executed successfully. All referential
cascade actions and constraint checks also must have been successful before this trigger fires.
AFTER is the default, if only the FOR keyword is specified.
DML AFTER triggers may be defined only on tables.
INSTEAD OF
Specifies that the DML trigger is executed instead of the triggering SQL statement, therefore, overriding the
actions of the triggering statements. INSTEAD OF cannot be specified for DDL or logon triggers.
At most, one INSTEAD OF trigger per INSERT, UPDATE, or DELETE statement can be defined on a table or view.
However, you can define views on views where each view has its own INSTEAD OF trigger.
INSTEAD OF triggers are not allowed on views created by using WITH CHECK OPTION. SQL Server raises an
error when an INSTEAD OF trigger is added to a view for which WITH CHECK OPTION was specified. The user
must remove that option using ALTER VIEW before defining the INSTEAD OF trigger.
{ [ DELETE ] [ , ] [ INSERT ] [ , ] [ UPDATE ] } | { [ INSERT ] [ , ] [ UPDATE ] }
Specifies the data modification statements that, when tried against this table or view, activate the DML trigger. At
least one option must be specified. Any combination of these options in any order is allowed in the trigger
definition. If more than one option is specified, separate the options with commas.
For INSTEAD OF triggers, the DELETE option is not allowed on tables that have a referential relationship
specifying a cascade action ON DELETE. Similarly, the UPDATE option is not allowed on tables that have a
referential relationship specifying a cascade action ON UPDATE. For more information, see ALTER TABLE
(Transact-SQL ).
event_type
Is the name of a Transact-SQL language event that, after execution, causes a DDL trigger to fire. Valid events for
DDL triggers are listed in DDL Events.
event_group
Is the name of a predefined grouping of Transact-SQL language events. The DDL trigger fires after execution of
any Transact-SQL language event that belongs to event_group. Valid event groups for DDL triggers are listed in
DDL Event Groups. After ALTER TRIGGER has finished running, event_group also acts as a macro by adding the
event types it covers to the sys.trigger_events catalog view.
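The following minimal sketch alters a hypothetical database-scoped DDL trigger to fire on the DDL_TABLE_EVENTS event group, which covers CREATE_TABLE, ALTER_TABLE, and DROP_TABLE:

ALTER TRIGGER ddl_table_audit
ON DATABASE
FOR DDL_TABLE_EVENTS
AS
PRINT 'A table was created, altered, or dropped.';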
NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates that the trigger should not be executed when a replication agent modifies the table involved in the
trigger.
sql_statement
Is the trigger conditions and actions.
For triggers on memory-optimized tables, the only sql_statement allowed at the top level is an ATOMIC block. The
T-SQL allowed inside the ATOMIC block is limited by the T-SQL allowed inside native procs.
EXTERNAL NAME <method_specifier>
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the method of an assembly to bind with the trigger. The method must take no arguments and return
void. class_name must be a valid SQL Server identifier and must exist as a class in the assembly with assembly
visibility. The class cannot be a nested class.

Remarks
For more information about ALTER TRIGGER, see Remarks in CREATE TRIGGER (Transact-SQL ).

NOTE
The EXTERNAL_NAME and ON_ALL_SERVER options are not available in a contained database.

DML Triggers
ALTER TRIGGER supports manually updatable views through INSTEAD OF triggers on tables and views. SQL
Server applies ALTER TRIGGER the same way for all kinds of triggers (AFTER, INSTEAD OF).
The first and last AFTER triggers to be executed on a table can be specified by using sp_settriggerorder. Only one
first and one last AFTER trigger can be specified on a table. If there are other AFTER triggers on the same table,
they are randomly executed.
If an ALTER TRIGGER statement changes a first or last trigger, the first or last attribute set on the modified trigger
is dropped, and the order value must be reset by using sp_settriggerorder.
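The following minimal sketch resets the order for a hypothetical AFTER INSERT trigger:

-- Re-establish the trigger as the first AFTER trigger to fire for INSERT.
EXEC sp_settriggerorder
@triggername = N'dbo.trg_audit'
, @order = N'First'
, @stmttype = N'INSERT';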
An AFTER trigger is executed only after the triggering SQL statement has executed successfully. This successful
execution includes all referential cascade actions and constraint checks associated with the object updated or
deleted. The AFTER trigger operation checks for the effects of the triggering statement and also all referential
cascade UPDATE and DELETE actions that are caused by the triggering statement.
When a DELETE action to a child or referencing table is the result of a CASCADE on a DELETE from the parent
table, and an INSTEAD OF trigger on DELETE is defined on that child table, the trigger is ignored and the DELETE
action is executed.

DDL Triggers
Unlike DML triggers, DDL triggers are not scoped to schemas. Therefore, the OBJECT_ID, OBJECT_NAME,
OBJECTPROPERTY, and OBJECTPROPERTYEX functions cannot be used when querying metadata about DDL
triggers. Use the catalog views instead. For more information, see Get Information About DDL Triggers.

Logon Triggers
Azure SQL Database does not support triggers on logon events.

Permissions
To alter a DML trigger requires ALTER permission on the table or view on which the trigger is defined.
To alter a DDL trigger defined with server scope (ON ALL SERVER ) or a logon trigger requires CONTROL
SERVER permission on the server. To alter a DDL trigger defined with database scope (ON DATABASE ) requires
ALTER ANY DATABASE DDL TRIGGER permission in the current database.

Examples
The following example creates a DML trigger in the AdventureWorks2012 database that prints a user-defined
message to the client when a user tries to add or change data in the SalesPersonQuotaHistory table. The trigger is
then modified by using ALTER TRIGGER to apply the trigger only on INSERT activities. This trigger is helpful
because it reminds the user who updates or inserts rows into this table to also notify the Compensation
department.

CREATE TRIGGER Sales.bonus_reminder


ON Sales.SalesPersonQuotaHistory
WITH ENCRYPTION
AFTER INSERT, UPDATE
AS RAISERROR ('Notify Compensation', 16, 10);
GO

-- Now, change the trigger.


ALTER TRIGGER Sales.bonus_reminder
ON Sales.SalesPersonQuotaHistory
AFTER INSERT
AS RAISERROR ('Notify Compensation', 16, 10);
GO

See Also
DROP TRIGGER (Transact-SQL )
ENABLE TRIGGER (Transact-SQL )
DISABLE TRIGGER (Transact-SQL )
EVENTDATA (Transact-SQL )
sp_helptrigger (Transact-SQL )
Create a Stored Procedure
sp_addmessage (Transact-SQL )
Transactions
Get Information About DML Triggers
Get Information About DDL Triggers
sys.triggers (Transact-SQL )
sys.trigger_events (Transact-SQL )
sys.sql_modules (Transact-SQL )
sys.assembly_modules (Transact-SQL )
sys.server_triggers (Transact-SQL )
sys.server_trigger_events (Transact-SQL )
sys.server_sql_modules (Transact-SQL )
sys.server_assembly_modules (Transact-SQL )
Make Schema Changes on Publication Databases
ALTER USER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Renames a database user or changes its default schema.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

ALTER USER userName


WITH <set_item> [ ,...n ]
[;]

<set_item> ::=
NAME = newUserName
| DEFAULT_SCHEMA = { schemaName | NULL }
| LOGIN = loginName
| PASSWORD = 'password' [ OLD_PASSWORD = 'oldpassword' ]
| DEFAULT_LANGUAGE = { NONE | <lcid> | <language name> | <language alias> }
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]

-- Syntax for Azure SQL Database

ALTER USER userName


WITH <set_item> [ ,...n ]

<set_item> ::=
NAME = newUserName
| DEFAULT_SCHEMA = schemaName
| LOGIN = loginName
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]
[;]

-- Azure SQL Database Update Syntax


ALTER USER userName
WITH <set_item> [ ,...n ]
[;]

<set_item> ::=
NAME = newUserName
| DEFAULT_SCHEMA = { schemaName | NULL }
| LOGIN = loginName
| PASSWORD = 'password' [ OLD_PASSWORD = 'oldpassword' ]
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]

-- SQL Database syntax when connected to a federation member


ALTER USER userName
WITH <set_item> [ ,...n ]
[;]

<set_item> ::=
NAME = newUserName
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

ALTER USER userName


WITH <set_item> [ ,...n ]

<set_item> ::=
NAME = newUserName
| LOGIN = loginName
| DEFAULT_SCHEMA = schema_name
[;]

Arguments
userName
Specifies the name by which the user is identified inside this database.
LOGIN = loginName
Re-maps a user to another login by changing the user's Security Identifier (SID ) to match the login's SID.
If the ALTER USER statement is the only statement in a SQL batch, Windows Azure SQL Database supports the
WITH LOGIN clause. If the ALTER USER statement is not the only statement in a SQL batch or is executed in
dynamic SQL, the WITH LOGIN clause is not supported.
NAME = newUserName
Specifies the new name for this user. newUserName must not already occur in the current database.
DEFAULT_SCHEMA = { schemaName | NULL }
Specifies the first schema that will be searched by the server when it resolves the names of objects for this user.
Setting the default schema to NULL removes a default schema from a Windows group. The NULL option cannot
be used with a Windows user.
PASSWORD = 'password'
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Specifies the password for the user that is being changed. Passwords are case-sensitive.

NOTE
This option is available only for contained users. See Contained Databases and sp_migrate_user_to_contained (Transact-SQL)
for more information.

OLD_PASSWORD = 'oldpassword'
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
The current user password that will be replaced by 'password'. Passwords are case-sensitive. OLD_PASSWORD is
required to change a password, unless you have ALTER ANY USER permission. Requiring OLD_PASSWORD
prevents users with IMPERSONATION permission from changing the password.

NOTE
This option is available only for contained users.

DEFAULT_LANGUAGE = { NONE | <lcid> | <language name> | <language alias> }


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies a default language to be assigned to the user. If this option is set to NONE, the default language is set to
the current default language of the database. If the default language of the database is later changed, the default
language of the user will remain unchanged. DEFAULT_LANGUAGE can be the local ID (lcid), the name of the
language, or the language alias.

NOTE
This option may only be specified in a contained database and only for contained users.

ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.
Suppresses cryptographic metadata checks on the server in bulk copy operations. This enables the user to bulk
copy encrypted data between tables or databases, without decrypting the data. The default is OFF.

WARNING
Improper use of this option can lead to data corruption. For more information, see Migrate Sensitive Data Protected by
Always Encrypted.
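The following minimal sketch, using a hypothetical user name, enables the option only for the duration of the bulk copy:

ALTER USER BulkLoadUser WITH ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = ON;
-- ... perform the bulk copy of encrypted data ...
ALTER USER BulkLoadUser WITH ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = OFF;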

Remarks
The default schema will be the first schema that will be searched by the server when it resolves the names of
objects for this database user. Unless otherwise specified, the default schema will be the owner of objects created
by this database user.
If the user has a default schema, that default schema will be used. If the user does not have a default schema, but
the user is a member of a group that has a default schema, the default schema of the group will be used. If the
user does not have a default schema and is a member of more than one group, the default schema for the user
will be that of the Windows group with the lowest principal_id and an explicitly set default schema. If no default
schema can be determined for a user, the dbo schema will be used.
DEFAULT_SCHEMA can be set to a schema that does not currently occur in the database. Therefore, you can
assign a DEFAULT_SCHEMA to a user before that schema is created.
DEFAULT_SCHEMA cannot be specified for a user who is mapped to a certificate, or an asymmetric key.

IMPORTANT
The value of DEFAULT_SCHEMA is ignored if the user is a member of the sysadmin fixed server role. All members of the
sysadmin fixed server role have a default schema of dbo .

You can change the name of a user who is mapped to a Windows login or group only when the SID of the new
user name matches the SID that is recorded in the database. This check helps prevent spoofing of Windows logins
in the database.
The WITH LOGIN clause enables the remapping of a user to a different login. Users without a login, users mapped
to a certificate, or users mapped to an asymmetric key cannot be re-mapped with this clause. Only SQL users and
Windows users (or groups) can be remapped. The WITH LOGIN clause cannot be used to change the type of user,
such as changing a Windows account to a SQL Server login.
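The following minimal sketch remaps an orphaned database user to a server login; both names are hypothetical:

-- The user's SID is changed to match the SID of the login.
ALTER USER AppUser WITH LOGIN = AppLogin;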
The user is automatically renamed to the login name if the following conditions are true.
The user is a Windows user.
The name is a Windows name (contains a backslash).
No new name was specified.
The current name differs from the login name.
Otherwise, the user will not be renamed unless the caller additionally invokes the NAME clause.
The name of a user mapped to a SQL Server login, a certificate, or an asymmetric key cannot contain the
backslash character (\).
Caution

Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER AUTHORIZATION.
In such databases you must instead use the new catalog views. The new catalog views take into account the
separation of principals and schemas that was introduced in SQL Server 2005. For more information about
catalog views, see Catalog Views (Transact-SQL ).

Security
NOTE
A user who has ALTER ANY USER permission can change the default schema of any user. A user who has an altered schema
might unknowingly select data from the wrong table or execute code from the wrong schema.

Permissions
To change the name of a user requires the ALTER ANY USER permission.
To change the target login of a user requires the CONTROL permission on the database.
To change the user name of a user having CONTROL permission on the database requires the CONTROL
permission on the database.
To change the default schema or language requires ALTER permission on the user. Users can change their own
default schema or language.

Examples
All examples are executed in a user database.
A. Changing the name of a database user
The following example changes the name of the database user Mary5 to Mary51 .

ALTER USER Mary5 WITH NAME = Mary51;


GO

B. Changing the default schema of a user


The following example changes the default schema of the user Mary51 to Purchasing .

ALTER USER Mary51 WITH DEFAULT_SCHEMA = Purchasing;


GO

C. Changing several options at once


The following example changes several options for a contained database user in one statement.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.

ALTER USER Philip


WITH NAME = Philipe
, DEFAULT_SCHEMA = Development
, PASSWORD = 'W1r77TT98%ab@#' OLD_PASSWORD = 'New Devel0per'
, DEFAULT_LANGUAGE = French ;
GO

See Also
CREATE USER (Transact-SQL )
DROP USER (Transact-SQL )
Contained Databases
EVENTDATA (Transact-SQL )
sp_migrate_user_to_contained (Transact-SQL )
ALTER VIEW (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies a previously created view. This includes an indexed view. ALTER VIEW does not affect dependent stored
procedures or triggers and does not change permissions.
Transact-SQL Syntax Conventions

Syntax
ALTER VIEW [ schema_name . ] view_name [ ( column [ ,...n ] ) ]
[ WITH <view_attribute> [ ,...n ] ]
AS select_statement
[ WITH CHECK OPTION ] [ ; ]

<view_attribute> ::=
{
[ ENCRYPTION ]
[ SCHEMABINDING ]
[ VIEW_METADATA ]
}

Arguments
schema_name
Is the name of the schema to which the view belongs.
view_name
Is the view to change.
column
Is the name of one or more columns, separated by commas, that are to be part of the specified view.

IMPORTANT
Column permissions are maintained only when columns have the same name before and after ALTER VIEW is performed.

NOTE
In the columns for the view, the permissions for a column name apply across a CREATE VIEW or ALTER VIEW statement,
regardless of the source of the underlying data. For example, if permissions are granted on the SalesOrderID column in a
CREATE VIEW statement, an ALTER VIEW statement can rename the SalesOrderID column, such as to OrderRef, and still
have the permissions associated with the view using SalesOrderID.

ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Encrypts the entries in sys.syscomments that contain the text of the ALTER VIEW statement. WITH ENCRYPTION
prevents the view from being published as part of SQL Server replication.
SCHEMABINDING
Binds the view to the schema of the underlying table or tables. When SCHEMABINDING is specified, the base
tables cannot be modified in a way that would affect the view definition. The view definition itself must first be
modified or dropped to remove dependencies on the table to be modified. When you use SCHEMABINDING, the
select_statement must include the two-part names (schema.object) of tables, views, or user-defined functions that
are referenced. All referenced objects must be in the same database.
Views or tables that participate in a view created with the SCHEMABINDING clause cannot be dropped, unless
that view is dropped or changed so that it no longer has schema binding. Otherwise, the Database Engine raises an
error. Also, executing ALTER TABLE statements on tables that participate in views that have schema binding fail if
these statements affect the view definition.
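The following minimal sketch rebinds a view with SCHEMABINDING; the view and table names are hypothetical. Note the required two-part table name in the SELECT:

ALTER VIEW dbo.ActiveOrders
WITH SCHEMABINDING
AS
SELECT OrderID, Status
FROM dbo.Orders
WHERE Status = 'Open';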
VIEW_METADATA
Specifies that the instance of SQL Server will return to the DB-Library, ODBC, and OLE DB APIs the metadata
information about the view, instead of the base table or tables, when browse-mode metadata is being requested
for a query that references the view. Browse-mode metadata is additional metadata that the instance of the
Database Engine returns to the client-side DB-Library, ODBC, and OLE DB APIs. This metadata enables the
client-side APIs to implement updatable client-side cursors. Browse-mode metadata includes information about
the base table that the columns in the result set belong to.
For views created with VIEW_METADATA, the browse-mode metadata returns the view name and not the base
table names when it describes columns from the view in the result set.
When a view is created by using WITH VIEW_METADATA, all its columns, except a timestamp column, are
updatable if the view has INSERT or UPDATE INSTEAD OF triggers. For more information, see the Remarks
section in CREATE VIEW (Transact-SQL ).
AS
Are the actions the view is to take.
select_statement
Is the SELECT statement that defines the view.
WITH CHECK OPTION
Forces all data modification statements that are executed against the view to follow the criteria set within
select_statement.

Remarks
For more information about ALTER VIEW, see Remarks in CREATE VIEW (Transact-SQL ).

NOTE
If the previous view definition was created by using WITH ENCRYPTION or CHECK OPTION, these options are enabled only if
they are included in ALTER VIEW.

If a view currently used is modified by using ALTER VIEW, the Database Engine takes an exclusive schema lock on
the view. When the lock is granted, and there are no active users of the view, the Database Engine deletes all copies
of the view from the procedure cache. Existing plans referencing the view remain in the cache but are recompiled
when invoked.
ALTER VIEW can be applied to indexed views; however, ALTER VIEW unconditionally drops all indexes on the
view.

Permissions
To execute ALTER VIEW, at a minimum, ALTER permission on OBJECT is required.

Examples
The following example creates a view called EmployeeHireDate that contains all employees and their hire dates.
Permissions are granted to the view, but requirements are changed to select employees whose hire dates fall
before a certain date. Then, ALTER VIEW is used to replace the view.

USE AdventureWorks2012 ;
GO
CREATE VIEW HumanResources.EmployeeHireDate
AS
SELECT p.FirstName, p.LastName, e.HireDate
FROM HumanResources.Employee AS e JOIN Person.Person AS p
ON e.BusinessEntityID = p.BusinessEntityID ;
GO

The view must be changed to include only the employees that were hired before 2002 . If ALTER VIEW is not used,
but instead the view is dropped and re-created, the previously used GRANT statement and any other statements
that deal with permissions pertaining to this view must be re-entered.

ALTER VIEW HumanResources.EmployeeHireDate


AS
SELECT p.FirstName, p.LastName, e.HireDate
FROM HumanResources.Employee AS e JOIN Person.Person AS p
ON e.BusinessEntityID = p.BusinessEntityID
WHERE HireDate < CONVERT(DATETIME,'20020101',101) ;
GO

See Also
CREATE TABLE (Transact-SQL )
CREATE VIEW (Transact-SQL )
DROP VIEW (Transact-SQL )
Create a Stored Procedure
SELECT (Transact-SQL )
EVENTDATA (Transact-SQL )
Make Schema Changes on Publication Databases
ALTER WORKLOAD GROUP (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes an existing Resource Governor workload group configuration, and optionally assigns it to a Resource
Governor resource pool.
Transact-SQL Syntax Conventions

Syntax
ALTER WORKLOAD GROUP { group_name | "default" }
[ WITH
([ IMPORTANCE = { LOW | MEDIUM | HIGH } ]
[ [ , ] REQUEST_MAX_MEMORY_GRANT_PERCENT = value ]
[ [ , ] REQUEST_MAX_CPU_TIME_SEC = value ]
[ [ , ] REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value ]
[ [ , ] MAX_DOP = value ]
[ [ , ] GROUP_MAX_REQUESTS = value ] )
]
[ USING { pool_name | "default" } ]
[ ; ]

Arguments
group_name | "default"
Is the name of an existing user-defined workload group or the Resource Governor default workload group.

NOTE
Resource Governor creates the "default" and internal groups when SQL Server is installed.

The option "default" must be enclosed by quotation marks ("") or brackets ([]) when used with ALTER
WORKLOAD GROUP to avoid conflict with DEFAULT, which is a system reserved word. For more information,
see Database Identifiers.

NOTE
Predefined workload groups and resource pools all use lowercase names, such as "default". This should be taken into
account for servers that use case-sensitive collation. Servers with case-insensitive collation, such as
SQL_Latin1_General_CP1_CI_AS, will treat "default" and "Default" as the same.

IMPORTANCE = { LOW | MEDIUM | HIGH }


Specifies the relative importance of a request in the workload group. Importance is one of the following:
LOW
MEDIUM (default)
HIGH
NOTE
Internally each importance setting is stored as a number that is used for calculations.

IMPORTANCE is local to the resource pool; workload groups of different importance inside the same resource
pool affect each other, but do not affect workload groups in another resource pool.
REQUEST_MAX_MEMORY_GRANT_PERCENT = value
Specifies the maximum amount of memory that a single request can take from the pool. This percentage is
relative to the resource pool size specified by MAX_MEMORY_PERCENT.

NOTE
The amount specified only refers to query execution grant memory.

value must be 0 or a positive integer. The allowed range for value is from 0 through 100. The default setting for
value is 25.
Note the following:
Setting value to 0 prevents queries with SORT and HASH JOIN operations in user-defined workload
groups from running.
We do not recommend setting value greater than 70 because the server may be unable to set aside
enough free memory if other concurrent queries are running. This may eventually lead to query time-out
error 8645.

NOTE
If the query memory requirements exceed the limit that is specified by this parameter, the server does the following:
For user-defined workload groups, the server tries to reduce the query degree of parallelism until the memory requirement
falls under the limit, or until the degree of parallelism equals 1. If the query memory requirement is still greater than the
limit, error 8657 occurs.
For internal and default workload groups, the server permits the query to obtain the required memory.
Be aware that both cases are subject to time-out error 8645 if the server has insufficient physical memory.
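The following minimal sketch caps the per-request grant for a hypothetical workload group named reporting:

ALTER WORKLOAD GROUP reporting
WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 30);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO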

REQUEST_MAX_CPU_TIME_SEC = value
Specifies the maximum amount of CPU time, in seconds, that a request can use. value must be 0 or a positive
integer. The default setting for value is 0, which means unlimited.

NOTE
By default, Resource Governor will not prevent a request from continuing if the maximum time is exceeded. However, an
event will be generated. For more information, see CPU Threshold Exceeded Event Class.

IMPORTANT
Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x) CU3, and using trace flag 2422, Resource Governor
will abort a request when the maximum time is exceeded.

REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value
Specifies the maximum time, in seconds, that a query can wait for memory grant (work buffer memory) to
become available.

NOTE
A query does not always fail when memory grant time-out is reached. A query will only fail if there are too many concurrent
queries running. Otherwise, the query may only get the minimum memory grant, resulting in reduced query performance.

value must be a positive integer. The default setting for value, 0, uses an internal calculation based on query cost
to determine the maximum time.
MAX_DOP = value
Specifies the maximum degree of parallelism (DOP) for parallel requests. value must be 0 or a positive integer, 1
through 255. When value is 0, the server chooses the max degree of parallelism. This is the default and
recommended setting.

NOTE
The actual value that the Database Engine sets for MAX_DOP might be less than the specified value. The final
value is determined by the formula min(255, number of CPUs).

Caution

Changing MAX_DOP can adversely affect a server's performance. If you must change MAX_DOP, we recommend
that it be set to a value that is less than or equal to the maximum number of hardware schedulers that are present
in a single NUMA node. We recommend that you do not set MAX_DOP to a value greater than 8.
MAX_DOP is handled as follows:
MAX_DOP as a query hint is honored as long as it does not exceed workload group MAX_DOP.
MAX_DOP as a query hint always overrides sp_configure 'max degree of parallelism'.
Workload group MAX_DOP overrides sp_configure 'max degree of parallelism'.
If the query is marked as serial (MAX_DOP = 1 ) at compile time, it cannot be changed back to parallel at
run time regardless of the workload group or sp_configure setting.
After DOP is configured, it can only be lowered on grant memory pressure. Workload group
reconfiguration is not visible while waiting in the grant memory queue.
GROUP_MAX_REQUESTS = value
Specifies the maximum number of simultaneous requests that are allowed to execute in the workload
group. value must be 0 or a positive integer. The default setting for value, 0, allows unlimited requests.
When the maximum concurrent requests are reached, a user in that group can log in, but is placed in a wait
state until concurrent requests are dropped below the value specified.
USING { pool_name | "default" }
Associates the workload group with the user-defined resource pool identified by pool_name, which in effect
puts the workload group in the resource pool. If pool_name is not provided or if the USING argument is
not used, the workload group is put in the predefined Resource Governor default pool.
The option "default" must be enclosed by quotation marks ("") or brackets ([]) when used with ALTER
WORKLOAD GROUP to avoid conflict with DEFAULT, which is a system reserved word. For more
information, see Database Identifiers.
NOTE
The option "default" is case-sensitive.

Remarks
ALTER WORKLOAD GROUP is allowed on the default group.
Changes to the workload group configuration do not take effect until after ALTER RESOURCE GOVERNOR
RECONFIGURE is executed. When changing a plan affecting setting, the new setting will only take effect in
previously cached plans after executing DBCC FREEPROCCACHE (pool_name), where pool_name is the name of
a Resource Governor resource pool on which the workload group is associated with.
If you are changing MAX_DOP to 1, executing DBCC FREEPROCCACHE is not required because parallel
plans can run in serial mode. However, it may not be as efficient as a plan compiled as a serial plan.
If you are changing MAX_DOP from 1 to 0 or a value greater than 1, executing DBCC FREEPROCCACHE
is not required. However, serial plans cannot run in parallel, so clearing the respective cache will allow new
plans to potentially be compiled using parallelism.
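The following minimal sketch shows the full sequence for a plan-affecting change, using hypothetical group and pool names:

ALTER WORKLOAD GROUP adHocQueries WITH (MAX_DOP = 4);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
-- Clear plans cached for the resource pool the group is associated with.
DBCC FREEPROCCACHE ('adHocPool');
GO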
Caution

Clearing cached plans from a resource pool that is associated with more than one workload group will affect all
workload groups with the user-defined resource pool identified by pool_name.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states. For
more information, see Resource Governor.
REQUEST_MEMORY_GRANT_PERCENT: In SQL Server 2005, index creation is allowed to use more workspace
memory than initially granted for improved performance. This special handling is supported by Resource
Governor in later versions, however, the initial grant and any additional memory grant are limited by resource
pool and workload group settings.
Index Creation on a Partitioned Table
The memory consumed by index creation on non-aligned partitioned table is proportional to the number of
partitions involved. If the total required memory exceeds the per-query limit
(REQUEST_MAX_MEMORY_GRANT_PERCENT) imposed by the Resource Governor workload group setting,
this index creation may fail to execute. Because, for SQL Server 2005 compatibility, the "default" workload group
allows a query to exceed the per-query limit with the minimum required memory to start, the user may be able to
run the same index creation in the "default" workload group, if the "default" resource pool has enough total
memory configured to run such a query.

Permissions
Requires CONTROL SERVER permission.

Examples
The following example shows how to change the importance of requests in the default group from MEDIUM to
LOW .
ALTER WORKLOAD GROUP "default"
WITH (IMPORTANCE = LOW);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

The following example shows how to move a workload group from the pool that it is in to the default pool.

ALTER WORKLOAD GROUP adHoc


USING [default];
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

See Also
Resource Governor
CREATE WORKLOAD GROUP (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL )
CREATE RESOURCE POOL (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
ALTER XML SCHEMA COLLECTION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds new schema components to an existing XML schema collection.
Transact-SQL Syntax Conventions

Syntax
ALTER XML SCHEMA COLLECTION [ relational_schema. ]sql_identifier ADD 'Schema Component'

Arguments
relational_schema
Identifies the relational schema name. If not specified, the default relational schema is assumed.
sql_identifier
Is the SQL identifier for the XML schema collection.
' Schema Component '
Is the schema component to insert.

Remarks
Use the ALTER XML SCHEMA COLLECTION to add new XML schemas whose namespaces are not already in the
XML schema collection, or add new components to existing namespaces in the collection.
The following example adds a new <element> to the existing namespace http://MySchema/test_xml_schema in the
collection MyColl .

-- First create an XML schema collection.


CREATE XML SCHEMA COLLECTION MyColl AS '
<schema
xmlns="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://MySchema/test_xml_schema">
<element name="root" type="string"/>
</schema>'
-- Modify the collection.
ALTER XML SCHEMA COLLECTION MyColl ADD '
<schema xmlns="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://MySchema/test_xml_schema">
<element name="anotherElement" type="byte"/>
</schema>';

ALTER XML SCHEMA adds element <anotherElement> to the previously defined namespace
http://MySchema/test_xml_schema .
Note that if some of the components you want to add in the collection reference components that are already in
the same collection, you must use <import namespace="referenced_component_namespace" /> . However, it is not valid
to use the current schema namespace in <xsd:import> , and therefore components from the same target
namespace as the current schema namespace are automatically imported.
To remove collections, use DROP XML SCHEMA COLLECTION (Transact-SQL ).
If the schema collection already contains a lax validation wildcard or an element of type xs:anyType, adding a new
global element, type, or attribute declaration to the schema collection will cause a revalidation of all the stored data
that is constrained by the schema collection.

Permissions
To alter an XML SCHEMA COLLECTION requires ALTER permission on the collection.

Examples
A. Creating XML schema collection in the database
The following example creates the XML schema collection ManuInstructionsSchemaCollection . The collection has
only one schema namespace.

-- Create a sample database in which to load the XML schema collection.


CREATE DATABASE SampleDB;
GO
USE SampleDB;
GO
CREATE XML SCHEMA COLLECTION ManuInstructionsSchemaCollection AS
N'<?xml version="1.0" encoding="UTF-16"?>
<xsd:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-
works/ProductModelManuInstructions"
xmlns ="http://schemas.microsoft.com/sqlserver/2004/07/adventure-
works/ProductModelManuInstructions"
elementFormDefault="qualified"
attributeFormDefault="unqualified"
xmlns:xsd="http://www.w3.org/2001/XMLSchema" >

<xsd:complexType name="StepType" mixed="true" >


<xsd:choice minOccurs="0" maxOccurs="unbounded" >
<xsd:element name="tool" type="xsd:string" />
<xsd:element name="material" type="xsd:string" />
<xsd:element name="blueprint" type="xsd:string" />
<xsd:element name="specs" type="xsd:string" />
<xsd:element name="diag" type="xsd:string" />
</xsd:choice>
</xsd:complexType>

<xsd:element name="root">
<xsd:complexType mixed="true">
<xsd:sequence>
<xsd:element name="Location" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType mixed="true">
<xsd:sequence>
<xsd:element name="step" type="StepType" minOccurs="1" maxOccurs="unbounded" />
</xsd:sequence>
<xsd:attribute name="LocationID" type="xsd:integer" use="required"/>
<xsd:attribute name="SetupHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="MachineHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="LaborHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="LotSize" type="xsd:decimal" use="optional"/>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>' ;
GO
-- Verify - list of collections in the database.
SELECT *
FROM sys.xml_schema_collections;
-- Verify - list of namespaces in the database.
SELECT name
FROM sys.xml_schema_namespaces;

-- Use it. Create a typed xml variable. Note the collection name
-- that is specified.
DECLARE @x xml (ManuInstructionsSchemaCollection);
GO
--Or create a typed xml column.
CREATE TABLE T (
i int primary key,
x xml (ManuInstructionsSchemaCollection));
GO
-- Clean up.
DROP TABLE T;
GO
DROP XML SCHEMA COLLECTION ManuInstructionsSchemaCollection;
Go
USE master;
GO
DROP DATABASE SampleDB;

Alternatively, you can assign the schema collection to a variable and specify the variable in the
CREATE XML SCHEMA COLLECTION statement as follows:

DECLARE @MySchemaCollection nvarchar(max);


SET @MySchemaCollection = N' copy the schema collection here';
CREATE XML SCHEMA COLLECTION MyCollection AS @MySchemaCollection;

The variable in the example is of nvarchar(max) type. The variable can also be of xml data type, in which case, it is
implicitly converted to a string.
For more information, see View a Stored XML Schema Collection.
You can store schema collections in an xml type column. In this case, to create XML schema collection, perform the
following steps:
1. Retrieve the schema collection from the column by using a SELECT statement and assign it to a variable of
xml type, or a varchar type.
2. Specify the variable name in the CREATE XML SCHEMA COLLECTION statement.
The CREATE XML SCHEMA COLLECTION statement stores only the schema components that SQL Server
understands; not everything in the XML schema is stored in the database. Therefore, if you want the XML
schema collection back exactly the way it was supplied, we recommend that you save your XML schemas in
a database column or some other folder on your computer.
B. Specifying multiple schema namespaces in a schema collection
You can specify multiple XML schemas when you create an XML schema collection. For example:

CREATE XML SCHEMA COLLECTION N'


<xsd:schema>....</xsd:schema>
<xsd:schema>...</xsd:schema>';

The following example creates the XML schema collection ProductDescriptionSchemaCollection that includes two
XML schema namespaces.
CREATE XML SCHEMA COLLECTION ProductDescriptionSchemaCollection AS
'<xsd:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-
works/ProductModelWarrAndMain"
xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain"
elementFormDefault="qualified"
xmlns:xsd="http://www.w3.org/2001/XMLSchema" >
<xsd:element name="Warranty" >
<xsd:complexType>
<xsd:sequence>
<xsd:element name="WarrantyPeriod" type="xsd:string" />
<xsd:element name="Description" type="xsd:string" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
<xs:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-
works/ProductModelDescription"
xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription"
elementFormDefault="qualified"
xmlns:mstns="http://tempuri.org/XMLSchema.xsd"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:wm="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain" >
<xs:import
namespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain" />
<xs:element name="ProductDescription" type="ProductDescription" />
<xs:complexType name="ProductDescription">
<xs:sequence>
<xs:element name="Summary" type="Summary" minOccurs="0" />
</xs:sequence>
<xs:attribute name="ProductModelID" type="xs:string" />
<xs:attribute name="ProductModelName" type="xs:string" />
</xs:complexType>
<xs:complexType name="Summary" mixed="true" >
<xs:sequence>
<xs:any processContents="skip" namespace="http://www.w3.org/1999/xhtml" minOccurs="0"
maxOccurs="unbounded" />
</xs:sequence>
</xs:complexType>
</xs:schema>'
;
GO
-- Clean up
DROP XML SCHEMA COLLECTION ProductDescriptionSchemaCollection;
GO

C. Importing a schema that does not specify a target namespace


If a schema that does not contain a targetNamespace attribute is imported into a collection, its components are
associated with the empty string target namespace, as shown in the following example. Note that importing more
than one schema with no target namespace into the collection can result in multiple, potentially unrelated,
schema components being associated with the default empty string namespace.
-- Create a collection that contains a schema with no target namespace.
CREATE XML SCHEMA COLLECTION MySampleCollection AS '
<schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:ns="http://ns">
<element name="e" type="dateTime"/>
</schema>';
GO
-- This query returns the names of all collections that
-- contain a schema with no target namespace.
SELECT sys.xml_schema_collections.name
FROM sys.xml_schema_collections
JOIN sys.xml_schema_namespaces
ON sys.xml_schema_collections.xml_collection_id =
sys.xml_schema_namespaces.xml_collection_id
WHERE sys.xml_schema_namespaces.name='';

See Also
CREATE XML SCHEMA COLLECTION (Transact-SQL)
DROP XML SCHEMA COLLECTION (Transact-SQL)
EVENTDATA (Transact-SQL)
Compare Typed XML to Untyped XML
Requirements and Limitations for XML Schema Collections on the Server
BACKUP (Transact-SQL)
5/16/2018 • 39 min to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Backs up a complete SQL Server database to create a database backup, or one or more files or filegroups of the
database to create a file backup (BACKUP DATABASE). Also, under the full recovery model or bulk-logged
recovery model, backs up the transaction log of the database to create a log backup (BACKUP LOG).

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
Backing Up a Whole Database
BACKUP DATABASE { database_name | @database_name_var }
TO <backup_device> [ ,...n ]
[ <MIRROR TO clause> ] [ next-mirror-to ]
[ WITH { DIFFERENTIAL -- Not supported in SQL Database Managed Instance
| <general_WITH_options> [ ,...n ] } ]
[;]

Backing Up Specific Files or Filegroups


BACKUP DATABASE { database_name | @database_name_var }
<file_or_filegroup> [ ,...n ]
TO <backup_device> [ ,...n ]
[ <MIRROR TO clause> ] [ next-mirror-to ]
[ WITH { DIFFERENTIAL | <general_WITH_options> [ ,...n ] } ]
[;]

Creating a Partial Backup


BACKUP DATABASE { database_name | @database_name_var }
READ_WRITE_FILEGROUPS [ , <read_only_filegroup> [ ,...n ] ]
TO <backup_device> [ ,...n ]
[ <MIRROR TO clause> ] [ next-mirror-to ]
[ WITH { DIFFERENTIAL | <general_WITH_options> [ ,...n ] } ]
[;]

Backing Up the Transaction Log (full and bulk-logged recovery models)


BACKUP LOG -- Not supported in SQL Database Managed Instance
{ database_name | @database_name_var }
TO <backup_device> [ ,...n ]
[ <MIRROR TO clause> ] [ next-mirror-to ]
[ WITH { <general_WITH_options> | <log-specific_optionspec> } [ ,...n ] ]
[;]

<backup_device>::=
{
{ logical_device_name | @logical_device_name_var }
| { DISK -- Not supported in SQL Database Managed Instance
| TAPE -- Not supported in SQL Database Managed Instance
| URL } =
{ 'physical_device_name' | @physical_device_name_var | 'NUL' }
}

<MIRROR TO clause>::=
MIRROR TO <backup_device> [ ,...n ]

<file_or_filegroup>::=
{
FILE = { logical_file_name | @logical_file_name_var }
| FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
}

<read_only_filegroup>::=
FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }

<general_WITH_options> [ ,...n ]::=


--Backup Set Options
COPY_ONLY -- Only backup set option supported by SQL Database Managed Instance
| { COMPRESSION | NO_COMPRESSION }
| DESCRIPTION = { 'text' | @text_variable }
| NAME = { backup_set_name | @backup_set_name_var }
| CREDENTIAL
| ENCRYPTION
| FILE_SNAPSHOT --Not supported in SQL Database Managed Instance
| { EXPIREDATE = { 'date' | @date_var }
| RETAINDAYS = { days | @days_var } }

--Media Set Options


{ NOINIT | INIT }
| { NOSKIP | SKIP }
| { NOFORMAT | FORMAT }
| MEDIADESCRIPTION = { 'text' | @text_variable }
| MEDIANAME = { media_name | @media_name_variable }
| BLOCKSIZE = { blocksize | @blocksize_variable }

--Data Transfer Options


BUFFERCOUNT = { buffercount | @buffercount_variable }
| MAXTRANSFERSIZE = { maxtransfersize | @maxtransfersize_variable }

--Error Management Options


{ NO_CHECKSUM | CHECKSUM }
| { STOP_ON_ERROR | CONTINUE_AFTER_ERROR }

--Compatibility Options
RESTART

--Monitoring Options
STATS [ = percentage ]

--Tape Options. These are not supported in SQL Database Managed Instance
{ REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }

--Log-specific Options. These are not supported in SQL Database Managed Instance
{ NORECOVERY | STANDBY = undo_file_name }
| NO_TRUNCATE

--Encryption Options
ENCRYPTION (ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY } , encryptor_options )
<encryptor_options> ::=
SERVER CERTIFICATE = Encryptor_Name | SERVER ASYMMETRIC KEY = Encryptor_Name

Arguments
DATABASE
Specifies a complete database backup. If a list of files and filegroups is specified, only those files and filegroups are
backed up. During a full or differential database backup, SQL Server backs up enough of the transaction log to
produce a consistent database when the backup is restored.
When you restore a backup created by BACKUP DATABASE (a data backup), the entire backup is restored. Only a
log backup can be restored to a specific time or transaction within the backup.

NOTE
Only a full database backup can be performed on the master database.

LOG Applies to: SQL Server


Specifies a backup of the transaction log only. The log is backed up from the last successfully executed log backup
to the current end of the log. Before you can create the first log backup, you must create a full backup.
You can restore a log backup to a specific time or transaction within the backup by specifying WITH STOPAT ,
STOPATMARK , or STOPBEFOREMARK in your RESTORE LOG statement.
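For example, a minimal point-in-time restore sketch (the file name and timestamp are illustrative):

RESTORE LOG AdventureWorks2012
FROM DISK = 'Z:\SQLServerBackups\AdvWorksLog.trn'
WITH STOPAT = 'Apr 15, 2021 12:00 PM', RECOVERY;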

NOTE
After a typical log backup, some transaction log records become inactive, unless you specify WITH NO_TRUNCATE or
COPY_ONLY . The log is truncated after all the records within one or more virtual log files become inactive. If the log is not
being truncated after routine log backups, something might be delaying log truncation. For more information, see Factors
that can delay log truncation.

{ database_name | @database_name_var }
Is the database from which the transaction log, partial database, or complete database is backed up. If supplied as
a variable (@database_name_var), this name can be specified either as a string constant
(@database_name_var=database name) or as a variable of character string data type, except for the ntext or text
data types.

NOTE
The mirror database in a database mirroring partnership cannot be backed up.

<file_or_filegroup> [ ,...n ]
Used only with BACKUP DATABASE, specifies a database file or filegroup to include in a file backup, or specifies a
read-only file or filegroup to include in a partial backup.
FILE = { logical_file_name | @logical_file_name_var }
Is the logical name of a file or a variable whose value equates to the logical name of a file that is to be included in
the backup.
FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
Is the logical name of a filegroup or a variable whose value equates to the logical name of a filegroup that is to be
included in the backup. Under the simple recovery model, a filegroup backup is allowed only for a read-only
filegroup.

NOTE
Consider using file backups when the database size and performance requirements make a database backup impractical. The
NUL device can be used to test the performance of backups, but should not be used in production environments.

n
Is a placeholder that indicates that multiple files and filegroups can be specified in a comma-separated list. The
number is unlimited.
For more information, see Full File Backups (SQL Server) and Back Up Files and Filegroups (SQL Server).
READ_WRITE_FILEGROUPS [ , FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var } [ ,...n ]
]
Specifies a partial backup. A partial backup includes all the read/write files in a database: the primary filegroup
and any read/write secondary filegroups, and also any specified read-only files or filegroups.
READ_WRITE_FILEGROUPS
Specifies that all read/write filegroups be backed up in the partial backup. If the database is read-only,
READ_WRITE_FILEGROUPS includes only the primary filegroup.

IMPORTANT
Explicitly listing the read/write filegroups by using FILEGROUP instead of READ_WRITE_FILEGROUPS creates a file backup.

FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }


Is the logical name of a read-only filegroup or a variable whose value equates to the logical name of a read-only
filegroup that is to be included in the partial backup. For more information, see "<file_or_filegroup>," earlier in
this topic.
n
Is a placeholder that indicates that multiple read-only filegroups can be specified in a comma-separated list.
For more information about partial backups, see Partial Backups (SQL Server).
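As an illustration, a minimal partial-backup sketch (the Sales database and its read-only filegroup SalesArchiveGroup are assumed):

BACKUP DATABASE Sales
READ_WRITE_FILEGROUPS,
FILEGROUP = 'SalesArchiveGroup'
TO DISK = 'Z:\SQLServerBackups\SalesPartial.bak';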
TO <backup_device> [ ,...n ] Indicates that the accompanying set of backup devices is either an unmirrored media
set or the first of the mirrors within a mirrored media set (for which one or more MIRROR TO clauses are
declared).
<backup_device> Applies to: SQL Server Specifies a logical or physical backup device to use for the backup
operation.
{ logical_device_name | @logical_device_name_var } Applies to: SQL Server
Is the logical name of the backup device to which the database is backed up. The logical name must follow the
rules for identifiers. If supplied as a variable (@logical_device_name_var), the backup device name can be
specified either as a string constant (@logical_device_name_var= logical backup device name) or as a variable of
any character string data type except for the ntext or text data types.
{ DISK | TAPE | URL } = { 'physical_device_name' | @physical_device_name_var | 'NUL' } Applies to: DISK, TAPE,
and URL apply to SQL Server. Only URL applies to SQL Database Managed Instance Specifies a disk file or tape
device, or a Windows Azure Blob storage service. The URL format is used for creating backups to the Windows
Azure storage service. For more information and examples, see SQL Server Backup and Restore with Microsoft
Azure Blob Storage Service. For a tutorial, see Tutorial: SQL Server Backup and Restore to Windows Azure Blob
Storage Service.

NOTE
The NUL disk device will discard all information sent to it and should only be used for testing. This is not for production use.
IMPORTANT
Starting with SQL Server 2012 (11.x) SP1 CU2 through SQL Server 2014 (12.x), you can only back up to a single device when
backing up to URL. In order to back up to multiple devices when backing up to URL, you must use SQL Server 2016 (13.x)
through SQL Server 2017 and you must use Shared Access Signature (SAS) tokens. For examples creating a Shared Access
Signature, see SQL Server Backup to URL and Simplifying creation of SQL Credentials with Shared Access Signature (SAS)
tokens on Azure Storage with Powershell.

URL applies to: SQL Server ( SQL Server 2012 (11.x) SP1 CU2 through SQL Server 2017) and SQL Database
Managed Instance.
A disk device does not have to exist before it is specified in a BACKUP statement. If the physical device exists and
the INIT option is not specified in the BACKUP statement, the backup is appended to the device.

NOTE
The NUL device will discard all input sent to this file; however, the backup will still mark all pages as backed up.

For more information, see Backup Devices (SQL Server).

NOTE
The TAPE option will be removed in a future version of SQL Server. Avoid using this feature in new development work, and
plan to modify applications that currently use this feature.

n
Is a placeholder that indicates that up to 64 backup devices may be specified in a comma-separated list.
MIRROR TO <backup_device> [ ,...n ] Specifies a set of up to three secondary backup devices, each of which
mirrors the backup devices specified in the TO clause. The MIRROR TO clause must specify the same type and
number of backup devices as the TO clause. The maximum number of MIRROR TO clauses is three.
This option is available only in the Enterprise edition of SQL Server.

NOTE
For MIRROR TO = DISK, BACKUP automatically determines the appropriate block size for disk devices. For more information
about block size, see "BLOCKSIZE" later in this table.

<backup_device> See "<backup_device>," earlier in this section.


n
Is a placeholder that indicates that up to 64 backup devices may be specified in a comma-separated list. The
number of devices in the MIRROR TO clause must equal the number of devices in the TO clause.
For more information, see "Media Families in Mirrored Media Sets" in the Remarks section, later in this topic.
[ next-mirror-to ]
Is a placeholder that indicates that a single BACKUP statement can contain up to three MIRROR TO clauses, in
addition to the single TO clause.
WITH Options
Specifies options to be used with a backup operation.
CREDENTIAL
Applies to: SQL Server ( SQL Server 2012 (11.x) SP1 CU2 through SQL Server 2017) and SQL Database
Managed Instance.
Used only when creating a backup to the Windows Azure Blob storage service.
FILE_SNAPSHOT Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server 2017).
Used to create an Azure snapshot of the database files when all of the SQL Server database files are stored using
the Azure Blob storage service. For more information, see SQL Server Data Files in Microsoft Azure. SQL Server
Snapshot Backup takes Azure snapshots of the database files (data and log files) at a consistent state. A consistent
set of Azure snapshots make up a backup and are recorded in the backup file. The only difference between
BACKUP DATABASE TO URL WITH FILE_SNAPSHOT and BACKUP LOG TO URL WITH FILE_SNAPSHOT is that the latter also
truncates the transaction log while the former does not. With SQL Server Snapshot Backup, after the initial full
backup that is required by SQL Server to establish the backup chain, only a single transaction log backup is
required to restore a database to the point in time of the transaction log backup. Furthermore, only two
transaction log backups are required to restore a database to a point in time between the time of the two
transaction log backups.
DIFFERENTIAL
Applies to: SQL Server Used only with BACKUP DATABASE, specifies that the database or file backup should
consist only of the portions of the database or file changed since the last full backup. A differential backup usually
takes up less space than a full backup. Use this option so that all individual log backups performed since the last
full backup do not have to be applied.

NOTE
By default, BACKUP DATABASE creates a full backup.

For more information, see Differential Backups (SQL Server).


ENCRYPTION
Used to specify encryption for a backup. You can specify an encryption algorithm to encrypt the backup with or
specify NO_ENCRYPTION to not have the backup encrypted. Encryption is recommended practice to help secure
backup files. The list of algorithms you can specify are:
AES_128
AES_192
AES_256
TRIPLE_DES_3KEY
NO_ENCRYPTION

If you choose to encrypt you will also have to specify the encryptor using the encryptor options:
SERVER CERTIFICATE = Encryptor_Name
SERVER ASYMMETRIC KEY = Encryptor_Name

WARNING
When encryption is used in conjunction with the FILE_SNAPSHOT argument, the metadata file itself is encrypted using the
specified encryption algorithm and the system verifies that Transparent Data Encryption (TDE) was completed for the
database. No additional encryption happens for the data itself. The backup fails if the database was not encrypted or if the
encryption was not completed before the backup statement was issued.
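For illustration, a minimal sketch of an encrypted backup; MyBackupCert is an assumed server certificate that must already exist in the master database:

BACKUP DATABASE AdventureWorks2012
TO DISK = 'Z:\SQLServerBackups\AdvWorks_Encrypted.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = MyBackupCert);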

Backup Set Options


These options operate on the backup set that is created by this backup operation.

NOTE
To specify a backup set for a restore operation, use the FILE = <backup_set_file_number> option. For more information
about how to specify a backup set, see "Specifying a Backup Set" in RESTORE Arguments (Transact-SQL).

COPY_ONLY Applies to: SQL Server and SQL Database Managed Instance Specifies that the backup is a copy-
only backup, which does not affect the normal sequence of backups. A copy-only backup is created independently
of your regularly scheduled, conventional backups. A copy-only backup does not affect your overall backup and
restore procedures for the database.
Copy-only backups should be used in situations in which a backup is taken for a special purpose, such as backing
up the log before an online file restore. Typically, a copy-only log backup is used once and then deleted.
When used with BACKUP DATABASE , the COPY_ONLY option creates a full backup that cannot serve as a
differential base. The differential bitmap is not updated, and differential backups behave as if the copy-only
backup does not exist. Subsequent differential backups use the most recent conventional full backup as
their base.

IMPORTANT
If DIFFERENTIAL and COPY_ONLY are used together, COPY_ONLY is ignored, and a differential backup is created.

When used with BACKUP LOG , the COPY_ONLY option creates a copy-only log backup, which does not
truncate the transaction log. The copy-only log backup has no effect on the log chain, and other log
backups behave as if the copy-only backup does not exist.
For more information, see Copy-Only Backups (SQL Server).
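For example, a minimal copy-only sketch (the path is illustrative):

BACKUP DATABASE AdventureWorks2012
TO DISK = 'Z:\SQLServerBackups\AdvWorks_CopyOnly.bak'
WITH COPY_ONLY;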
{ COMPRESSION | NO_COMPRESSION }
In SQL Server 2008 Enterprise and later versions only, specifies whether backup compression is performed on
this backup, overriding the server-level default.
At installation, the default behavior is no backup compression. But this default can be changed by setting the
backup compression default server configuration option. For information about viewing the current value of this
option, see View or Change Server Properties (SQL Server).
For information about using backup compression with Transparent Data Encryption (TDE ) enabled databases, see
the Remarks section.
COMPRESSION
Explicitly enables backup compression.
NO_COMPRESSION
Explicitly disables backup compression.
DESCRIPTION = { 'text' | @text_variable }
Specifies the free-form text describing the backup set. The string can have a maximum of 255 characters.
NAME = { backup_set_name | @backup_set_name_var }
Specifies the name of the backup set. Names can have a maximum of 128 characters. If NAME is not specified, it
is blank.
{ EXPIREDATE ='date' | RETAINDAYS = days }
Specifies when the backup set for this backup can be overwritten. If these options are both used, RETAINDAYS
takes precedence over EXPIREDATE.
If neither option is specified, the expiration date is determined by the media retention configuration setting. For
more information, see Server Configuration Options (SQL Server).

IMPORTANT
These options only prevent SQL Server from overwriting a file. Tapes can be erased using other methods, and disk files can
be deleted through the operating system. For more information about expiration verification, see SKIP and FORMAT in this
topic.

EXPIREDATE = { 'date' | @date_var }


Specifies when the backup set expires and can be overwritten. If supplied as a variable (@date_var), this date must
follow the configured system datetime format and be specified as one of the following:
A string constant (@date_var = date)
A variable of character string data type (except for the ntext or text data types)
A smalldatetime
A datetime variable
For example:
'Dec 31, 2020 11:59 PM'
'1/1/2021'

For information about how to specify datetime values, see Date and Time Types.

NOTE
To ignore the expiration date, use the SKIP option.

RETAINDAYS = { days | @days_var }


Specifies the number of days that must elapse before this backup media set can be overwritten. If supplied as a
variable (@days_var), it must be specified as an integer.
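For instance, a minimal sketch (the path and retention period are illustrative):

BACKUP DATABASE AdventureWorks2012
TO DISK = 'Z:\SQLServerBackups\AdvWorks.bak'
WITH RETAINDAYS = 30;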
Media Set Options
These options operate on the media set as a whole.
{ NOINIT | INIT }
Controls whether the backup operation appends to or overwrites the existing backup sets on the backup media.
The default is to append to the most recent backup set on the media (NOINIT).

NOTE
For information about the interactions between { NOINIT | INIT } and { NOSKIP | SKIP }, see Remarks later in this topic.

NOINIT
Indicates that the backup set is appended to the specified media set, preserving existing backup sets. If a media
password is defined for the media set, the password must be supplied. NOINIT is the default.
For more information, see Media Sets, Media Families, and Backup Sets (SQL Server).
INIT
Specifies that all backup sets should be overwritten, but preserves the media header. If INIT is specified, any
existing backup set on that device is overwritten, if conditions permit. By default, BACKUP checks for the
following conditions and does not overwrite the backup media if either condition exists:
Any backup set has not yet expired. For more information, see the EXPIREDATE and RETAINDAYS options.
The backup set name given in the BACKUP statement, if provided, does not match the name on the backup
media. For more information, see the NAME option, earlier in this section.
To override these checks, use the SKIP option.
For more information, see Media Sets, Media Families, and Backup Sets (SQL Server).
{ NOSKIP | SKIP }
Controls whether a backup operation checks the expiration date and time of the backup sets on the media before
overwriting them.

NOTE
For information about the interactions between { NOINIT | INIT } and { NOSKIP | SKIP }, see "Remarks," later in this topic.

NOSKIP
Instructs the BACKUP statement to check the expiration date of all backup sets on the media before allowing
them to be overwritten. This is the default behavior.
SKIP
Disables the checking of backup set expiration and name that is usually performed by the BACKUP statement to
prevent overwrites of backup sets. For information about the interactions between { INIT | NOINIT } and {
NOSKIP | SKIP }, see "Remarks," later in this topic.
To view the expiration dates of backup sets, query the expiration_date column of the backupset history table.
{ NOFORMAT | FORMAT }
Specifies whether the media header should be written on the volumes used for this backup operation, overwriting
any existing media header and backup sets.
NOFORMAT
Specifies that the backup operation preserves the existing media header and backup sets on the media volumes
used for this backup operation. This is the default behavior.
FORMAT
Specifies that a new media set be created. FORMAT causes the backup operation to write a new media header on
all media volumes used for the backup operation. The existing contents of the volume become invalid, because
any existing media header and backup sets are overwritten.

IMPORTANT
Use FORMAT carefully. Formatting any volume of a media set renders the entire media set unusable. For example, if you
initialize a single tape belonging to an existing striped media set, the entire media set is rendered useless.

Specifying FORMAT implies SKIP ; SKIP does not need to be explicitly stated.
MEDIADESCRIPTION = { text | @text_variable }
Specifies the free-form text description, maximum of 255 characters, of the media set.
MEDIANAME = { media_name | @media_name_variable }
Specifies the media name for the entire backup media set. The media name must be no longer than 128
characters. If MEDIANAME is specified, it must match the previously specified media name already existing on the
backup volumes. If it is not specified, or if the SKIP option is specified, there is no verification check of the media
name.
BLOCKSIZE = { blocksize | @blocksize_variable }
Specifies the physical block size, in bytes. The supported sizes are 512, 1024, 2048, 4096, 8192, 16384, 32768,
and 65536 (64 KB) bytes. The default is 65536 for tape devices and 512 otherwise. Typically, this option is
unnecessary because BACKUP automatically selects a block size that is appropriate to the device. Explicitly stating
a block size overrides the automatic selection of block size.
If you are taking a backup that you plan to copy onto and restore from a CD -ROM, specify BLOCKSIZE=2048.

NOTE
This option typically affects performance only when writing to tape devices.

Data Transfer Options


BUFFERCOUNT = { buffercount | @buffercount_variable }
Specifies the total number of I/O buffers to be used for the backup operation. You can specify any positive integer;
however, large numbers of buffers might cause "out of memory" errors because of inadequate virtual address
space in the Sqlservr.exe process.
The total space used by the buffers is determined by: buffercount * maxtransfersize.

NOTE
For important information about using the BUFFERCOUNT option, see the Incorrect BufferCount data transfer option can
lead to OOM condition blog.

MAXTRANSFERSIZE = { maxtransfersize | @maxtransfersize_variable }
Specifies the largest unit of transfer in bytes to be used between SQL Server and the backup media. The possible
values are multiples of 65536 bytes (64 KB) ranging up to 4194304 bytes (4 MB).

NOTE
When creating backups by using the SQL Writer Service, if the database has configured FILESTREAM, or includes memory
optimized filegroups, then the MAXTRANSFERSIZE at the time of a restore should be greater than or equal to the
MAXTRANSFERSIZE that was used when the backup was created.

NOTE
For Transparent Data Encryption (TDE) enabled databases with a single data file, the default MAXTRANSFERSIZE is 65536 (64
KB). For non-TDE encrypted databases the default MAXTRANSFERSIZE is 1048576 (1 MB) when using backup to DISK, and
65536 (64 KB) when using VDI or TAPE. For more information about using backup compression with TDE encrypted
databases, see the Remarks section.
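As a sketch, both data transfer options combined (the values are illustrative, not recommendations):

BACKUP DATABASE AdventureWorks2012
TO DISK = 'Z:\SQLServerBackups\AdvWorks.bak'
WITH BUFFERCOUNT = 50, MAXTRANSFERSIZE = 4194304; -- 4 MB transfers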

Error Management Options


These options allow you to determine whether backup checksums are enabled for the backup operation and
whether the operation stops on encountering an error.
{ NO_CHECKSUM | CHECKSUM }
Controls whether backup checksums are enabled.
NO_CHECKSUM
Explicitly disables the generation of backup checksums (and the validation of page checksums). This is the default
behavior.
CHECKSUM
Specifies that the backup operation verifies each page for checksum and torn page, if enabled and available, and
generates a checksum for the entire backup.
Using backup checksums may affect workload and backup throughput.
For more information, see Possible Media Errors During Backup and Restore (SQL Server).
{ STOP_ON_ERROR | CONTINUE_AFTER_ERROR }
Controls whether a backup operation stops or continues after encountering a page checksum error.
STOP_ON_ERROR
Instructs BACKUP to fail if a page checksum does not verify. This is the default behavior.
CONTINUE_AFTER_ERROR
Instructs BACKUP to continue despite encountering errors such as invalid checksums or torn pages.
If you are unable to back up the tail of the log using the NO_TRUNCATE option when the database is damaged,
you can attempt a tail-log log backup by specifying CONTINUE_AFTER_ERROR instead of NO_TRUNCATE.
For more information, see Possible Media Errors During Backup and Restore (SQL Server).
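For example, a minimal sketch that validates page checksums but continues past damaged pages (the path is illustrative):

BACKUP DATABASE AdventureWorks2012
TO DISK = 'Z:\SQLServerBackups\AdvWorks.bak'
WITH CHECKSUM, CONTINUE_AFTER_ERROR;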
Compatibility Options
RESTART
Beginning with SQL Server 2008, has no effect. This option is accepted by the version for compatibility with
previous versions of SQL Server.
Monitoring Options
STATS [ = percentage ]
Displays a message each time another percentage completes, and is used to gauge progress. If percentage is
omitted, SQL Server displays a message after each 10 percent is completed.
The STATS option reports the percentage complete as of the threshold for reporting the next interval. This is at
approximately the specified percentage; for example, with STATS=10, if the amount completed is 40 percent, the
option might display 43 percent. For large backup sets, this is not a problem, because the percentage complete
moves very slowly between completed I/O calls.
Tape Options
Applies to: SQL Server
These options are used only for TAPE devices. If a nontape device is being used, these options are ignored.
{ REWIND | NOREWIND }
REWIND Applies to: SQL Server Specifies that SQL Server releases and rewinds the tape. REWIND is the
default.
NOREWIND Applies to: SQL Server Specifies that SQL Server will keep the tape open after the backup
operation. You can use this option to help improve performance when performing multiple backup operations to a
tape.
NOREWIND implies NOUNLOAD, and these options are incompatible within a single BACKUP statement.
NOTE
If you use NOREWIND , the instance of SQL Server retains ownership of the tape drive until a BACKUP or RESTORE statement
that is running in the same process uses either the REWIND or UNLOAD option, or the server instance is shut down.
Keeping the tape open prevents other processes from accessing the tape. For information about how to display a list of
open tapes and to close an open tape, see Backup Devices (SQL Server).

{ UNLOAD | NOUNLOAD }
Applies to: SQL Server

NOTE
UNLOAD and NOUNLOAD are session settings that persist for the life of the session or until reset by specifying the
alternative.

UNLOAD Applies to: SQL Server


Specifies that the tape is automatically rewound and unloaded when the backup is finished. UNLOAD is the
default when a session begins.
NOUNLOAD Applies to: SQL Server Specifies that after the BACKUP operation the tape remains loaded on the
tape drive.

NOTE
For a backup to a tape backup device, the BLOCKSIZE option affects the performance of the backup operation. This
option typically affects performance only when writing to tape devices.

Log-specific options
Applies to: SQL Server
These options are only used with BACKUP LOG .

NOTE
If you do not want to take log backups, use the simple recovery model. For more information, see Recovery Models (SQL
Server).

{ NORECOVERY | STANDBY = undo_file_name }


NORECOVERY Applies to: SQL Server
Backs up the tail of the log and leaves the database in the RESTORING state. NORECOVERY is useful when
failing over to a secondary database or when saving the tail of the log before a RESTORE operation.
To perform a best-effort log backup that skips log truncation and then take the database into the RESTORING
state atomically, use the NO_TRUNCATE and NORECOVERY options together.
STANDBY = standby_file_name Applies to: SQL Server
Backs up the tail of the log and leaves the database in a read-only and STANDBY state. The STANDBY clause
writes standby data (performing rollback, but with the option of further restores). Using the STANDBY option is
equivalent to BACKUP LOG WITH NORECOVERY followed by a RESTORE WITH STANDBY.
Using standby mode requires a standby file, specified by standby_file_name, whose location is stored in the log of
the database. If the specified file already exists, the Database Engine overwrites it; if the file does not exist, the
Database Engine creates it. The standby file becomes part of the database.
This file holds the rolled back changes, which must be reversed if RESTORE LOG operations are to be
subsequently applied. There must be enough disk space for the standby file to grow so that it can contain all the
distinct pages from the database that were modified by rolling back uncommitted transactions.
NO_TRUNCATE
Applies to: SQL Server
Specifies that the log is not truncated and causes the Database Engine to attempt the backup regardless of the
state of the database. Consequently, a backup taken with NO_TRUNCATE might have incomplete metadata. This
option allows backing up the log in situations where the database is damaged.
The NO_TRUNCATE option of BACKUP LOG is equivalent to specifying both COPY_ONLY and
CONTINUE_AFTER_ERROR.
Without the NO_TRUNCATE option, the database must be in the ONLINE state. If the database is in the
SUSPENDED state, you might be able to create a backup by specifying NO_TRUNCATE . But if the database is in the
OFFLINE or EMERGENCY state, BACKUP is not allowed even with NO_TRUNCATE . For information about database
states, see Database States.
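For illustration, a best-effort tail-log sketch for a damaged database (the path is illustrative):

BACKUP LOG AdventureWorks2012
TO DISK = 'Z:\SQLServerBackups\AdvWorks_TailLog.trn'
WITH NO_TRUNCATE, NORECOVERY;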

About working with SQL Server backups


This section introduces the following essential backup concepts:
Backup Types
Transaction Log Truncation
Formatting Backup Media
Working with Backup Devices and Media Sets
Restoring SQL Server Backups

NOTE
For an introduction to backup in SQL Server, see Backup Overview (SQL Server).

Backup types
The supported backup types depend on the recovery model of the database, as follows:
All recovery models support full and differential backups of data.

Whole database: Database backups cover the whole database. Optionally, each database backup can serve
as the base of a series of one or more differential database backups.

Partial database: Partial backups cover read/write filegroups and, possibly, one or more read-only files or
filegroups. Optionally, each partial backup can serve as the base of a series of one or more differential
partial backups.

File or filegroup: File backups cover one or more files or filegroups, and are relevant only for databases
that contain multiple filegroups. Under the simple recovery model, file backups are essentially restricted
to read-only secondary filegroups. Optionally, each file backup can serve as the base of a series of one or
more differential file backups.

Under the full recovery model or bulk-logged recovery model, conventional backups also include
sequential transaction log backups (or log backups), which are required. Each log backup covers the
portion of the transaction log that was active when the backup was created, and it includes all log records
not backed up in a previous log backup.
To minimize work-loss exposure, at the cost of administrative overhead, you should schedule frequent log
backups. Scheduling differential backups between full backups can reduce restore time by reducing the
number of log backups you have to restore after restoring the data.
We recommend that you put log backups on a separate volume than the database backups.

NOTE
Before you can create the first log backup, you must create a full backup.

A copy-only backup is a special-purpose full backup or log backup that is independent of the normal
sequence of conventional backups. To create a copy-only backup, specify the COPY_ONLY option in your
BACKUP statement. For more information, see Copy-Only Backups (SQL Server).
Transaction Log Truncation
To avoid filling up the transaction log of a database, routine backups are essential. Under the simple recovery
model, log truncation occurs automatically after you back up the database, and under the full recovery model,
after you back up the transaction log. However, sometimes the truncation process can be delayed. For information
about factors that can delay log truncation, see The Transaction Log (SQL Server).

NOTE
The BACKUP LOG WITH NO_LOG and WITH TRUNCATE_ONLY options have been discontinued. If you are using the full or bulk-
logged recovery model recovery and you must remove the log backup chain from a database, switch to the simple recovery
model. For more information, see View or Change the Recovery Model of a Database (SQL Server).

Formatting Backup Media


Backup media is formatted by a BACKUP statement if and only if any of the following is true:
The FORMAT option is specified.
The media is empty.
The operation is writing a continuation tape.
Working with backup devices and media sets
Backup devices in a striped media set (a stripe set)
A stripe set is a set of disk files on which data is divided into blocks and distributed in a fixed order. The number of
backup devices used in a stripe set must stay the same (unless the media is reinitialized with FORMAT ).
The following example writes a backup of the AdventureWorks2012 database to a new striped media set that
uses three disk files.
BACKUP DATABASE AdventureWorks2012
TO DISK='X:\SQLServerBackups\AdventureWorks1.bak',
DISK='Y:\SQLServerBackups\AdventureWorks2.bak',
DISK='Z:\SQLServerBackups\AdventureWorks3.bak'
WITH FORMAT,
MEDIANAME = 'AdventureWorksStripedSet0',
MEDIADESCRIPTION = 'Striped media set for AdventureWorks2012 database';
GO

After a backup device is defined as part of a stripe set, it cannot be used for a single-device backup unless
FORMAT is specified. Similarly, a backup device that contains nonstriped backups cannot be used in a stripe set
unless FORMAT is specified. To split a striped backup set, use FORMAT.
If neither MEDIANAME nor MEDIADESCRIPTION is specified when a media header is written, the media header
field corresponding to the blank item is empty.
Working with a mirrored media set
Typically, backups are unmirrored, and BACKUP statements simply include a TO clause. However, a total of four
mirrors is possible per media set. For a mirrored media set, the backup operation writes to multiple groups of
backup devices. Each group of backup devices comprises a single mirror within the mirrored media set. Every
mirror must use the same quantity and type of physical backup devices, which must all have the same properties.
To back up to a mirrored media set, all of the mirrors must be present. To back up to a mirrored media set, specify
the TO clause to specify the first mirror, and specify a MIRROR TO clause for each additional mirror.
For a mirrored media set, each MIRROR TO clause must list the same number and type of devices as the TO clause.
The following example writes to a mirrored media set that contains two mirrors and uses three devices per mirror:

BACKUP DATABASE AdventureWorks2012


TO DISK='X:\SQLServerBackups\AdventureWorks1a.bak',
DISK='Y:\SQLServerBackups\AdventureWorks2a.bak',
DISK='Z:\SQLServerBackups\AdventureWorks3a.bak'
MIRROR TO DISK='X:\SQLServerBackups\AdventureWorks1b.bak',
DISK='Y:\SQLServerBackups\AdventureWorks2b.bak',
DISK='Z:\SQLServerBackups\AdventureWorks3b.bak';
GO

IMPORTANT
This example is designed to allow you to test it on your local system. In practice, backing up to multiple devices on the same
drive would hurt performance and would eliminate the redundancy for which mirrored media sets are designed.

Media families in mirrored media sets

Each backup device specified in the TO clause of a BACKUP statement corresponds to a media family. For
example, if the TO clause lists three devices, BACKUP writes data to three media families. In a mirrored media
set, every mirror must contain a copy of every media family. This is why the number of devices must be identical
in every mirror.
When multiple devices are listed for each mirror, the order of the devices determines which media family is
written to a particular device. For example, in each of the device lists, the second device corresponds to the second
media family. For the devices in the above example, the correspondence between devices and media families is
shown in the following table.

MIRROR   MEDIA FAMILY 1             MEDIA FAMILY 2             MEDIA FAMILY 3

0        X:\AdventureWorks1a.bak    Y:\AdventureWorks2a.bak    Z:\AdventureWorks3a.bak

1        X:\AdventureWorks1b.bak    Y:\AdventureWorks2b.bak    Z:\AdventureWorks3b.bak

A media family must always be backed up onto the same device within a specific mirror. Therefore, each time you
use an existing media set, list the devices of each mirror in the same order as they were specified when the media
set was created.
For more information about mirrored media sets, see Mirrored Backup Media Sets (SQL Server). For more
information about media sets and media families in general, see Media Sets, Media Families, and Backup Sets
(SQL Server).
Restoring SQL Server backups
To restore a database and, optionally, recover it to bring it online, or to restore a file or filegroup, use either the
Transact-SQL RESTORE statement or the SQL Server Management Studio Restore tasks. For more information
see Restore and Recovery Overview (SQL Server).
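For example, a minimal restore sketch (the path is illustrative; WITH RECOVERY brings the database online):

RESTORE DATABASE AdventureWorks2012
FROM DISK = 'Z:\SQLServerBackups\AdvWorksData.bak'
WITH RECOVERY;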

Additional considerations about BACKUP options


Interaction of SKIP, NOSKIP, INIT, and NOINIT
The following describes the interactions between the { NOINIT | INIT } and { NOSKIP | SKIP } options.

NOTE
If the tape media is empty or the disk backup file does not exist, all these interactions write a media header and proceed. If
the media is not empty and lacks a valid media header, these operations give feedback stating that this is not valid MTF
media, and they terminate the backup operation.

NOINIT with NOSKIP
If the volume contains a valid media header, verifies that the media name matches the given MEDIANAME, if
any. If it matches, appends the backup set, preserving all existing backup sets. If the volume does not
contain a valid media header, an error occurs.

INIT with NOSKIP
If the volume contains a valid media header, performs the following checks: if MEDIANAME was specified,
verifies that the given media name matches the media header's media name (1); verifies that there are no
unexpired backup sets already on the media, and if there are, terminates the backup. If these checks pass,
overwrites any backup sets on the media, preserving only the media header. If the volume does not contain
a valid media header, generates one using the specified MEDIANAME and MEDIADESCRIPTION, if any.

NOINIT with SKIP
If the volume contains a valid media header, appends the backup set, preserving all existing backup sets.

INIT with SKIP
If the volume contains a valid (2) media header, overwrites any backup sets on the media, preserving only
the media header. If the media is empty, generates a media header using the specified MEDIANAME and
MEDIADESCRIPTION, if any.

1 The user must belong to the appropriate fixed database or server roles to perform a backup operation.
2 Validity includes the MTF version number and other header information. If the version specified is unsupported
or an unexpected value, an error occurs.

Compatibility
Caution

Backups that are created by a more recent version of SQL Server cannot be restored in earlier versions of SQL
Server.
BACKUP supports the RESTART option to provide backward compatibility with earlier versions of SQL Server. But
RESTART has no effect.

General remarks
Database or log backups can be appended to any disk or tape device, allowing a database and its transaction logs
to be kept within one physical location.
The BACKUP statement is not allowed in an explicit or implicit transaction.
Cross-platform backup operations, even between different processor types, can be performed as long as the
collation of the database is supported by the operating system.
When using backup compression with Transparent Data Encryption (TDE) enabled databases with a single data
file, it is recommended to use a MAXTRANSFERSIZE setting larger than 65536 (64 KB).
Starting with SQL Server 2016 (13.x), this enables an optimized compression algorithm for TDE encrypted
databases that first decrypts a page, compresses it, and then encrypts it again. If using MAXTRANSFERSIZE = 65536
(64 KB), backup compression with TDE encrypted databases directly compresses the encrypted pages, and may
not yield good compression ratios. For more information, see Backup Compression for TDE-enabled Databases.

NOTE
There are some cases where the default MAXTRANSFERSIZE is greater than 64K:
When the database has multiple data files created, it uses MAXTRANSFERSIZE > 64K
When performing backup to URL, the default MAXTRANSFERSIZE = 1048576 (1MB)

Even if one of these conditions applies, you must explicitly set MAXTRANSFERSIZE greater than 64K in your backup
command in order to get the new backup compression algorithm.

By default, every successful backup operation adds an entry in the SQL Server error log and in the system event
log. If you back up the log very frequently, these success messages accumulate quickly, resulting in huge error logs
that can make finding other messages difficult. In such cases you can suppress these log entries by using trace flag
3226, if none of your scripts depend on those entries. For more information, see Trace Flags (Transact-SQL).
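For instance, the trace flag can be enabled globally on a running instance as follows (a sketch; adding -T3226 as a startup parameter makes the setting persistent):

DBCC TRACEON (3226, -1);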
Interoperability
SQL Server uses an online backup process to allow a database backup while the database is still in use. During a
backup, most operations are possible; for example, INSERT, UPDATE, or DELETE statements are allowed during a
backup operation.
Operations that cannot run during a database or transaction log backup include:
File management operations such as the ALTER DATABASE statement with either the ADD FILE or
REMOVE FILE options.

Shrink database or shrink file operations. This includes auto-shrink operations.


If a backup operation overlaps with a file-management or shrink operation, a conflict arises. Regardless of which
of the conflicting operations began first, the second operation waits for the lock set by the first operation to time
out (the time-out period is controlled by a session timeout setting). If the lock is released during the time-out
period, the second operation continues. If the lock times out, the second operation fails.

Limitations for SQL Database Managed Instance


SQL Database Managed Instance can back up a database to a backup with up to 32 stripes, which is enough for
databases up to 4 TB if backup compression is used.
The maximum backup stripe size is 195 GB (the maximum blob size). Increase the number of stripes in the backup
command to reduce individual stripe size and stay within this limit.

NOTE
To work around this limitation on-premises, back up to DISK instead of to URL, upload the backup file to blob storage, then
restore. Restore supports bigger files because a different blob type is used.

Metadata

SQL Server includes the following backup history tables that track backup activity:
backupfile (Transact-SQL )
backupfilegroup (Transact-SQL )
backupmediafamily (Transact-SQL )
backupmediaset (Transact-SQL )
backupset (Transact-SQL )
When a restore is performed, if the backup set was not already recorded in the msdb database, the backup history
tables might be modified.

Security
Beginning with SQL Server 2012 (11.x), the PASSWORD and MEDIAPASSWORD options are discontinued for creating
backups. It is still possible to restore backups created with passwords.
Permissions
BACKUP DATABASE and BACKUP LOG permissions default to members of the sysadmin fixed server role and
the db_owner and db_backupoperator fixed database roles.
Ownership and permission problems on the backup device's physical file can interfere with a backup operation.
SQL Server must be able to read and write to the device; the account under which the SQL Server service runs
must have write permissions. However, sp_addumpdevice, which adds an entry for a backup device in the system
tables, does not check file access permissions. Such problems on the backup device's physical file may not appear
until the physical resource is accessed when the backup or restore is attempted.

Examples
This section contains the following examples:
A. Backing up a complete database
B. Backing up the database and log
C. Creating a full file backup of the secondary filegroups
D. Creating a differential file backup of the secondary filegroups
E. Creating and backing up to a single-family mirrored media set
F. Creating and backing up to a multifamily mirrored media set
G. Backing up to an existing mirrored media set
H. Creating a compressed backup in a new media set
I. Backing up to the Microsoft Azure Blob storage service

NOTE
The backup how-to topics contain additional examples. For more information, see Backup Overview (SQL Server).

A. Backing up a complete database


The following example backs up the AdventureWorks2012 database to a disk file.

BACKUP DATABASE AdventureWorks2012


TO DISK = 'Z:\SQLServerBackups\AdvWorksData.bak'
WITH FORMAT;
GO

B. Backing up the database and log


The following example backs up the AdventureWorks2012 sample database, which uses the simple recovery
model by default. To support log backups, the AdventureWorks2012 database is modified to use the full
recovery model.
Next, the example uses sp_addumpdevice to create a logical backup device for backing up data, AdvWorksData , and
creates another logical backup device for backing up the log, AdvWorksLog .
The example then creates a full database backup to AdvWorksData , and after a period of update activity, backs up
the log to AdvWorksLog .
-- To permit log backups, before the full database backup, modify the database
-- to use the full recovery model.
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL;
GO
-- Create AdvWorksData and AdvWorksLog logical backup devices.
USE master
GO
EXEC sp_addumpdevice 'disk', 'AdvWorksData',
'Z:\SQLServerBackups\AdvWorksData.bak';
GO
EXEC sp_addumpdevice 'disk', 'AdvWorksLog',
'X:\SQLServerBackups\AdvWorksLog.bak';
GO

-- Back up the full AdventureWorks2012 database.


BACKUP DATABASE AdventureWorks2012 TO AdvWorksData;
GO
-- Back up the AdventureWorks2012 log.
BACKUP LOG AdventureWorks2012
TO AdvWorksLog;
GO

NOTE
For a production database, back up the log regularly. Log backups should be frequent enough to provide sufficient
protection against data loss.

C. Creating a full file backup of the secondary filegroups


The following example creates a full file backup of every file in both of the secondary filegroups.

--Back up the files in SalesGroup1:


BACKUP DATABASE Sales
FILEGROUP = 'SalesGroup1',
FILEGROUP = 'SalesGroup2'
TO DISK = 'Z:\SQLServerBackups\SalesFiles.bck';
GO

D. Creating a differential file backup of the secondary filegroups


The following example creates a differential file backup of every file in both of the secondary filegroups.

--Back up the files in SalesGroup1:


BACKUP DATABASE Sales
FILEGROUP = 'SalesGroup1',
FILEGROUP = 'SalesGroup2'
TO DISK = 'Z:\SQLServerBackups\SalesFiles.bck'
WITH
DIFFERENTIAL;
GO

E. Creating and backing up to a single-family mirrored media set


The following example creates a mirrored media set containing a single media family and four mirrors and backs
up the AdventureWorks2012 database to them.
BACKUP DATABASE AdventureWorks2012
TO TAPE = '\\.\tape0'
MIRROR TO TAPE = '\\.\tape1'
MIRROR TO TAPE = '\\.\tape2'
MIRROR TO TAPE = '\\.\tape3'
WITH
FORMAT,
MEDIANAME = 'AdventureWorksSet0';

F. Creating and backing up to a multifamily mirrored media set


The following example creates a mirrored media set in which each mirror consists of two media families. The
example then backs up the AdventureWorks2012 database to both mirrors.

BACKUP DATABASE AdventureWorks2012


TO TAPE = '\\.\tape0', TAPE = '\\.\tape1'
MIRROR TO TAPE = '\\.\tape2', TAPE = '\\.\tape3'
WITH
FORMAT,
MEDIANAME = 'AdventureWorksSet1';

G. Backing up to an existing mirrored media set


The following example appends a backup set to the media set created in the preceding example.

BACKUP LOG AdventureWorks2012


TO TAPE = '\\.\tape0', TAPE = '\\.\tape1'
MIRROR TO TAPE = '\\.\tape2', TAPE = '\\.\tape3'
WITH
NOINIT,
MEDIANAME = 'AdventureWorksSet1';

NOTE
NOINIT, which is the default, is shown here for clarity.

H. Creating a compressed backup in a new media set


The following example formats the media, creating a new media set, and performs a compressed full backup of the
AdventureWorks2012 database.

BACKUP DATABASE AdventureWorks2012 TO DISK='Z:\SQLServerBackups\AdvWorksData.bak'


WITH
FORMAT,
COMPRESSION;

I. Backing up to the Microsoft Azure Blob storage service


The example performs a full database backup of Sales to the Microsoft Azure Blob storage service. The storage
Account name is mystorageaccount . The container is called myfirstcontainer . A stored access policy has been
created with read, write, delete, and list rights. The SQL Server credential,
https://mystorageaccount.blob.core.windows.net/myfirstcontainer , was created using a Shared Access Signature
that is associated with the Stored Access Policy. For information on SQL Server backup to the Windows Azure
Blob storage service, see SQL Server Backup and Restore with Microsoft Azure Blob Storage Service and SQL
Server Backup to URL.
BACKUP DATABASE Sales
TO URL = 'https://mystorageaccount.blob.core.windows.net/myfirstcontainer/Sales_20160726.bak'
WITH STATS = 5;

See Also
Backup Devices (SQL Server)
Media Sets, Media Families, and Backup Sets (SQL Server)
Tail-Log Backups (SQL Server)
ALTER DATABASE (Transact-SQL)
DBCC SQLPERF (Transact-SQL)
RESTORE (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
sp_addumpdevice (Transact-SQL)
sp_configure (Transact-SQL)
sp_helpfile (Transact-SQL)
sp_helpfilegroup (Transact-SQL)
Server Configuration Options (SQL Server)
Piecemeal Restore of Databases With Memory-Optimized Tables
BACKUP CERTIFICATE (Transact-SQL)
5/3/2018 • 2 min to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Exports a certificate to a file.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

BACKUP CERTIFICATE certname TO FILE = 'path_to_file'


[ WITH PRIVATE KEY
(
FILE = 'path_to_private_key_file' ,
ENCRYPTION BY PASSWORD = 'encryption_password'
[ , DECRYPTION BY PASSWORD = 'decryption_password' ]
)
]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

BACKUP CERTIFICATE certname TO FILE ='path_to_file'


WITH PRIVATE KEY
(
FILE ='path_to_private_key_file',
ENCRYPTION BY PASSWORD ='encryption_password'
)

Arguments
path_to_file
Specifies the complete path, including file name, of the file in which the certificate is to be saved. This can be a
local path or a UNC path to a network location. The default is the path of the SQL Server DATA folder.
path_to_private_key_file
Specifies the complete path, including file name, of the file in which the private key is to be saved. This can be a
local path or a UNC path to a network location. The default is the path of the SQL Server DATA folder.
encryption_password
Is the password that is used to encrypt the private key before writing the key to the backup file. The password is
subject to complexity checks.
decryption_password
Is the password that is used to decrypt the private key before backing up the key.

Remarks
If the private key is encrypted with a password in the database, the decryption password must be specified.
When you back up the private key to a file, encryption is required. The password used to protect the backed up
certificate is not the same password that is used to encrypt the private key of the certificate.
To restore a backed up certificate, use the CREATE CERTIFICATE statement.
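For illustration, a minimal sketch that re-creates the certificate from the backed-up files of example B below (paths and password are taken from that example; the certificate must not already exist in the target database):

CREATE CERTIFICATE sales05
FROM FILE = 'c:\storedcerts\sales05cert'
WITH PRIVATE KEY ( FILE = 'c:\storedkeys\sales05key',
DECRYPTION BY PASSWORD = '997jkhUbhk$w4ez0876hKHJH5gh' );
GO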

Permissions
Requires CONTROL permission on the certificate and knowledge of the password that is used to encrypt the
private key. If only the public part of the certificate is backed up, requires some permission on the certificate and
that the caller has not been denied VIEW permission on the certificate.

Examples
A. Exporting a certificate to a file
The following example exports a certificate to a file.

BACKUP CERTIFICATE sales05 TO FILE = 'c:\storedcerts\sales05cert';


GO

B. Exporting a certificate and a private key


In the following example, the private key of the certificate that is backed up will be encrypted with the password
997jkhUbhk$w4ez0876hKHJH5gh .

BACKUP CERTIFICATE sales05 TO FILE = 'c:\storedcerts\sales05cert'


WITH PRIVATE KEY ( FILE = 'c:\storedkeys\sales05key' ,
ENCRYPTION BY PASSWORD = '997jkhUbhk$w4ez0876hKHJH5gh' );
GO

C. Exporting a certificate that has an encrypted private key


In the following example, the private key of the certificate is encrypted in the database. The private key must be
decrypted with the password 9875t6#6rfid7vble7r . When the certificate is stored to the backup file, the private
key will be encrypted with the password 9n34khUbhk$w4ecJH5gh .

BACKUP CERTIFICATE sales09 TO FILE = 'c:\storedcerts\sales09cert'
WITH PRIVATE KEY ( DECRYPTION BY PASSWORD = '9875t6#6rfid7vble7r' ,
FILE = 'c:\storedkeys\sales09key' ,
ENCRYPTION BY PASSWORD = '9n34khUbhk$w4ecJH5gh' );
GO

See Also
CREATE CERTIFICATE (Transact-SQL)
ALTER CERTIFICATE (Transact-SQL)
DROP CERTIFICATE (Transact-SQL)
BACKUP DATABASE (Parallel Data Warehouse)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates a backup of a Parallel Data Warehouse database and stores the backup off the appliance in a user-specified
network location. Use this statement with RESTORE DATABASE (Parallel Data Warehouse) for disaster recovery,
or to copy a database from one appliance to another.
Before you begin, see "Acquire and Configure a Backup Server" in the Parallel Data Warehouse product
documentation.
There are two types of backups in Parallel Data Warehouse. A full database backup is a backup of an entire Parallel
Data Warehouse database. A differential database backup only includes changes made since the last full backup. A
backup of a user database includes database users and database roles. A backup of the master database includes
logins.
For more information about Parallel Data Warehouse database backups, see "Backup and Restore" in the Parallel
Data Warehouse product documentation.
Transact-SQL Syntax Conventions (Transact-SQL)

Syntax
Create a full backup of a user database or the master database.
BACKUP DATABASE database_name
TO DISK = '\\UNC_path\backup_directory'
[ WITH [ ( ] <with_options> [ ,...n ] [ ) ] ]
[;]

Create a differential backup of a user database.


BACKUP DATABASE database_name
TO DISK = '\\UNC_path\backup_directory'
WITH [ ( ] DIFFERENTIAL
[ , <with_options> [ ,...n ] [ ) ]
[;]

<with_options> ::=
DESCRIPTION = 'text'
| NAME = 'backup_name'

Arguments
database_name
The name of the database on which to create a backup. The database can be the master database or a user
database.
TO DISK = '\\UNC_path\backup_directory'
The network path and directory to which Parallel Data Warehouse will write the backup files. For example,
'\\xxx.xxx.xxx.xxx\backups\2012\Monthly\08.2012.Mybackup'.
The path to the backup directory name must already exist and must be specified as a fully qualified
universal naming convention (UNC) path.
The backup directory, backup_directory, must not exist before running the backup command. Parallel Data
Warehouse will create the backup directory.
The path to the backup directory cannot be a local path and it cannot be a location on any of the Parallel
Data Warehouse appliance nodes.
The maximum length of the UNC path and backup directory name is 200 characters.
The server or host must be specified as an IP address. You cannot specify it as the host or server name.
DESCRIPTION = 'text'
Specifies a textual description of the backup. The maximum length of the text is 255 characters.
The description is stored in the metadata, and will be displayed when the backup header is restored with
RESTORE HEADERONLY.
NAME = 'backup_name'
Specifies the name of the backup. The backup name can be different from the database name.
Names can have a maximum of 128 characters.
Cannot include a path.
Must begin with a letter or number character or an underscore (_). Special characters permitted are the
underscore (_), hyphen (-), or space ( ). Backup names cannot end with a space character.
The statement will fail if backup_name already exists in the specified location.
This name is stored in the metadata, and will be displayed when the backup header is restored with
RESTORE HEADERONLY.
DIFFERENTIAL
Specifies to perform a differential backup of a user database. If omitted, the default is a full database backup.
The name of the differential backup does not need to match the name of the full backup. For keeping track
of the differential and its corresponding full backup, consider using the same name with 'full' or 'diff'
appended.
For example:
BACKUP DATABASE Customer TO DISK = '\\xxx.xxx.xxx.xxx\backups\CustomerFull';

BACKUP DATABASE Customer TO DISK = '\\xxx.xxx.xxx.xxx\backups\CustomerDiff' WITH DIFFERENTIAL;

Permissions
Requires the BACKUP DATABASE permission or membership in the db_backupoperator fixed database role.
The master database cannot be backed up by a regular user that was added to the db_backupoperator fixed
database role. The master database can only be backed up by sa, the fabric administrator, or members of the
sysadmin fixed server role.
Requires a Windows account that has permission to access, create, and write to the backup directory. You must
also store the Windows account name and password in Parallel Data Warehouse. To add these network credentials
to Parallel Data Warehouse, use the sp_pdw_add_network_credentials (SQL Data Warehouse) stored procedure.
For more information about managing credentials in Parallel Data Warehouse, see the Security section.

Error Handling
BACKUP DATABASE errors under the following conditions:
User permissions are not sufficient to perform a backup.
Parallel Data Warehouse does not have the correct permissions to the network location where the backup
will be stored.
The database does not exist.
The target directory already exists on the network share.
The target network share is not available.
The target network share does not have enough space for the backup. The BACKUP DATABASE command
does not confirm that sufficient disk space exists prior to initiating the backup, making it possible to
generate an out-of-disk-space error while running BACKUP DATABASE. When insufficient disk space
occurs, Parallel Data Warehouse rolls back the BACKUP DATABASE command. To decrease the size of your
database, run DBCC SHRINKLOG (Azure SQL Data Warehouse).
An attempt is made to start a backup within a transaction.

General Remarks
Before you perform a database backup, use DBCC SHRINKLOG (Azure SQL Data Warehouse) to decrease the size
of your database.
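A minimal sketch of that step, using the documented DBCC SHRINKLOG syntax; SIZE = DEFAULT shrinks the log to its default size.

-- Run in the database that is about to be backed up.
DBCC SHRINKLOG ( SIZE = DEFAULT );
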
A Parallel Data Warehouse backup is stored as a set of multiple files within the same directory.
A differential backup usually takes less time than a full backup and can be performed more frequently. When
multiple differential backups are based on the same full backup, each differential includes all of the changes in the
previous differential backup.
If you cancel a BACKUP command, Parallel Data Warehouse will remove the target directory and any files created
for the backup. If Parallel Data Warehouse loses network connectivity to the share, the rollback cannot complete.
Full backups and differential backups are stored in separate directories. Naming conventions are not enforced for
specifying that a full backup and differential backup belong together. You can track this through your own naming
conventions. Alternatively, you can track this by using the WITH DESCRIPTION option to add a description, and
then by using the RESTORE HEADERONLY statement to retrieve the description.
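For example, a sketch of retrieving that description later (directory path hypothetical):

-- Returns the backup header, including the NAME and DESCRIPTION stored with the backup.
RESTORE HEADERONLY
FROM DISK = '\\xxx.xxx.xxx.xxx\backups\CustomerDiff';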

Limitations and Restrictions


You cannot perform a differential backup of the master database. Only full backups of the master database are
supported.
The backup files are stored in a format suitable only for restoring the backup to a Parallel Data Warehouse
appliance by using the RESTORE DATABASE (Parallel Data Warehouse) statement.
The backup with the BACKUP DATABASE statement cannot be used to transfer data or user information to SMP
SQL Server databases. For that functionality, you can use the remote table copy feature. For more information, see
"Remote Table Copy" in the Parallel Data Warehouse product documentation.
Parallel Data Warehouse uses SQL Server backup technology to back up and restore databases. SQL Server
backup options are preconfigured to use backup compression. You cannot set backup options such as
compression, checksum, block size, and buffer count.
Only one database backup or restore can run on the appliance at any given time. Parallel Data Warehouse will
queue backup or restore commands until the current backup or restore command has completed.
The target appliance for restoring the backup must have at least as many Compute nodes as the source appliance.
The target can have more Compute nodes than the source appliance, but cannot have fewer Compute nodes.
Parallel Data Warehouse does not track the location and names of backups since the backups are stored off the
appliance.
Parallel Data Warehouse does track the success or failure of database backups.
A differential backup is only allowed if the last full backup completed successfully. For example, suppose that on
Monday you create a full backup of the Sales database and the backup finishes successfully. Then on Tuesday you
create a full backup of the Sales database and it fails. After this failure, you cannot then create a differential backup
based on Monday’s full backup. You must first create a successful full backup before creating a differential backup.

Metadata

These dynamic management views contain information about all backup, restore, and load operations. The
information persists across system restarts.
sys.pdw_loader_backup_runs (Transact-SQL)
sys.pdw_loader_backup_run_details (Transact-SQL)
sys.pdw_loader_run_stages (Transact-SQL)

Performance
To perform a backup, Parallel Data Warehouse first backs up the metadata, and then it performs a parallel backup
of the database data stored on the Compute nodes. Data is copied directly from each Compute node to the
backup directory. To achieve the best performance for moving data from the Compute nodes to the backup
directory, Parallel Data Warehouse controls the number of Compute nodes that are copying data concurrently.

Locking
Takes an ExclusiveUpdate lock on the DATABASE object.

Security
Parallel Data Warehouse backups are not stored on the appliance. Therefore, your IT team is responsible for
managing all aspects of the backup security. For example, this includes managing the security of the backup data,
the security of the server used to store backups, and the security of the networking infrastructure that connects the
backup server to the Parallel Data Warehouse appliance.
Manage Network Credentials
Network access to the backup directory is based on standard Windows file sharing security. Before performing a
backup, you need to create or designate a Windows account that will be used for authenticating Parallel Data
Warehouse to the backup directory. This Windows account must have permission to access, create, and write to the
backup directory.

IMPORTANT
To reduce security risks with your data, we advise that you designate one Windows account solely for the purpose of
performing backup and restore operations. Allow this account to have permissions to the backup location and nowhere else.

You need to store the user name and password in Parallel Data Warehouse by running the
sp_pdw_add_network_credentials (SQL Data Warehouse) stored procedure. Parallel Data Warehouse uses
Windows Credential Manager to store and encrypt user names and passwords on the Control node and Compute
nodes. The credentials are not backed up with the BACKUP DATABASE command.
To remove network credentials from Parallel Data Warehouse, see sp_pdw_remove_network_credentials (SQL
Data Warehouse).
To list all of the network credentials stored in Parallel Data Warehouse, use the sys.dm_pdw_network_credentials
(Transact-SQL) dynamic management view.
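For example, a quick way to review the stored credentials:

-- Lists the network credentials known to the appliance.
SELECT * FROM sys.dm_pdw_network_credentials;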

Examples
A. Add network credentials for the backup location
To create a backup, Parallel Data Warehouse must have read/write permission to the backup directory. The
following example shows how to add the credentials for a user. Parallel Data Warehouse will store these credentials
and use them for backup and restore operations.

IMPORTANT
For security reasons, we recommend creating one domain account solely for the purpose of performing backups.

EXEC sp_pdw_add_network_credentials 'xxx.xxx.xxx.xxx', 'domain1\backupuser', '*****';

B. Remove network credentials for the backup location


The following example shows how to remove the credentials for a domain user from Parallel Data Warehouse.

EXEC sp_pdw_remove_network_credentials 'xxx.xxx.xxx.xxx';

C. Create a full backup of a user database


The following example creates a full backup of the Invoices user database. Parallel Data Warehouse will create the
Invoices2013Full directory and will save the backup files to the \\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full
directory.

BACKUP DATABASE Invoices TO DISK = '\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full';

D. Create a differential backup of a user database


The following example creates a differential backup, which includes all changes made since the last full backup of
the Invoices database. Parallel Data Warehouse will create the \\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Diff
directory to which it will store the files. The description 'Invoices 2013 differential backup' will be stored with the
header information for the backup.
The differential backup will only run successfully if the last full backup of Invoices completed successfully.

BACKUP DATABASE Invoices TO DISK = '\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Diff'
WITH DIFFERENTIAL,
DESCRIPTION = 'Invoices 2013 differential backup';

E. Create a full backup of the master database


The following example creates a full backup of the master database and stores it in the directory
'\\xxx.xxx.xxx.xxx\backups\2013\daily\20130722\master'.
BACKUP DATABASE master TO DISK = '\\xxx.xxx.xxx.xxx\backups\2013\daily\20130722\master';

F. Create a backup of appliance login information


The master database stores the appliance login information. To back up the appliance login information, you need
to back up master.
The following example creates a full backup of the master database.

BACKUP DATABASE master TO DISK = '\\xxx.xxx.xxx.xxx\backups\2013\daily\20130722\master'
WITH (
DESCRIPTION = 'Master Backup 20130722',
NAME = 'login-backup'
)
;

See Also
RESTORE DATABASE (Parallel Data Warehouse)
BACKUP MASTER KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Exports the database master key.
Transact-SQL Syntax Conventions

Syntax
BACKUP MASTER KEY TO FILE = 'path_to_file'
ENCRYPTION BY PASSWORD = 'password'

Arguments
FILE ='path_to_file'
Specifies the complete path, including file name, to the file to which the master key will be exported. This may be a
local path or a UNC path to a network location.
PASSWORD ='password'
Is the password used to encrypt the master key in the file. This password is subject to complexity checks. For more
information, see Password Policy.

Remarks
The master key must be open and, therefore, decrypted before it is backed up. If it is encrypted with the service
master key, the master key does not have to be explicitly opened. But if the master key is encrypted only with a
password, it must be explicitly opened.
We recommend that you back up the master key as soon as it is created, and store the backup in a secure, off-site
location.

Permissions
Requires CONTROL permission on the database.

Examples
The following example creates a backup of the AdventureWorks2012 master key. Because this master key is not
encrypted by the service master key, a password must be specified when it is opened.

USE AdventureWorks2012;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'sfj5300osdVdgwdfkli7';
BACKUP MASTER KEY TO FILE = 'c:\temp\exportedmasterkey'
ENCRYPTION BY PASSWORD = 'sd092735kjn$&adsg';
GO
See Also
CREATE MASTER KEY (Transact-SQL)
OPEN MASTER KEY (Transact-SQL)
CLOSE MASTER KEY (Transact-SQL)
RESTORE MASTER KEY (Transact-SQL)
ALTER MASTER KEY (Transact-SQL)
DROP MASTER KEY (Transact-SQL)
Encryption Hierarchy
BACKUP SERVICE MASTER KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Exports the service master key.
Transact-SQL Syntax Conventions

Syntax
BACKUP SERVICE MASTER KEY TO FILE = 'path_to_file'
ENCRYPTION BY PASSWORD = 'password'

Arguments
FILE ='path_to_file'
Specifies the complete path, including file name, to the file to which the service master key will be exported. This
may be a local path or a UNC path to a network location.
PASSWORD ='password'
Is the password used to encrypt the service master key in the backup file. This password is subject to complexity
checks. For more information, see Password Policy.

Remarks
The service master key should be backed up and stored in a secure, off-site location. Creating this backup should
be one of the first administrative actions performed on the server.

Permissions
Requires CONTROL SERVER permission on the server.

Examples
In the following example, the service master key is backed up to a file.

BACKUP SERVICE MASTER KEY TO FILE = 'c:\temp_backups\keys\service_master_key'
ENCRYPTION BY PASSWORD = '3dH85Hhk003GHk2597gheij4';

See Also
ALTER SERVICE MASTER KEY (Transact-SQL)
RESTORE SERVICE MASTER KEY (Transact-SQL)
RESTORE Statements (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Restores backups taken using the BACKUP command. This command enables you to perform the following
restore scenarios:
Restore an entire database from a full database backup (a complete restore).
Restore part of a database (a partial restore).
Restore specific files or filegroups to a database (a file restore).
Restore specific pages to a database (a page restore).
Restore a transaction log onto a database (a transaction log restore).
Revert a database to the point in time captured by a database snapshot.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL
Database Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

For more information about SQL Server restore scenarios, see Restore and Recovery Overview (SQL
Server). For descriptions of the arguments, see RESTORE Arguments (Transact-SQL). When restoring a
database from another instance, consider the information from Manage Metadata When Making a Database
Available on Another Server Instance (SQL Server).

NOTE: For more information about restoring from the Windows Azure Blob storage service, see SQL
Server Backup and Restore with Microsoft Azure Blob Storage Service.

Transact-SQL Syntax Conventions

Syntax
--To Restore an Entire Database from a Full database backup (a Complete Restore):
RESTORE DATABASE { database_name | @database_name_var }
[ FROM <backup_device> [ ,...n ] ]
[ WITH
{
[ RECOVERY | NORECOVERY | STANDBY =
{standby_file_name | @standby_file_name_var }
]
| , <general_WITH_options> [ ,...n ]
| , <replication_WITH_option>
| , <change_data_capture_WITH_option>
| , <FILESTREAM_WITH_option>
| , <service_broker_WITH options>
| , <point_in_time_WITH_options—RESTORE_DATABASE>
} [ ,...n ]
]
[;]

--To perform the first step of the initial restore sequence


-- of a piecemeal restore:
RESTORE DATABASE { database_name | @database_name_var }
<files_or_filegroups> [ ,...n ]
[ FROM <backup_device> [ ,...n ] ]
WITH
PARTIAL, NORECOVERY
[ , <general_WITH_options> [ ,...n ]
| , <point_in_time_WITH_options—RESTORE_DATABASE>
] [ ,...n ]
[;]

--To Restore Specific Files or Filegroups:


RESTORE DATABASE { database_name | @database_name_var }
<file_or_filegroup> [ ,...n ]
[ FROM <backup_device> [ ,...n ] ]
WITH
{
[ RECOVERY | NORECOVERY ]
[ , <general_WITH_options> [ ,...n ] ]
} [ ,...n ]
[;]

--To Restore Specific Pages:


RESTORE DATABASE { database_name | @database_name_var }
PAGE = 'file:page [ ,...n ]'
[ , <file_or_filegroups> ] [ ,...n ]
[ FROM <backup_device> [ ,...n ] ]
WITH
NORECOVERY
[ , <general_WITH_options> [ ,...n ] ]
[;]

--To Restore a Transaction Log:


RESTORE LOG { database_name | @database_name_var } -- Does not apply to SQL Database Managed Instance
[ <file_or_filegroup_or_pages> [ ,...n ] ]
[ FROM <backup_device> [ ,...n ] ]
[ WITH
{
[ RECOVERY | NORECOVERY | STANDBY =
{standby_file_name | @standby_file_name_var }
]
| , <general_WITH_options> [ ,...n ]
| , <replication_WITH_option>
| , <point_in_time_WITH_options—RESTORE_LOG>
} [ ,...n ]
]
[;]

--To Revert a Database to a Database Snapshot:


RESTORE DATABASE { database_name | @database_name_var }
FROM DATABASE_SNAPSHOT = database_snapshot_name

<backup_device>::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK -- Does not apply to SQL Database Managed Instance
| TAPE -- Does not apply to SQL Database Managed Instance
| URL -- Applies to SQL Server and SQL Database Managed Instance
} = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}
Note: URL is the format used to specify the location and the file name for the Windows Azure Blob.
Although Windows Azure storage is a service, the implementation is similar to disk and tape to allow for
a consistent and seamless restore experience for all three devices.
<files_or_filegroups>::=
{
FILE = { logical_file_name_in_backup | @logical_file_name_in_backup_var }
| FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
| READ_WRITE_FILEGROUPS
}

<general_WITH_options> [ ,...n ]::=


--Restore Operation Options
MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name'
[ ,...n ]
| REPLACE
| RESTART
| RESTRICTED_USER | CREDENTIAL

--Backup Set Options


| FILE = { backup_set_file_number | @backup_set_file_number }
| PASSWORD = { password | @password_variable }

--Media Set Options


| MEDIANAME = { media_name | @media_name_variable }
| MEDIAPASSWORD = { mediapassword | @mediapassword_variable }
| BLOCKSIZE = { blocksize | @blocksize_variable }

--Data Transfer Options


| BUFFERCOUNT = { buffercount | @buffercount_variable }
| MAXTRANSFERSIZE = { maxtransfersize | @maxtransfersize_variable }

--Error Management Options


| { CHECKSUM | NO_CHECKSUM }
| { STOP_ON_ERROR | CONTINUE_AFTER_ERROR }

--Monitoring Options
| STATS [ = percentage ]

--Tape Options. Does not apply to SQL Database Managed Instance


| { REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }

<replication_WITH_option>::=
| KEEP_REPLICATION

<change_data_capture_WITH_option>::=
| KEEP_CDC

<FILESTREAM_WITH_option>::=
| FILESTREAM ( DIRECTORY_NAME = directory_name )

<service_broker_WITH_options>::=
| ENABLE_BROKER
| ERROR_BROKER_CONVERSATIONS
| NEW_BROKER

<point_in_time_WITH_options—RESTORE_DATABASE>::=
| {
STOPAT = { 'datetime'| @datetime_var }
| STOPATMARK = 'lsn:lsn_number'
[ AFTER 'datetime']
| STOPBEFOREMARK = 'lsn:lsn_number'
[ AFTER 'datetime']
}

<point_in_time_WITH_options—RESTORE_LOG>::=
| {
STOPAT = { 'datetime'| @datetime_var }
| STOPATMARK = { 'mark_name' | 'lsn:lsn_number' }
[ AFTER 'datetime']
| STOPBEFOREMARK = { 'mark_name' | 'lsn:lsn_number' }
[ AFTER 'datetime']
}
Arguments
For descriptions of the arguments, see RESTORE Arguments (Transact-SQL).

About Restore Scenarios


SQL Server supports a variety of restore scenarios:
Complete database restore
Restores the entire database, beginning with a full database backup, which may be followed by
restoring a differential database backup (and log backups). For more information, see Complete
Database Restores (Simple Recovery Model) or Complete Database Restores (Full Recovery Model).
File restore
Restores a file or filegroup in a multi-filegroup database. Note that under the simple recovery model,
the file must belong to a read-only filegroup. After a full file restore, a differential file backup can be
restored. For more information, see File Restores (Full Recovery Model) and File Restores (Simple
Recovery Model).
Page restore
Restores individual pages. Page restore is available only under the full and bulk-logged recovery
models. For more information, see Restore Pages (SQL Server).
Piecemeal restore
Restores the database in stages, beginning with the primary filegroup and one or more secondary
filegroups. A piecemeal restore begins with a RESTORE DATABASE using the PARTIAL option and
specifying one or more secondary filegroups to be restored. For more information, see Piecemeal
Restores (SQL Server).
Recovery only
Recovers data that is already consistent with the database and needs only to be made available. For
more information, see Recover a Database Without Restoring Data (Transact-SQL).
Transaction log restore
Under the full or bulk-logged recovery model, restoring log backups is required to reach the desired
recovery point. For more information about restoring log backups, see Apply Transaction Log
Backups (SQL Server).
Prepare an availability database for an Always On availability group
For more information, see Manually Prepare a Secondary Database for an Availability Group (SQL
Server).
Prepare a mirror database for database mirroring
For more information, see Prepare a Mirror Database for Mirroring (SQL Server).
Online Restore

NOTE: Online restore is allowed only in Enterprise edition of SQL Server.

Where online restore is supported, if the database is online, file restores and page restores are
automatically online restores, as are restores of secondary filegroups after the initial stage of a
piecemeal restore.

NOTE: Online restores can involve deferred transactions.

For more information, see Online Restore (SQL Server).

Additional Considerations About RESTORE Options


Discontinued RESTORE Keywords
The following keywords were discontinued in SQL Server 2008:

DISCONTINUED KEYWORD      REPLACED BY…       EXAMPLE OF REPLACEMENT KEYWORD

LOAD                      RESTORE            RESTORE DATABASE

TRANSACTION               LOG                RESTORE LOG

DBO_ONLY                  RESTRICTED_USER    RESTORE DATABASE ... WITH RESTRICTED_USER

RESTORE LOG
RESTORE LOG can include a file list to allow for creation of files during roll forward. This is used when the
log backup contains log records written when a file was added to the database.

NOTE: For a database using the full or bulk-logged recovery model, in most cases you must back up
the tail of the log before restoring the database. Restoring a database without first backing up the tail of
the log results in an error, unless the RESTORE DATABASE statement contains either the WITH
REPLACE or the WITH STOPAT clause, which must specify a time or transaction that occurred after the
end of the data backup. For more information about tail-log backups, see Tail-Log Backups (SQL
Server).

Comparison of RECOVERY and NORECOVERY


Roll back is controlled by the RESTORE statement through the [ RECOVERY | NORECOVERY ] options:
NORECOVERY specifies that roll back not occur. This allows roll forward to continue with the next
statement in the sequence.
In this case, the restore sequence can restore other backups and roll them forward.
RECOVERY (the default) indicates that roll back should be performed after roll forward is completed
for the current backup.
Recovering the database requires that the entire set of data being restored (the roll forward set) is
consistent with the database. If the roll forward set has not been rolled forward far enough to be
consistent with the database and RECOVERY is specified, the Database Engine issues an error.

Compatibility Support
Backups of master, model and msdb that were created by using an earlier version of SQL Server cannot
be restored by SQL Server 2017.

NOTE: No SQL Server backup can be restored to an earlier version of SQL Server than the version on
which the backup was created.
Each version of SQL Server uses a different default path than earlier versions. Therefore, to restore a
database that was created in the default location for earlier version backups, you must use the MOVE
option. For information about the new default path, see File Locations for Default and Named Instances of
SQL Server.
After you restore an earlier version database to SQL Server 2017, the database is automatically upgraded.
Typically, the database becomes available immediately. However, if a SQL Server 2005 database has full-
text indexes, the upgrade process either imports, resets, or rebuilds them, depending on the setting of the
upgrade_option server property. If the upgrade option is set to import (upgrade_option = 2) or rebuild
(upgrade_option = 0), the full-text indexes will be unavailable during the upgrade. Depending on the amount
of data being indexed, importing can take several hours, and rebuilding can take up to ten times longer.
Note also that when the upgrade option is set to import, the associated full-text indexes are rebuilt if a full-
text catalog is not available. To change the setting of the upgrade_option server property, use
sp_fulltext_service.
When a database is first attached or restored to a new instance of SQL Server, a copy of the database
master key (encrypted by the service master key) is not yet stored in the server. You must use the OPEN
MASTER KEY statement to decrypt the database master key (DMK). Once the DMK has been decrypted,
you have the option of enabling automatic decryption in the future by using the ALTER MASTER KEY
REGENERATE statement to provision the server with a copy of the DMK, encrypted with the service
master key (SMK). When a database has been upgraded from an earlier version, the DMK should be
regenerated to use the newer AES algorithm. For more information about regenerating the DMK, see
ALTER MASTER KEY (Transact-SQL). The time required to regenerate the DMK key to upgrade to AES
depends upon the number of objects protected by the DMK. Regenerating the DMK key to upgrade to AES
is only necessary once, and has no impact on future regenerations as part of a key rotation strategy.
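As an illustration only, a minimal sketch of this sequence; the database name and both passwords are hypothetical.

USE AdventureWorks2012;
-- Explicitly open the DMK with the password that protected it on the source server.
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'old_dmk_password';
-- Re-enable automatic decryption by provisioning a copy encrypted with the SMK.
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
-- Regenerate the DMK so that it uses the newer AES algorithm.
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'new_dmk_password';
GO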

General Remarks
During an offline restore, if the specified database is in use, RESTORE forces the users off after a short
delay. For online restore of a non-primary filegroup, the database can stay in use except when the filegroup
being restored is being taken offline. Any data in the specified database is replaced by the restored data.
For more information about database recovery, see Restore and Recovery Overview (SQL Server).
Cross-platform restore operations, even between different processor types, can be performed as long as the
collation of the database is supported by the operating system.
RESTORE can be restarted after an error. In addition, you can instruct RESTORE to continue despite errors,
and it restores as much data as possible (see the CONTINUE_AFTER_ERROR option).
RESTORE is not allowed in an explicit or implicit transaction.
Restoring a damaged master database is performed using a special procedure. For more information, see
Back Up and Restore of System Databases (SQL Server).
Restoring a database clears the plan cache for the instance of SQL Server. Clearing the plan cache causes a
recompilation of all subsequent execution plans and can cause a sudden, temporary decrease in query
performance. For each cleared cachestore in the plan cache, the SQL Server error log contains the following
informational message: " SQL Server has encountered %d occurrence(s) of cachestore flush for the '%s'
cachestore (part of plan cache) due to some database maintenance or reconfigure operations". This
message is logged every five minutes as long as the cache is flushed within that time interval.
To restore an availability database, first restore the database to the instance of SQL Server, and then add the
database to the availability group.

General Remarks - SQL Database Managed Instance


For an asynchronous restore, the restore continues even if the client connection breaks. If your connection is
dropped, you can check the sys.dm_operation_status view for the status of a restore operation (as well as for
CREATE and DROP database operations).
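For example, a sketch of checking progress from the master database (the database name is hypothetical):

-- Shows state and percent complete for ongoing operations, including restores.
SELECT operation, state_desc, percent_complete, last_modify_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'MyRestoredDb';
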
The following database options are set/overridden and cannot be changed later:
NEW_BROKER (if broker is not enabled in .bak file)
ENABLE_BROKER (if broker is not enabled in .bak file)
AUTO_CLOSE=OFF (if a database in .bak file has AUTO_CLOSE=ON )
RECOVERY FULL (if a database in .bak file has SIMPLE or BULK_LOGGED recovery mode)
A memory-optimized filegroup is added and called XTP if it was not in the source .bak file. Any existing
memory-optimized filegroup is renamed to XTP
SINGLE_USER and RESTRICTED_USER options are converted to MULTI_USER

Limitations - SQL Database Managed Instance


These limitations apply:
.BAK files containing multiple backup sets cannot be restored.
.BAK files containing multiple log files cannot be restored.
Restore will fail if .bak contains FILESTREAM data.
Backups containing databases that have active In-memory objects cannot currently be restored.
Backups containing databases where at some point In-Memory objects existed cannot currently be
restored.
Backups containing databases in read-only mode cannot currently be restored. This limitation will be
removed soon.
For more information, see Managed Instance.

Interoperability
Database Settings and Restoring
During a restore, most of the database options that are settable using ALTER DATABASE are reset to the
values in force at the time of the end of backup.
Using the WITH RESTRICTED_USER option, however, overrides this behavior for the user access option
setting. This setting is always set following a RESTORE statement, which includes the WITH
RESTRICTED_USER option.
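For instance, a minimal sketch reusing the logical backup device from the examples below:

-- After recovery, only db_owner, dbcreator, and sysadmin members can connect.
RESTORE DATABASE AdventureWorks2012
FROM AdventureWorksBackups
WITH RESTRICTED_USER;
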
Restoring an Encrypted Database
To restore a database that is encrypted, you must have access to the certificate or asymmetric key that was
used to encrypt the database. Without the certificate or asymmetric key, the database cannot be restored. As
a result, the certificate that is used to encrypt the database encryption key must be retained as long as the
backup is needed. For more information, see SQL Server Certificates and Asymmetric Keys.
Restoring a Database Enabled for vardecimal Storage
Backup and restore work correctly with the vardecimal storage format. For more information about
vardecimal storage format, see sp_db_vardecimal_storage_format (Transact-SQL).
Restore Full-Text Data
Full-text data is restored together with other database data during a complete restore. Using the regular
RESTORE DATABASE database_name FROM backup_device syntax, the full-text files are restored as part of the
database file restore.
The RESTORE statement also can be used to perform restores to alternate locations, differential restores,
file and filegroup restores, and differential file and filegroup restores of full-text data. In addition, RESTORE
can restore full-text files only, as well as with database data.

NOTE: Full-text catalogs imported from SQL Server 2005 are still treated as database files. For these,
the SQL Server 2005 procedure for backing up full-text catalogs remains applicable, except that
pausing and resuming during the backup operation are no longer necessary. For more information, see
Backing Up and Restoring Full-Text Catalogs.

Metadata

SQL Server includes backup and restore history tables that track the backup and restore activity for each
server instance. When a restore is performed, the backup history tables are also modified. For information
on these tables, see Backup History and Header Information (SQL Server).
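For example, a sketch of reviewing recent restore activity through these history tables:

-- Most recent restores recorded on this server instance.
SELECT destination_database_name, restore_date, restore_type
FROM msdb.dbo.restorehistory
ORDER BY restore_date DESC;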

REPLACE Option Impact


REPLACE should be used rarely and only after careful consideration. Restore normally prevents
accidentally overwriting a database with a different database. If the database specified in a RESTORE
statement already exists on the current server and the specified database family GUID differs from the
database family GUID recorded in the backup set, the database is not restored. This is an important
safeguard.
The REPLACE option overrides several important safety checks that restore normally performs. The
overridden checks are as follows:
Restoring over an existing database with a backup taken of another database.
With the REPLACE option, restore allows you to overwrite an existing database with whatever
database is in the backup set, even if the specified database name differs from the database name
recorded in the backup set. This can result in accidentally overwriting a database by a different
database.
Restoring over a database using the full or bulk-logged recovery model where a tail-log backup has
not been taken and the STOPAT option is not used.
With the REPLACE option, you can lose committed work, because the log written most recently has
not been backed up.
Overwriting existing files.
For example, a mistake could allow overwriting files of the wrong type, such as .xls files, or that are
being used by another database that is not online. Arbitrary data loss is possible if existing files are
overwritten, although the restored database is complete.

Redoing a Restore
Undoing the effects of a restore is not possible; however, you can negate the effects of the data copy and roll
forward by starting over on a per-file basis. To start over, restore the desired file and perform the roll
forward again. For example, if you accidentally restored too many log backups and overshot your intended
stopping point, you would have to restart the sequence.
A restore sequence can be aborted and restarted by restoring the entire contents of the affected files.
Reverting a Database to a Database Snapshot
A revert database operation (specified using the DATABASE_SNAPSHOT option) takes a full source
database back in time by reverting it to the time of a database snapshot, that is, overwriting the source
database with data from the point in time maintained in the specified database snapshot. Only the snapshot
to which you are reverting can currently exist. The revert operation then rebuilds the log (therefore, you
cannot later roll forward a reverted database to the point of user error).
Data loss is confined to updates to the database since the snapshot's creation. The metadata of a reverted
database is the same as the metadata at the time of snapshot creation. However, reverting to a snapshot
drops all the full-text catalogs.
Reverting from a database snapshot is not intended for media recovery. Unlike a regular backup set, the
database snapshot is an incomplete copy of the database files. If either the database or the database
snapshot is corrupted, reverting from a snapshot is likely to be impossible. Furthermore, even when
possible, reverting in the event of corruption is unlikely to correct the problem.
Restrictions on Reverting
Reverting is unsupported under the following conditions:
The source database contains any read-only or compressed filegroups.
Any files are offline that were online when the snapshot was created.
More than one snapshot of the database currently exists.
For more information, see Revert a Database to a Database Snapshot.

Security
A backup operation may optionally specify passwords for a media set, a backup set, or both. When a
password has been defined on a media set or backup set, you must specify the correct password or
passwords in the RESTORE statement. These passwords prevent unauthorized restore operations and
unauthorized appends of backup sets to media using SQL Server tools. However, password-protected
media can be overwritten by the BACKUP statement's FORMAT option.

IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server
tools by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using
this feature in new development work, and plan to modify applications that currently use this feature. The best
practice for protecting backups is to store backup tapes in a secure location or back up to disk files that are protected
by adequate access control lists (ACLs). The ACLs should be set on the directory root under which backups are
created.
For information specific to SQL Server backup and restore with the Windows Azure Blob storage, see SQL Server
Backup and Restore with Microsoft Azure Blob Storage Service.

Permissions
If the database being restored does not exist, the user must have CREATE DATABASE permissions to be
able to execute RESTORE. If the database exists, RESTORE permissions default to members of the
sysadmin and dbcreator fixed server roles and the owner (dbo) of the database (for the FROM
DATABASE_SNAPSHOT option, the database always exists).
RESTORE permissions are given to roles in which membership information is always readily available to
the server. Because fixed database role membership can be checked only when the database is accessible
and undamaged, which is not always the case when RESTORE is executed, members of the db_owner fixed
database role do not have RESTORE permissions.

Examples
All the examples assume that a full database backup has been performed.
The RESTORE examples include the following:
A. Restoring a full database
B. Restoring full and differential database backups
C. Restoring a database using RESTART syntax
D. Restoring a database and move files
E. Copying a database using BACKUP and RESTORE
F. Restoring to a point-in-time using STOPAT
G. Restoring the transaction log to a mark
H. Restoring using TAPE syntax
I. Restoring using FILE and FILEGROUP syntax
J. Reverting from a database snapshot
K. Restoring from the Microsoft Azure Blob storage service

NOTE: For additional examples, see the restore how -to topics that are listed in Restore and Recovery
Overview (SQL Server).

A. Restoring a full database


The following example restores a full database backup from the AdventureWorks2012Backups logical backup
device. For an example of creating this device, see Backup Devices.

RESTORE DATABASE AdventureWorks2012
FROM AdventureWorks2012Backups;

NOTE: For a database using the full or bulk-logged recovery model, SQL Server requires in most cases
that you back up the tail of the log before restoring the database. For more information, see Tail-Log
Backups (SQL Server).

[Top of examples]
B. Restoring full and differential database backups
The following example restores a full database backup followed by a differential backup from the
Z:\SQLServerBackups\AdventureWorks2012.bak backup device, which contains both backups. The full database
backup to be restored is the sixth backup set on the device ( FILE = 6 ), and the differential database backup
is the ninth backup set on the device ( FILE = 9 ). As soon as the differential backup is recovered, the
database is recovered.
RESTORE DATABASE AdventureWorks2012
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks2012.bak'
WITH FILE = 6
NORECOVERY;
RESTORE DATABASE AdventureWorks2012
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks2012.bak'
WITH FILE = 9
RECOVERY;

[Top of examples]
C. Restoring a database using RESTART syntax
The following example uses the RESTART option to restart a RESTORE operation interrupted by a server
power failure.

-- This database RESTORE halted prematurely due to power failure.
RESTORE DATABASE AdventureWorks2012
FROM AdventureWorksBackups;
-- Here is the RESTORE RESTART operation.
RESTORE DATABASE AdventureWorks2012
FROM AdventureWorksBackups WITH RESTART;

[Top of examples]
D. Restoring a database and move files
The following example restores a full database and transaction log and moves the restored database into
the C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data directory.

RESTORE DATABASE AdventureWorks2012
FROM AdventureWorksBackups
WITH NORECOVERY,
MOVE 'AdventureWorks2012_Data' TO
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data\NewAdvWorks.mdf',
MOVE 'AdventureWorks2012_Log'
TO 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data\NewAdvWorks.ldf';
RESTORE LOG AdventureWorks2012
FROM AdventureWorksBackups
WITH RECOVERY;

[Top of examples]
E. Copying a database using BACKUP and RESTORE
The following example uses both the BACKUP and RESTORE statements to make a copy of the
AdventureWorks2012 database. The MOVE statement causes the data and log file to be restored to the
specified locations. The RESTORE FILELISTONLY statement is used to determine the number and names of the
files in the database being restored. The new copy of the database is named TestDB . For more information,
see RESTORE FILELISTONLY (Transact-SQL).
BACKUP DATABASE AdventureWorks2012
TO AdventureWorksBackups ;

RESTORE FILELISTONLY
FROM AdventureWorksBackups ;

RESTORE DATABASE TestDB
FROM AdventureWorksBackups
WITH MOVE 'AdventureWorks2012_Data' TO 'C:\MySQLServer\testdb.mdf',
MOVE 'AdventureWorks2012_Log' TO 'C:\MySQLServer\testdb.ldf';
GO

[Top of examples]
F. Restoring to a point-in-time using STOPAT
The following example restores a database to its state as of 12:00 AM on April 15, 2020 and shows a
restore operation that involves multiple log backups. On the backup device, AdventureWorksBackups , the full
database backup to be restored is the third backup set on the device ( FILE = 3 ), the first log backup is the
fourth backup set ( FILE = 4 ), and the second log backup is the fifth backup set ( FILE = 5 ).

RESTORE DATABASE AdventureWorks2012
FROM AdventureWorksBackups
WITH FILE=3, NORECOVERY;

RESTORE LOG AdventureWorks2012
FROM AdventureWorksBackups
WITH FILE=4, NORECOVERY, STOPAT = 'Apr 15, 2020 12:00 AM';

RESTORE LOG AdventureWorks2012
FROM AdventureWorksBackups
WITH FILE=5, NORECOVERY, STOPAT = 'Apr 15, 2020 12:00 AM';
RESTORE DATABASE AdventureWorks2012 WITH RECOVERY;

[Top of examples]
G. Restoring the transaction log to a mark
The following example restores the transaction log to the mark in the marked transaction named
ListPriceUpdate .
USE AdventureWorks2012
GO
BEGIN TRANSACTION ListPriceUpdate
WITH MARK 'UPDATE Product list prices';
GO

UPDATE Production.Product
SET ListPrice = ListPrice * 1.10
WHERE ProductNumber LIKE 'BK-%';
GO

COMMIT TRANSACTION ListPriceUpdate;
GO

-- Time passes. Regular database
-- and log backups are taken.
-- An error occurs in the database.
USE master;
GO

RESTORE DATABASE AdventureWorks2012
FROM AdventureWorksBackups
WITH FILE = 3, NORECOVERY;
GO

RESTORE LOG AdventureWorks2012
FROM AdventureWorksBackups
WITH FILE = 4,
RECOVERY,
STOPATMARK = 'UPDATE Product list prices';

[Top of examples]
H. Restoring using TAPE syntax
The following example restores a full database backup from a TAPE backup device.

RESTORE DATABASE AdventureWorks2012
FROM TAPE = '\\.\tape0';

[Top of examples]
I. Restoring using FILE and FILEGROUP syntax
The following example restores a database named MyDatabase that has two files, one secondary filegroup,
and one transaction log. The database uses the full recovery model.
The database backup is the ninth backup set in the media set on a logical backup device named
MyDatabaseBackups. Next, three log backups, which are in the next three backup sets (10, 11, and 12) on
the MyDatabaseBackups device, are restored by using WITH NORECOVERY. After restoring the last log backup,
the database is recovered.

NOTE: Recovery is performed as a separate step to reduce the possibility of you recovering too early,
before all of the log backups have been restored.

In the RESTORE DATABASE , notice that there are two types of FILE options. The FILE options preceding the
backup device name specify the logical file names of the database files that are to be restored from the
backup set; for example, FILE = 'MyDatabase_data_1' . This backup set is not the first database backup in the
media set; therefore, its position in the media set is indicated by using the FILE option in the WITH clause,
FILE=9 .
RESTORE DATABASE MyDatabase
FILE = 'MyDatabase_data_1',
FILE = 'MyDatabase_data_2',
FILEGROUP = 'new_customers'
FROM MyDatabaseBackups
WITH
FILE = 9,
NORECOVERY;
GO
-- Restore the log backups.
RESTORE LOG MyDatabase
FROM MyDatabaseBackups
WITH FILE = 10,
NORECOVERY;
GO
RESTORE LOG MyDatabase
FROM MyDatabaseBackups
WITH FILE = 11,
NORECOVERY;
GO
RESTORE LOG MyDatabase
FROM MyDatabaseBackups
WITH FILE = 12,
NORECOVERY;
GO
--Recover the database:
RESTORE DATABASE MyDatabase WITH RECOVERY;
GO

[Top of examples]
J. Reverting from a database snapshot
The following example reverts a database to a database snapshot. The example assumes that only one
snapshot currently exists on the database. For an example of how to create this database snapshot, see
Create a Database Snapshot (Transact-SQL).

NOTE: Reverting to a snapshot drops all the full-text catalogs.

USE master;
RESTORE DATABASE AdventureWorks2012 FROM DATABASE_SNAPSHOT = 'AdventureWorks_dbss1800';
GO

For more information, see Revert a Database to a Database Snapshot.


[Top of examples]
K. Restoring from the Microsoft Azure Blob storage service
The three examples below involve the use of the Microsoft Azure storage service. The storage account
name is mystorageaccount. The container for data files is called myfirstcontainer. The container for backup
files is called mysecondcontainer. A stored access policy has been created with read, write, delete, and list
rights for each container. SQL Server credentials were created using Shared Access Signatures that are
associated with the Stored Access Policies. For information specific to SQL Server backup and restore with
the Microsoft Azure Blob storage, see SQL Server Backup and Restore with Microsoft Azure Blob Storage
Service.
K1. Restore a full database backup from the Microsoft Azure storage service
A full database backup of Sales, located in mysecondcontainer, will be restored to myfirstcontainer.
Sales does not currently exist on the server.
RESTORE DATABASE Sales
FROM URL = 'https://mystorageaccount.blob.core.windows.net/mysecondcontainer/Sales.bak'
WITH MOVE 'Sales_Data1' to
'https://mystorageaccount.blob.core.windows.net/myfirstcontainer/Sales_Data1.mdf',
MOVE 'Sales_log' to 'https://mystorageaccount.blob.core.windows.net/myfirstcontainer/Sales_log.ldf',
STATS = 10;

K2. Restore a full database backup from the Microsoft Azure storage service to local storage
A full database backup of Sales, located in mysecondcontainer, will be restored to local storage. Sales
does not currently exist on the server.

RESTORE DATABASE Sales
FROM URL = 'https://mystorageaccount.blob.core.windows.net/mysecondcontainer/Sales.bak'
WITH MOVE 'Sales_Data1' to 'H:\DATA\Sales_Data1.mdf',
MOVE 'Sales_log' to 'O:\LOG\Sales_log.ldf',
STATS = 10;

K3. Restore a full database backup from local storage to the Microsoft Azure storage service

RESTORE DATABASE Sales
FROM DISK = 'E:\BAK\Sales.bak'
WITH MOVE 'Sales_Data1' to
'https://mystorageaccount.blob.core.windows.net/myfirstcontainer/Sales_Data1.mdf',
MOVE 'Sales_log' to 'https://mystorageaccount.blob.core.windows.net/myfirstcontainer/Sales_log.ldf',
STATS = 10;

[Top of examples]

See Also
Back Up and Restore of SQL Server Databases
Back Up and Restore of System Databases (SQL Server)
Restore a Database Backup Using SSMS
Back Up and Restore Full-Text Catalogs and Indexes
Back Up and Restore Replicated Databases
BACKUP (Transact-SQL)
Media Sets, Media Families, and Backup Sets (SQL Server)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
Backup History and Header Information (SQL Server)
RESTORE Statements for Restoring, Recovering, and
Managing Backups (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
This section describes the RESTORE statements for backups. In addition to the main RESTORE {DATABASE | LOG }
statement for restoring and recovering backups, a number of auxiliary RESTORE statements help you manage
your backups and plan your restore sequences. The auxiliary RESTORE commands include: RESTORE
FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, RESTORE REWINDONLY, and RESTORE
VERIFYONLY.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

IMPORTANT
In previous versions of SQL Server, any user could obtain information about backup sets and backup devices by using the
RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and RESTORE VERIFYONLY Transact-SQL statements.
Because they reveal information about the content of the backup files, in SQL Server 2008 and later versions these
statements require CREATE DATABASE permission. This requirement secures your backup files and protects your backup
information more fully than in previous versions. For information about this permission, see GRANT Database Permissions
(Transact-SQL).

In This Section
RESTORE (Transact-SQL)
Describes the RESTORE DATABASE and RESTORE LOG Transact-SQL statements used to restore and recover
a database from backups taken using the BACKUP command. RESTORE DATABASE is used for databases
under all recovery models. RESTORE LOG is used only under the full and bulk-logged recovery models.
RESTORE DATABASE can also be used to revert a database to a database snapshot.

RESTORE Arguments (Transact-SQL)
Documents the arguments described in the "Syntax" sections of the RESTORE statement and of the associated
set of auxiliary statements: RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY,
RESTORE REWINDONLY, and RESTORE VERIFYONLY. Most of the arguments are supported by only a
subset of these six statements. The support for each argument is indicated in the description of the argument.

RESTORE FILELISTONLY (Transact-SQL)
Describes the RESTORE FILELISTONLY Transact-SQL statement, which is used to return a result set containing
a list of the database and log files contained in the backup set.

RESTORE HEADERONLY (Transact-SQL)
Describes the RESTORE HEADERONLY Transact-SQL statement, which is used to return a result set containing
all the backup header information for all backup sets on a particular backup device.

RESTORE LABELONLY (Transact-SQL)
Describes the RESTORE LABELONLY Transact-SQL statement, which is used to return a result set containing
information about the backup media identified by the given backup device.

RESTORE REWINDONLY (Transact-SQL)
Describes the RESTORE REWINDONLY Transact-SQL statement, which is used to rewind and close tape devices
that were left open by BACKUP or RESTORE statements executed with the NOREWIND option.

RESTORE VERIFYONLY (Transact-SQL)
Describes the RESTORE VERIFYONLY Transact-SQL statement, which is used to verify the backup but does not
restore it, and checks to see that the backup set is complete and the entire backup is readable; it does not
attempt to verify the structure of the data.

See Also
Back Up and Restore of SQL Server Databases
RESTORE DATABASE (Parallel Data Warehouse)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Restores a Parallel Data Warehouse user database from a database backup to a Parallel Data Warehouse
appliance. The database is restored from a backup that was previously created by the Parallel Data Warehouse
BACKUP DATABASE (Parallel Data Warehouse) command. Use the backup and restore operations to
build a disaster recovery plan, or to move databases from one appliance to another.

NOTE
Restoring master includes restoring appliance login information. To restore master, use the Restore the master Database
(Transact-SQL) page in the Configuration Manager tool. An administrator with access to the Control node can perform this
operation.

For more information about Parallel Data Warehouse database backups, see "Backup and Restore" in the Parallel
Data Warehouse product documentation.
Transact-SQL Syntax Conventions (Transact-SQL)

Syntax
Restore the master database
-- Use the Configuration Manager tool.

Restore a full user database backup.


RESTORE DATABASE database_name
FROM DISK = '\\UNC_path\full_backup_directory'
[;]

Restore a full user database backup and then a differential backup.


RESTORE DATABASE database_name
FROM DISK = '\\UNC_path\differential_backup_directory'
WITH [ ( ] BASE = '\\UNC_path\full_backup_directory' [ ) ]
[;]

Restore header information for a full or differential user database backup.


RESTORE HEADERONLY
FROM DISK = '\\UNC_path\backup_directory'
[;]

Arguments
RESTORE DATABASE database_name
Specifies to restore a user database to a database called database_name. The restored database can have a
different name than the source database that was backed up. database_name cannot already exist as a database
on the destination appliance. For more details on permitted database names, see "Object Naming Rules" in the
Parallel Data Warehouse product documentation.
Restoring a user database restores a full database backup and then optionally restores a differential backup to the
appliance. A restore of a user database includes restoring database users, and database roles.
FROM DISK = '\\UNC_path\backup_directory'
The network path and directory from which Parallel Data Warehouse will restore the backup files. For example,
FROM DISK = '\\xxx.xxx.xxx.xxx\backups\2012\Monthly\08.2012.Mybackup'.
backup_directory
Specifies the name of a directory that contains the full or differential backup. For example, you can perform a
RESTORE HEADERONLY operation on a full or differential backup.
full_backup_directory
Specifies the name of a directory that contains the full backup.
differential_backup_directory
Specifies the name of the directory that contains the differential backup.
The path and backup directory must already exist and must be specified as a fully qualified universal
naming convention (UNC) path.
The path to the backup directory cannot be a local path and it cannot be a location on any of the Parallel
Data Warehouse appliance nodes.
The maximum length of the UNC path and backup directory name is 200 characters.
The server or host must be specified as an IP address.
RESTORE HEADERONLY
Specifies to return only the header information for one user database backup. Among other fields, the
header includes the text description of the backup, and the backup name. The backup name does not need
to be the same as the name of the directory that stores the backup files.
RESTORE HEADERONLY results are patterned after the SQL Server RESTORE HEADERONLY results.
The result has over 50 columns, which are not all used by Parallel Data Warehouse. For a description of the
columns in the SQL Server RESTORE HEADERONLY results, see RESTORE HEADERONLY (Transact-SQL).

Permissions
Requires the CREATE ANY DATABASE permission.
Requires a Windows account that has permission to access and read from the backup directory. You must also
store the Windows account name and password in Parallel Data Warehouse.
1. To verify the credentials are already there, use sys.dm_pdw_network_credentials (Transact-SQL).
2. To add or update the credentials, use sp_pdw_add_network_credentials (SQL Data Warehouse).
3. To remove credentials from Parallel Data Warehouse, use sp_pdw_remove_network_credentials (SQL Data
Warehouse).
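For illustration, a minimal sketch of checking and registering the credentials follows; the server IP, account name,
and password are hypothetical, and the parameter order for sp_pdw_add_network_credentials is assumed from its
documentation:

-- Check whether credentials for the backup server are already stored.
SELECT * FROM sys.dm_pdw_network_credentials;

-- Register credentials for the backup server (hypothetical values; parameter
-- order assumed to be server, user name, password).
EXEC sp_pdw_add_network_credentials 'xxx.xxx.xxx.xxx', 'domain\backup_user', 'StrongPassword!';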

Error Handling
The RESTORE DATABASE command results in errors under the following conditions:
The name of the database to restore already exists on the target appliance. To avoid this, choose a unique
database name, or drop the existing database before running the restore.
There is an invalid set of backup files in the backup directory.
The login permissions are not sufficient to restore a database.
Parallel Data Warehouse does not have the correct permissions to the network location where the backup
files are located.
The network location for the backup directory does not exist, or is not available.
There is insufficient disk space on the Compute nodes or Control node. Parallel Data Warehouse does not
confirm that sufficient disk space exists on the appliance before initiating the restore. Therefore, it is
possible to generate an out-of-disk-space error while running the RESTORE DATABASE statement. When
insufficient disk space occurs, Parallel Data Warehouse rolls back the restore.
The target appliance to which the database is being restored has fewer Compute nodes than the source
appliance from which the database was backed up.
The database restore is attempted from within a transaction.

General Remarks
Parallel Data Warehouse tracks the success of database restores. Before restoring a differential database backup,
Parallel Data Warehouse verifies the full database restore finished successfully.
After a restore, the user database will have database compatibility level 120. This is true for all databases
regardless of their original compatibility level.
Restoring to an Appliance With a Larger Number of Compute Nodes
Run DBCC SHRINKLOG (Azure SQL Data Warehouse) after restoring a database from a smaller to a larger
appliance, because redistribution increases the size of the transaction log.
Restoring a backup to an appliance with a larger number of Compute nodes grows the allocated database size in
proportion to the number of Compute nodes.
For example, when restoring a 60 GB database from a 2-node appliance (30 GB per node) to a 6-node appliance,
Parallel Data Warehouse creates a 180 GB database (6 nodes with 30 GB per node) on the 6-node appliance.
Parallel Data Warehouse initially restores the database to 2 nodes to match the source configuration, and then
redistributes the data to all 6 nodes.
After the redistribution each Compute node will contain less actual data and more free space than each Compute
node on the smaller source appliance. Use the additional space to add more data to the database. If the restored
database size is larger than you need, you can use ALTER DATABASE (Parallel Data Warehouse) to shrink the
database file sizes.
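As a sketch, the post-restore log cleanup might look like the following; the database name is hypothetical, and the
SIZE argument is assumed from the DBCC SHRINKLOG (Azure SQL Data Warehouse) documentation:

USE SalesInvoices2013;
-- Reclaim transaction log space that grew during redistribution.
DBCC SHRINKLOG ( SIZE = DEFAULT );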

Limitations and Restrictions


For these limitations and restrictions, the source appliance is the appliance from which the database backup was
created, and the target appliance is the appliance to which the database will be restored.
Restoring a database does not automatically rebuild statistics.
Only one RESTORE DATABASE or BACKUP DATABASE statement can be running on the appliance at any given
time. If multiple backup and restore statements are submitted concurrently, the appliance will put them into a
queue and process them one at a time.
You can only restore a database backup to a Parallel Data Warehouse target appliance that has the same number
or more Compute nodes than the source appliance. The target appliance cannot have fewer Compute nodes than
the source appliance.
You cannot restore a backup that was created on an appliance that has SQL Server 2012 PDW hardware to an
appliance that has SQL Server 2008 R2 hardware. This holds true even if the appliance was originally purchased
with the SQL Server 2008 R2 PDW hardware and is now running SQL Server 2012 PDW software.
Locking
Takes an exclusive lock on the DATABASE object.

Examples
A. Simple RESTORE examples
The following example restores a full backup to the SalesInvoices2013 database. The backup files are stored in the
\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full directory. The SalesInvoices2013 database cannot already exist
on the target appliance or this command will fail with an error.

RESTORE DATABASE SalesInvoices2013
FROM DISK = '\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full';

B. Restore a full and differential backup
The following example restores a full, and then a differential, backup to the SalesInvoices2013 database.
The full backup of the database is restored from the full backup stored in the
'\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full' directory. If the restore completes successfully, the differential
backup is restored to the SalesInvoices2013 database. The differential backup is stored in the
'\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Diff' directory.

RESTORE DATABASE SalesInvoices2013
FROM DISK = '\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Diff'
WITH BASE = '\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full';

C. Restoring the backup header
This example restores the header information for the database backup
'\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full'. The command results in one row of information for the
Invoices2013Full backup.

RESTORE HEADERONLY
FROM DISK = '\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full';

You can use the header information to check the contents of a backup, or to make sure the target restoration
appliance is compatible with the source backup appliance before attempting to restore the backup.

See Also
BACKUP DATABASE (Parallel Data Warehouse)
RESTORE Statements - Arguments (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This topic documents the arguments that are described in the Syntax sections of the RESTORE
{DATABASE|LOG} statement and of the associated set of auxiliary statements: RESTORE FILELISTONLY,
RESTORE HEADERONLY, RESTORE LABELONLY, RESTORE REWINDONLY, and RESTORE VERIFYONLY.
Most of the arguments are supported by only a subset of these six statements. The support for each argument is
indicated in the description of the argument.
Transact-SQL Syntax Conventions

Syntax
For syntax, see the following topics:
RESTORE (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)

Arguments
DATABASE
Supported by: RESTORE
Specifies the target database. If a list of files and filegroups is specified, only those files and filegroups are
restored.
For a database using the full or bulk-logged recovery model, SQL Server requires in most cases that you back
up the tail of the log before restoring the database. Restoring a database without first backing up the tail of the
log results in an error, unless the RESTORE DATABASE statement contains either the WITH REPLACE or the
WITH STOPAT clause, which must specify a time or transaction that occurred after the end of the data backup.
For more information about tail-log backups, see Tail-Log Backups (SQL Server).
LOG
Supported by: RESTORE
Specifies that a transaction log backup is to be applied to this database. Transaction logs must be applied in
sequential order. SQL Server checks the backed up transaction log to ensure that the transactions are being
loaded into the correct database and in the correct sequence. To apply multiple transaction logs, use the
NORECOVERY option on all restore operations except the last.
NOTE
Typically, the last log restored is the tail-log backup. A tail-log backup is a log backup taken right before restoring a
database, typically after a failure on the database. Taking a tail-log backup from the possibly damaged database prevents
work loss by capturing the log that has not yet been backed up (the tail of the log). For more information, see Tail-Log
Backups (SQL Server).

For more information, see Apply Transaction Log Backups (SQL Server).
{ database_name | @database_name_var}
Supported by: RESTORE
Is the database that the log or complete database is restored into. If supplied as a variable
(@database_name_var), this name can be specified either as a string constant (@database_name_var =
database_name) or as a variable of character string data type, except for the ntext or text data types.
<file_or_filegroup_or_page> [ ,...n ]
Supported by: RESTORE
Specifies the name of a logical file or filegroup or page to include in a RESTORE DATABASE or RESTORE LOG
statement. You can specify a list of files or filegroups.
For a database that uses the simple recovery model, the FILE and FILEGROUP options are allowed only if the
target files or filegroups are read only, or if this is a PARTIAL restore (which results in a defunct filegroup).
For a database that uses the full or bulk-logged recovery model, after using RESTORE DATABASE to restore one
or more files, filegroups, and/or pages, typically, you must apply the transaction log to the files containing the
restored data; applying the log makes those files consistent with the rest of the database. The exceptions to this
are as follows:
If the files being restored were read-only before they were last backed up—then a transaction log does
not have to be applied, and the RESTORE statement informs you of this situation.
If the backup contains the primary filegroup and a partial restore is being performed. In this case, the
restore log is not needed because the log is restored automatically from the backup set.
FILE = { logical_file_name_in_backup| @logical_file_name_in_backup_var}
Names a file to include in the database restore.
FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
Names a filegroup to include in the database restore.

NOTE
FILEGROUP is allowed in simple recovery model only if the specified filegroup is read-only and this is a partial
restore (that is, if WITH PARTIAL is used). Any unrestored read-write filegroups are marked as defunct and
cannot subsequently be restored into the resulting database.
READ_WRITE_FILEGROUPS
Selects all read-write filegroups. This option is particularly useful when you have read-only filegroups and you
want to restore the read-write filegroups before the read-only filegroups.
PAGE = 'file:page [ ,...n ]'
Specifies a list of one or more pages for a page restore (which is supported only for databases using the full or
bulk-logged recovery models). The values are as follows:
PAGE
Indicates a list of one or more files and pages.
file
Is the file ID of the file containing a specific page to be restored.
page
Is the page ID of the page to be restored in the file.
n
Is a placeholder indicating that multiple pages can be specified.
The maximum number of pages that can be restored into any single file in a restore sequence is 1000. However,
if you have more than a small number of damaged pages in a file, consider restoring the whole file instead of the
pages.

NOTE
Page restores are never recovered.

For more information about page restore, see Restore Pages (SQL Server).
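For example, a minimal page-restore sketch follows; the database name, page IDs, and backup path are
hypothetical, and the log must still be applied afterward to bring the restored pages current:

RESTORE DATABASE AdventureWorks
PAGE = '1:57, 1:202'
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
WITH NORECOVERY;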
[ ,...n ]
Is a placeholder indicating that multiple files and filegroups and pages can be specified in a comma-separated
list. The number is unlimited.
FROM { <backup_device> [ ,...n ] | <database_snapshot> }
Typically, specifies the backup devices from which to restore the backup. Alternatively, in a RESTORE DATABASE
statement, the FROM clause can specify the name of a database snapshot to which you are reverting the
database, in which case, no WITH clause is permitted.
If the FROM clause is omitted, the restore of a backup does not take place. Instead, the database is recovered.
This allows you to recover a database that has been restored with the NORECOVERY option or to switch over to
a standby server. If the FROM clause is omitted, NORECOVERY, RECOVERY, or STANDBY must be specified in
the WITH clause.
<backup_device> [ ,...n ]
Specifies the logical or physical backup devices to use for the restore operation.
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY,
RESTORE REWINDONLY, and RESTORE VERIFYONLY.
<backup_device>::=
Specifies a logical or physical backup device to use for the restore operation, as follows:
{ logical_backup_device_name | @logical_backup_device_name_var }
Is the logical name, which must follow the rules for identifiers, of the backup device(s) created by
sp_addumpdevice from which the database is restored. If supplied as a variable
(@logical_backup_device_name_var), the backup device name can be specified either as a string constant
(@logical_backup_device_name_var = logical_backup_device_name) or as a variable of character string data
type, except for the ntext or text data types.
{ DISK | TAPE } = { 'physical_backup_device_name' | @physical_backup_device_name_var }
Allows backups to be restored from the named disk or tape device. The device types of disk and tape should be
specified with the actual name (for example, complete path and file name) of the device:
DISK ='Z:\SQLServerBackups\AdventureWorks.bak' or TAPE ='\\\\.\TAPE0'. If specified as a variable
(@physical_backup_device_name_var), the device name can be specified either as a string constant
(@physical_backup_device_name_var = 'physical_backup_device_name') or as a variable of character string data
type, except for the ntext or text data types.
If using a network server with a UNC name (which must contain machine name), specify a device type of disk.
For more information about how to use UNC names, see Backup Devices (SQL Server).
The account under which you are running SQL Server must have READ access to the remote computer or
network server in order to perform a RESTORE operation.
n
Is a placeholder indicating that up to 64 backup devices may be specified in a comma-separated list.
Whether a restore sequence requires as many backup devices as were used to create the media set to which the
backups belong, depends on whether the restore is offline or online, as follows:
Offline restore allows a backup to be restored using fewer devices than were used to create the backup.
Online restore requires all the backup devices of the backup. An attempt to restore with fewer devices
fails.
For example, consider a case in which a database was backed up to four tape drives connected to the
server. An online restore requires that you have four drives connected to the server; an offline restore
allows you to restore the backup if there are fewer than four drives on the machine.

NOTE
When you are restoring a backup from a mirrored media set, you can specify only a single mirror for each media family. In
the presence of errors, however, having the other mirrors enables some restore problems to be resolved quickly. You can
substitute a damaged media volume with the corresponding volume from another mirror. Be aware that for offline restores
you can restore from fewer devices than media families, but each family is processed only once.

<database_snapshot>::=
Supported by: RESTORE DATABASE
DATABASE_SNAPSHOT =database_snapshot_name
Reverts the database to the database snapshot specified by database_snapshot_name. The
DATABASE_SNAPSHOT option is available only for a full database restore. In a revert operation, the database
snapshot takes the place of a full database backup.
A revert operation requires that the specified database snapshot is the only one on the database. During the
revert operation, the database snapshot and the destination database are both marked as In restore. For
more information, see the "Remarks" section in RESTORE DATABASE.
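For example, a revert to a snapshot might look like the following sketch; the database and snapshot names are
hypothetical:

RESTORE DATABASE AdventureWorks
FROM DATABASE_SNAPSHOT = 'AdventureWorks_snapshot';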
WITH Options
Specifies the options to be used by a restore operation. For a summary of which statements use each option, see
"Summary of Support for WITH Options," later in this topic.

NOTE
WITH options are organized here in the same order as in the "Syntax" section in RESTORE {DATABASE|LOG}.

PARTIAL
Supported by: RESTORE DATABASE
Specifies a partial restore operation that restores the primary filegroup and any specified secondary filegroup(s).
The PARTIAL option implicitly selects the primary filegroup; specifying FILEGROUP = 'PRIMARY' is
unnecessary. To restore a secondary filegroup, you must explicitly specify the filegroup using the FILE option or
FILEGROUP option.
The PARTIAL option is not allowed on RESTORE LOG statements.
The PARTIAL option starts the initial stage of a piecemeal restore, which allows remaining filegroups to be
restored at a later time. For more information, see Piecemeal Restores (SQL Server).
[ RECOVERY | NORECOVERY | STANDBY ]
Supported by: RESTORE
RECOVERY
Instructs the restore operation to roll back any uncommitted transactions. After the recovery process, the
database is ready for use. If neither NORECOVERY, RECOVERY, nor STANDBY is specified, RECOVERY is the
default.
If subsequent RESTORE operations (RESTORE LOG, or RESTORE DATABASE from differential) are planned,
NORECOVERY or STANDBY should be specified instead.
When restoring backup sets from an earlier version of SQL Server, a database upgrade might be required. This
upgrade is performed automatically when WITH RECOVERY is specified. For more information, see Apply
Transaction Log Backups (SQL Server).

NOTE
If the FROM clause is omitted, NORECOVERY, RECOVERY, or STANDBY must be specified in the WITH clause.

NORECOVERY
Instructs the restore operation to not roll back any uncommitted transactions. If another transaction log has to
be applied later, specify either the NORECOVERY or STANDBY option. If neither NORECOVERY, RECOVERY,
nor STANDBY is specified, RECOVERY is the default. During an offline restore operation using the
NORECOVERY option, the database is not usable.
For restoring a database backup and one or more transaction logs or whenever multiple RESTORE statements
are necessary (for example, when restoring a full database backup followed by a differential database backup),
RESTORE requires the WITH NORECOVERY option on all but the final RESTORE statement. A best practice is
to use WITH NORECOVERY on ALL statements in a multi-step restore sequence until the desired recovery
point is reached, and then to use a separate RESTORE WITH RECOVERY statement for recovery only.
When used with a file or filegroup restore operation, NORECOVERY forces the database to remain in the
restoring state after the restore operation. This is useful in either of these situations:
A restore script is being run and the log is always being applied.
A sequence of file restores is used and the database is not intended to be usable between two of the
restore operations.
In some cases RESTORE WITH NORECOVERY rolls the roll forward set far enough forward that it is
consistent with the database. In such cases, roll back does not occur and the data remains offline, as
expected with this option. However, the Database Engine issues an informational message that states that
the roll-forward set can now be recovered by using the RECOVERY option.
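As an illustration of this practice, a minimal multi-step restore sequence might look like the following sketch; the
database name and backup paths are hypothetical:

RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
WITH NORECOVERY;
RESTORE LOG AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks_Log.trn'
WITH NORECOVERY;
-- Recovery-only step: no FROM clause, so no data is copied.
RESTORE DATABASE AdventureWorks WITH RECOVERY;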
STANDBY =standby_file_name
Specifies a standby file that allows the recovery effects to be undone. The STANDBY option is allowed for offline
restore (including partial restore). The option is disallowed for online restore. Attempting to specify the
STANDBY option for an online restore operation causes the restore operation to fail. STANDBY is also not
allowed when a database upgrade is necessary.
The standby file is used to keep a "copy-on-write" pre-image for pages modified during the undo pass of a
RESTORE WITH STANDBY. The standby file allows a database to be brought up for read-only access between
transaction log restores and can be used with either warm standby server situations or special recovery
situations in which it is useful to inspect the database between log restores. After a RESTORE WITH STANDBY
operation, the undo file is automatically deleted by the next RESTORE operation. If this standby file is manually
deleted before the next RESTORE operation, then the entire database must be re-restored. While the database is
in the STANDBY state, you should treat this standby file with the same care as any other database file. Unlike
other database files, this file is only kept open by the Database Engine during active restore operations.
The standby_file_name specifies a standby file whose location is stored in the log of the database. If an existing
file is using the specified name, the file is overwritten; otherwise, the Database Engine creates the file.
The size requirement of a given standby file depends on the volume of undo actions resulting from uncommitted
transactions during the restore operation.

IMPORTANT
If free disk space is exhausted on the drive containing the specified standby file name, the restore operation stops.

For a comparison of RECOVERY and NORECOVERY, see the "Remarks" section in RESTORE.
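For example, a log restore that leaves the database readable between restores might look like the following
sketch; the names and paths are hypothetical:

RESTORE LOG AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks_Log.trn'
WITH STANDBY = 'Z:\SQLServerBackups\AdventureWorks_undo.dat';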
LOADHISTORY
Supported by: RESTORE VERIFYONLY
Specifies that the restore operation loads the information into the msdb history tables. The LOADHISTORY
option loads information, for the single backup set being verified, about SQL Server backups stored on the
media set to the backup and restore history tables in the msdb database. For more information about history
tables, see System Tables (Transact-SQL).
<general_WITH_options> [ ,...n ]
The general WITH options are all supported in RESTORE DATABASE and RESTORE LOG statements. Some of
these options are also supported by one or more auxiliary statements, as noted below.
Restore Operation Options

These options affect the behavior of the restore operation.


MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name' [ ...n ]
Supported by: RESTORE and RESTORE VERIFYONLY
Specifies that the data or log file whose logical name is specified by logical_file_name_in_backup should be
moved by restoring it to the location specified by operating_system_file_name. The logical file name of a data or
log file in a backup set matches its logical name in the database when the backup set was created.
n is a placeholder indicating that you can specify additional MOVE statements. Specify a MOVE statement for
every logical file you want to restore from the backup set to a new location. By default, the
logical_file_name_in_backup file is restored to its original location.

NOTE
To obtain a list of the logical files from the backup set, use RESTORE FILELISTONLY.

If a RESTORE statement is used to relocate a database on the same server or copy it to a different server, the
MOVE option might be necessary to relocate the database files and to avoid collisions with existing files.
When used with RESTORE LOG, the MOVE option can be used only to relocate files that were added during the
interval covered by the log being restored. For example, if the log backup contains an add file operation for file
file23, this file may be relocated using the MOVE option on RESTORE LOG.

When used with SQL Server Snapshot Backup, the MOVE option can be used only to relocate files to an Azure
blob within the same storage account as the original blob. The MOVE option cannot be used to restore the
snapshot backup to a local file or to a different storage account.
If a RESTORE VERIFYONLY statement is used when you plan to relocate a database on the same server or copy
it to a different server, the MOVE option might be necessary to verify that sufficient space is available in the
target and to identify potential collisions with existing files.
For more information, see Copy Databases with Backup and Restore.
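For example, a relocation sketch follows; the database name, logical file names, and target paths are hypothetical
(use RESTORE FILELISTONLY to find the actual logical names in a backup set):

RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
WITH MOVE 'AdventureWorks_Data' TO 'D:\SQLData\AdventureWorks.mdf',
MOVE 'AdventureWorks_Log' TO 'E:\SQLLogs\AdventureWorks.ldf',
RECOVERY;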
CREDENTIAL
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
Applies to: SQL Server 2012 (11.x) SP1 CU2 through SQL Server 2017
Used only when restoring a backup from the Microsoft Azure Blob storage service.

NOTE
With SQL Server 2012 (11.x) SP1 CU2 until SQL Server 2016 (13.x), you can only restore from a single device when
restoring from URL. In order to restore from multiple devices when restoring from URL, you must use SQL Server
2016 (13.x) or later, and you must use Shared Access Signature (SAS) tokens. For more information, see Enable
SQL Server Managed Backup to Microsoft Azure and Simplifying creation of SQL Credentials with Shared Access
Signature (SAS) tokens on Azure Storage with Powershell.

REPLACE
Supported by: RESTORE
Specifies that SQL Server should create the specified database and its related files even if another database
already exists with the same name. In such a case, the existing database is deleted. When the REPLACE option is
not specified, a safety check occurs. This prevents overwriting a different database by accident. The safety check
ensures that the RESTORE DATABASE statement does not restore the database to the current server if the
following conditions both exist:
The database named in the RESTORE statement already exists on the current server, and
The database name is different from the database name recorded in the backup set.
REPLACE also allows RESTORE to overwrite an existing file that cannot be verified as belonging to the
database being restored. Normally, RESTORE refuses to overwrite pre-existing files. WITH REPLACE can
also be used in the same way for the RESTORE LOG option.
REPLACE also overrides the requirement that you back up the tail of the log before restoring the
database.
For information about the impact of using the REPLACE option, see RESTORE (Transact-SQL).
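For example, an overwrite of an existing database might look like the following sketch; the database name and
backup path are hypothetical:

RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
WITH REPLACE, RECOVERY;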
RESTART
Supported by: RESTORE
Specifies that SQL Server should restart a restore operation that has been interrupted. RESTART restarts the
restore operation at the point it was interrupted.
RESTRICTED_USER
Supported by: RESTORE.
Restricts access for the newly restored database to members of the db_owner, dbcreator, or sysadmin roles.
RESTRICTED_USER replaces the DBO_ONLY option. DBO_ONLY has been discontinued with SQL Server
2008.
Use with the RECOVERY option.
Backup Set Options

These options operate on the backup set containing the backup to be restored.
FILE ={ backup_set_file_number | @backup_set_file_number }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, and RESTORE VERIFYONLY.
Identifies the backup set to be restored. For example, a backup_set_file_number of 1 indicates the first backup set
on the backup medium and a backup_set_file_number of 2 indicates the second backup set. You can obtain the
backup_set_file_number of a backup set by using the RESTORE HEADERONLY statement.
When not specified, the default is 1, except for RESTORE HEADERONLY in which case all backup sets in the
media set are processed. For more information, see "Specifying a Backup Set," later in this topic.
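For example, restoring the second backup set on a device might look like the following sketch; the database name
and backup path are hypothetical:

RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
WITH FILE = 2, RECOVERY;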

IMPORTANT
This FILE option is unrelated to the FILE option for specifying a database file, FILE = { logical_file_name_in_backup |
@logical_file_name_in_backup_var }.

PASSWORD = { password | @password_variable }


Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, and RESTORE VERIFYONLY.
Supplies the password of the backup set. A backup-set password is a character string.

NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development
work, and plan to modify applications that currently use this feature.

If a password was specified when the backup set was created, that password is required to perform any restore
operation from the backup set. It is an error to specify the wrong password or to specify a password if the
backup set does not have one.

IMPORTANT
This password provides only weak protection for the media set. For more information, see the Permissions section for the
relevant statement.

Media Set Options

These options operate on the media set as a whole.


MEDIANAME = { media_name | @media_name_variable}
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
Specifies the name for the media. If provided, the media name must match the media name on the backup
volumes; otherwise, the restore operation terminates. If no media name is given in the RESTORE statement, the
check for a matching media name on the backup volumes is not performed.

IMPORTANT
Consistently using media names in backup and restore operations provides an extra safety check for the media selected for
the restore operation.

MEDIAPASSWORD = { mediapassword | @mediapassword_variable }


Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
Supplies the password of the media set. A media-set password is a character string.
NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development
work, and plan to modify applications that currently use this feature.

If a password was provided when the media set was formatted, that password is required to access any backup
set on the media set. It is an error to specify the wrong password or to specify a password if the media set does
not have any.

IMPORTANT
This password provides only weak protection for the media set. For more information, see the "Permissions" section for the
relevant statement.

BLOCKSIZE = { blocksize | @blocksize_variable }


Supported by: RESTORE
Specifies the physical block size, in bytes. The supported sizes are 512, 1024, 2048, 4096, 8192, 16384, 32768,
and 65536 (64 KB) bytes. The default is 65536 for tape devices and 512 otherwise. Typically, this option is
unnecessary because RESTORE automatically selects a block size that is appropriate to the device. Explicitly
stating a block size overrides the automatic selection of block size.
If you are restoring a backup from a CD-ROM, specify BLOCKSIZE=2048.

NOTE
This option typically affects performance only when reading from tape devices.
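For example, a restore from CD-ROM media might look like the following sketch; the database name and path
are hypothetical:

RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\AdventureWorks.bak'
WITH BLOCKSIZE = 2048;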

Data Transfer Options

These options enable you to optimize data transfer from the backup device.
BUFFERCOUNT = { buffercount | @buffercount_variable }
Supported by: RESTORE
Specifies the total number of I/O buffers to be used for the restore operation. You can specify any positive
integer; however, large numbers of buffers might cause "out of memory" errors because of inadequate virtual
address space in the Sqlservr.exe process.
The total space used by the buffers is determined by: buffercount * maxtransfersize.
MAXTRANSFERSIZE = { maxtransfersize | @maxtransfersize_variable }
Supported by: RESTORE
Specifies the largest unit of transfer in bytes to be used between the backup media and SQL Server. The possible
values are multiples of 65536 bytes (64 KB) ranging up to 4194304 bytes (4 MB).

NOTE
When the database has FILESTREAM configured, or includes In-Memory OLTP filegroups, the MAXTRANSFERSIZE
at the time of restore should be greater than or equal to the value that was used when the backup was created.
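For example, a restore with tuned data-transfer settings might look like the following sketch; the database name,
path, and specific values are hypothetical (1048576 is a multiple of 65536):

RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
WITH BUFFERCOUNT = 8, MAXTRANSFERSIZE = 1048576;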

Error Management Options

These options allow you to determine whether backup checksums are enabled for the restore operation and
whether the operation stops on encountering an error.
{ CHECKSUM | NO_CHECKSUM }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
The default behavior is to verify checksums if they are present and proceed without verification if they are not
present.
CHECKSUM
Specifies that backup checksums must be verified and, if the backup lacks backup checksums, causes the restore
operation to fail with a message indicating that checksums are not present.

NOTE
Page checksums are relevant to backup operations only if backup checksums are used.

By default, on encountering an invalid checksum, RESTORE reports a checksum error and stops. However, if you
specify CONTINUE_AFTER_ERROR, RESTORE proceeds after returning a checksum error and the number of
the page containing the invalid checksum, if the corruption permits.
For more information about working with backup checksums, see Possible Media Errors During Backup and
Restore (SQL Server).
NO_CHECKSUM
Explicitly disables the validation of checksums by the restore operation.
{ STOP_ON_ERROR | CONTINUE_AFTER_ERROR }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
STOP_ON_ERROR
Specifies that the restore operation stops with the first error encountered. This is the default behavior for
RESTORE, except for VERIFYONLY which has CONTINUE_AFTER_ERROR as the default.
CONTINUE_AFTER_ERROR
Specifies that the restore operation is to continue after an error is encountered.
If a backup contains damaged pages, it is best to repeat the restore operation using an alternative backup that
does not contain the errors—for example, a backup taken before the pages were damaged. As a last resort,
however, you can restore a damaged backup using the CONTINUE_AFTER_ERROR option of the restore
statement and try to salvage the data.
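For example, a salvage attempt from a damaged backup might look like the following sketch; the database name
and path are hypothetical:

RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
WITH CHECKSUM, CONTINUE_AFTER_ERROR;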
FILESTREAM Options

FILESTREAM ( DIRECTORY_NAME =directory_name )


Supported by: RESTORE and RESTORE VERIFYONLY
Applies to: SQL Server 2012 (11.x) through SQL Server 2017
A Windows-compatible directory name. This name should be unique among all the database-level FILESTREAM
directory names in the SQL Server instance. Uniqueness comparison is done in a case-insensitive fashion,
regardless of SQL Server collation settings.
Monitoring Options

These options enable you to monitor the transfer of data from the backup device.
STATS [ = percentage ]
Supported by: RESTORE and RESTORE VERIFYONLY
Displays a message each time another percentage completes, and is used to gauge progress. If percentage is
omitted, SQL Server displays a message after each 10 percent is completed (approximately).
The STATS option reports the percentage complete as of the threshold for reporting the next interval. This is at
approximately the specified percentage; for example, with STATS=10, the Database Engine reports at
approximately 10 percent intervals; for instance, instead of displaying precisely 40%, the option might display
43%. For large backup sets, this is not a problem because the percentage complete moves very slowly between
completed I/O calls.
Tape Options

These options are used only for TAPE devices. If a non-tape device is being used, these options are ignored.
{ REWIND | NOREWIND }
REWIND
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
Specifies that SQL Server release and rewind the tape. REWIND is the default.
NOREWIND
Supported by: RESTORE and RESTORE VERIFYONLY
Specifying NOREWIND in any other restore statement generates an error.
Specifies that SQL Server will keep the tape open after the backup operation. You can use this option to improve
performance when performing multiple backup operations to a tape.
NOREWIND implies NOUNLOAD, and these options are incompatible within a single RESTORE statement.

NOTE
If you use NOREWIND, the instance of SQL Server retains ownership of the tape drive until a BACKUP or RESTORE
statement running in the same process uses either the REWIND or UNLOAD option, or the server instance is shut down.
Keeping the tape open prevents other processes from accessing the tape. For information about how to display a list of
open tapes and to close an open tape, see Backup Devices (SQL Server).

{ UNLOAD | NOUNLOAD }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY,
RESTORE REWINDONLY, and RESTORE VERIFYONLY.
These options are used only for TAPE devices. If a non-tape device is being used, these options are ignored.

NOTE
UNLOAD/NOUNLOAD is a session setting that persists for the life of the session or until it is reset by specifying the
alternative.

UNLOAD
Specifies that the tape is automatically rewound and unloaded when the backup is finished. UNLOAD is the
default when a session begins.
NOUNLOAD
Specifies that after the RESTORE operation the tape remains loaded on the tape drive.
<replication_WITH_option>
This option is relevant only if the database was replicated when the backup was created.
KEEP_REPLICATION
Supported by: RESTORE
Use KEEP_REPLICATION when setting up replication to work with log shipping. It prevents replication settings
from being removed when a database backup or log backup is restored on a warm standby server and the
database is recovered. Specifying this option when restoring a backup with the NORECOVERY option is not
permitted. To ensure replication functions properly after restore:
The msdb and master databases at the warm standby server must be in sync with the msdb and master
databases at the primary server.
The warm standby server must be renamed to use the same name as the primary server.
<change_data_capture_WITH_option>
This option is relevant only if the database was enabled for change data capture when the backup was created.
KEEP_CDC
Supported by: RESTORE
KEEP_CDC should be used to prevent change data capture settings from being removed when a database
backup or log backup is restored on another server and the database is recovered. Specifying this option when
restoring a backup with the NORECOVERY option is not permitted.
Restoring the database with KEEP_CDC does not create the change data capture jobs. To extract changes from
the log after restoring the database, recreate the capture process job and the cleanup job for the restored
database. For information, see sys.sp_cdc_add_job (Transact-SQL).
For information about using change data capture with database mirroring, see Change Data Capture and Other
SQL Server Features.
<service_broker_WITH_options>
Turns Service Broker message delivery on or off or sets a new Service Broker identifier. This option is relevant
only if Service Broker was enabled (activated) for the database when the backup was created.
{ ENABLE_BROKER | ERROR_BROKER_CONVERSATIONS | NEW_BROKER }
Supported by: RESTORE DATABASE
ENABLE_BROKER
Specifies that Service Broker message delivery is enabled at the end of the restore so that messages can be sent
immediately. By default Service Broker message delivery is disabled during a restore. The database retains the
existing Service Broker identifier.
ERROR_BROKER_CONVERSATIONS
Ends all conversations with an error stating that the database is attached or restored. This enables your
applications to perform regular clean up for existing conversations. Service Broker message delivery is disabled
until this operation is completed, and then it is enabled. The database retains the existing Service Broker
identifier.
NEW_BROKER
Specifies that the database be assigned a new Service Broker identifier. Because the database is considered to be
a new Service Broker, existing conversations in the database are immediately removed without producing end
dialog messages. Any route referencing the old Service Broker identifier must be recreated with the new
identifier.
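For example, restoring a copy of a database with a new Service Broker identifier might look like the following
sketch; the database name and path are hypothetical:

RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
WITH NEW_BROKER, RECOVERY;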
<point_in_time_WITH_options>
Supported by: RESTORE {DATABASE|LOG} and only for the full or bulk-logged recovery models.
You can restore a database to a specific point in time or transaction, by specifying the target recovery point in a
STOPAT, STOPATMARK, or STOPBEFOREMARK clause. A specified time or transaction is always restored from
a log backup. In every RESTORE LOG statement of the restore sequence, you must specify your target time or
transaction in an identical STOPAT, STOPATMARK, or STOPBEFOREMARK clause.
As a prerequisite to a point-in-time restore, you must first restore a full database backup whose end point is
earlier than your target recovery point. To help you identify which database backup to restore, you can optionally
specify your WITH STOPAT, STOPATMARK, or STOPBEFOREMARK clause in a RESTORE DATABASE
statement to raise an error if a data backup is too recent for the specified target time. But the complete data
backup is always restored, even if it contains the target time.

NOTE
The RESTORE DATABASE and RESTORE LOG point-in-time WITH options are similar, but only RESTORE LOG
supports the mark_name argument.

{ STOPAT | STOPATMARK | STOPBEFOREMARK }


STOPAT = { 'datetime' | @datetime_var }
Specifies that the database be restored to the state it was in as of the date and time specified by the datetime or
@datetime_var parameter. For information about specifying a date and time, see Date and Time Data Types and
Functions (Transact-SQL).
If a variable is used for STOPAT, the variable must be varchar, char, smalldatetime, or datetime data type.
Only transaction log records written before the specified date and time are applied to the database.

NOTE
If the specified STOPAT time is after the last LOG backup, the database is left in the unrecovered state, just as if RESTORE
LOG ran with NORECOVERY.

For more information, see Restore a SQL Server Database to a Point in Time (Full Recovery Model).
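For example, a point-in-time restore sequence might look like the following sketch; the database name, paths,
and target time are hypothetical:

RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
WITH NORECOVERY;
RESTORE LOG AdventureWorks
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks_Log.trn'
WITH STOPAT = '2018-04-15 12:00:00', RECOVERY;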
STOPATMARK = { 'mark_name' | 'lsn:lsn_number' } [ AFTER 'datetime' ]
Specifies recovery to a specified recovery point. The specified transaction is included in the recovery, but it is
committed only if it was originally committed when the transaction was actually generated.
Both RESTORE DATABASE and RESTORE LOG support the lsn_number parameter. This parameter specifies a
log sequence number.
The mark_name parameter is supported only by the RESTORE LOG statement. This parameter identifies a
transaction mark in the log backup.
In a RESTORE LOG statement, if AFTER datetime is omitted, recovery stops at the first mark with the specified
name. If AFTER datetime is specified, recovery stops at the first mark having the specified name exactly at or
after datetime.

NOTE
If the specified mark, LSN, or time is after the last LOG backup, the database is left in the unrecovered state, just as if
RESTORE LOG ran with NORECOVERY.

For more information, see Use Marked Transactions to Recover Related Databases Consistently (Full Recovery
Model) and Recover to a Log Sequence Number (SQL Server).
STOPBEFOREMARK = { 'mark_name' | 'lsn:lsn_number' } [ AFTER 'datetime' ]
Specifies recovery up to a specified recovery point. The specified transaction is not included in the recovery, and
is rolled back when WITH RECOVERY is used.
Both RESTORE DATABASE and RESTORE LOG support the lsn_number parameter. This parameter specifies a
log sequence number.
The mark_name parameter is supported only by the RESTORE LOG statement. This parameter identifies a
transaction mark in the log backup.
In a RESTORE LOG statement, if AFTER datetime is omitted, recovery stops just before the first mark with the
specified name. If AFTER datetime is specified, recovery stops just before the first mark having the specified
name exactly at or after datetime.

IMPORTANT
If a partial restore sequence excludes any FILESTREAM filegroup, point-in-time restore is not supported. You can force the
restore sequence to continue. However, the FILESTREAM filegroups that are omitted from the RESTORE statement can
never be restored. To force a point-in-time restore, specify the CONTINUE_AFTER_ERROR option together with the STOPAT,
STOPATMARK, or STOPBEFOREMARK option. If you specify CONTINUE_AFTER_ERROR, the partial restore sequence
succeeds and the FILESTREAM filegroup becomes unrecoverable.

Result Sets
For result sets, see the following topics:
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)

Remarks
For additional remarks, see the following topics:
RESTORE (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)

Specifying a Backup Set


A backup set contains the backup from a single, successful backup operation. RESTORE, RESTORE
FILELISTONLY, RESTORE HEADERONLY, and RESTORE VERIFYONLY statements operate on a single backup
set within the media set on the specified backup device or devices. You should specify the backup you need from
within the media set. You can obtain the backup_set_file_number of a backup set by using the RESTORE
HEADERONLY statement.
The option for specifying the backup set to restore is:
FILE ={ backup_set_file_number | @backup_set_file_number }
Where backup_set_file_number indicates the position of the backup in the media set. A backup_set_file_number
of 1 (FILE = 1) indicates the first backup set on the backup medium and a backup_set_file_number of 2 (FILE =
2) indicates the second backup set, and so on.
The behavior of this option varies depending on the statement, as described in the following table:
STATEMENT BEHAVIOR OF BACKUP-SET FILE OPTION

RESTORE The default backup set file number is 1. Only one backup-set
FILE option is allowed in a RESTORE statement. It is
important to specify backup sets in order.

RESTORE FILELISTONLY The default backup set file number is 1.

RESTORE HEADERONLY By default, all backup sets in the media set are processed.
The RESTORE HEADERONLY result set returns information
about each backup set, including its Position in the media
set. To return information on a given backup set, use its
position number as the backup_set_file_number value in the
FILE option.

Note: For tape media, RESTORE HEADERONLY processes only
backup sets on the loaded tape.

RESTORE VERIFYONLY The default backup_set_file_number is 1.

NOTE
The FILE option for specifying a backup set is unrelated to the FILE option for specifying a database file, FILE = {
logical_file_name_in_backup | @logical_file_name_in_backup_var }.

Summary of Support for WITH Options


The following WITH options are supported by only the RESTORE statement: BLOCKSIZE, BUFFERCOUNT,
MAXTRANSFERSIZE, PARTIAL, KEEP_REPLICATION, { RECOVERY | NORECOVERY | STANDBY }, REPLACE,
RESTART, RESTRICTED_USER, and { STOPAT | STOPATMARK | STOPBEFOREMARK }.

NOTE
The PARTIAL option is supported only by RESTORE DATABASE.

The following table lists the WITH options that are used by one or more statements and indicates which
statements support each option. A check mark (√) indicates that an option is supported; a dash (—) indicates that
an option is not supported.

                                     RESTORE        RESTORE      RESTORE      RESTORE      RESTORE
WITH OPTION                RESTORE   FILELISTONLY   HEADERONLY   LABELONLY    REWINDONLY   VERIFYONLY

{ CHECKSUM |
NO_CHECKSUM }              √         √              √            √            —            √

{ CONTINUE_AFTER_ERROR |
STOP_ON_ERROR }            √         √              √            √            —            √

FILE 1                     √         √              √            —            —            √

LOADHISTORY                —         —              —            —            —            √

MEDIANAME                  √         √              √            √            —            √

MEDIAPASSWORD              √         √              √            √            —            √

MOVE                       √         —              —            —            —            √

PASSWORD                   √         √              √            —            —            √

{ REWIND | NOREWIND }      √         Only REWIND    Only REWIND  Only REWIND  —            √

STATS                      √         —              —            —            —            √

{ UNLOAD | NOUNLOAD }      √         √              √            √            √            √

1 FILE = backup_set_file_number, which is distinct from { FILE | FILEGROUP }.

Permissions
For permissions, see the following topics:
RESTORE (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)

Examples
For examples, see the following topics:
RESTORE (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)

See Also
BACKUP (Transact-SQL)
RESTORE (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
Back Up and Restore of SQL Server Databases
FILESTREAM (SQL Server)
RESTORE Statements - FILELISTONLY (Transact-
SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Returns a result set containing a list of the database and log files contained in the backup set in SQL Server.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

NOTE
For the descriptions of the arguments, see RESTORE Arguments (Transact-SQL).

Transact-SQL Syntax Conventions

Syntax
RESTORE FILELISTONLY
FROM <backup_device>
[ WITH
{
--Backup Set Options
FILE = { backup_set_file_number | @backup_set_file_number }
| PASSWORD = { password | @password_variable }

--Media Set Options
| MEDIANAME = { media_name | @media_name_variable }
| MEDIAPASSWORD = { mediapassword | @mediapassword_variable }

--Error Management Options
| { CHECKSUM | NO_CHECKSUM }
| { STOP_ON_ERROR | CONTINUE_AFTER_ERROR }

--Tape Options
| { REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }
} [ ,...n ]
]
[;]

<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK | TAPE } = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}
Arguments
For descriptions of the RESTORE FILELISTONLY arguments, see RESTORE Arguments (Transact-SQL).

Result Sets
A client can use RESTORE FILELISTONLY to obtain a list of the files contained in a backup set. This
information is returned as a result set containing one row for each file.

COLUMN NAME            DATA TYPE                 DESCRIPTION

LogicalName            nvarchar(128)             Logical name of the file.

PhysicalName           nvarchar(260)             Physical or operating-system name of the file.

Type                   char(1)                   The type of file, one of:
                                                 L = Microsoft SQL Server log file
                                                 D = SQL Server data file
                                                 F = Full Text Catalog
                                                 S = FileStream, FileTable, or In-Memory OLTP container

FileGroupName          nvarchar(128) NULL        Name of the filegroup that contains the file.

Size                   numeric(20,0)             Current size in bytes.

MaxSize                numeric(20,0)             Maximum allowed size in bytes.

FileID                 bigint                    File identifier, unique within the database.

CreateLSN              numeric(25,0)             Log sequence number at which the file was created.

DropLSN                numeric(25,0) NULL        The log sequence number at which the file was dropped. If
                                                 the file has not been dropped, this value is NULL.

UniqueID               uniqueidentifier          Globally unique identifier of the file.

ReadOnlyLSN            numeric(25,0) NULL        Log sequence number at which the filegroup containing the
                                                 file changed from read-write to read-only (the most recent
                                                 change).

ReadWriteLSN           numeric(25,0) NULL        Log sequence number at which the filegroup containing the
                                                 file changed from read-only to read-write (the most recent
                                                 change).

BackupSizeInBytes      bigint                    Size of the backup for this file in bytes.

SourceBlockSize        int                       Block size of the physical device containing the file in
                                                 bytes (not the backup device).

FileGroupID            int                       ID of the filegroup.

LogGroupGUID           uniqueidentifier NULL     NULL.

DifferentialBaseLSN    numeric(25,0) NULL        For differential backups, changes with log sequence numbers
                                                 greater than or equal to DifferentialBaseLSN are included
                                                 in the differential. For other backup types, the value is
                                                 NULL.

DifferentialBaseGUID   uniqueidentifier NULL     For differential backups, the unique identifier of the
                                                 differential base. For other backup types, the value is
                                                 NULL.

IsReadOnly             bit                       1 = The file is read-only.

IsPresent              bit                       1 = The file is present in the backup.

TDEThumbprint          varbinary(32) NULL        Shows the thumbprint of the Database Encryption Key. The
                                                 encryptor thumbprint is a SHA-1 hash of the certificate
                                                 with which the key is encrypted. For information about
                                                 database encryption, see Transparent Data Encryption (TDE).

SnapshotURL            nvarchar(360) NULL        The URL for the Azure snapshot of the database file
                                                 contained in the FILE_SNAPSHOT backup. Returns NULL if no
                                                 FILE_SNAPSHOT backup.

Security
A backup operation may optionally specify passwords for a media set, a backup set, or both. When a
password has been defined on a media set or backup set, you must specify the correct password or passwords
in the RESTORE statement. These passwords prevent unauthorized restore operations and unauthorized
appends of backup sets to media using Microsoft SQL Server tools. However, a password does not prevent
overwrite of media using the BACKUP statement's FORMAT option.
IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server tools
by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. The best practice for
protecting backups is to store backup tapes in a secure location or back up to disk files that are protected by adequate
access control lists (ACLs). The ACLs should be set on the directory root under which backups are created.

Permissions
Beginning in SQL Server 2008, obtaining information about a backup set or backup device requires CREATE
DATABASE permission. For more information, see GRANT Database Permissions (Transact-SQL).

Examples
The following example returns the information from a backup device named AdventureWorksBackups. The
example uses the FILE option to specify the second backup set on the device.

RESTORE FILELISTONLY FROM AdventureWorksBackups
WITH FILE = 2;
GO

See Also
BACKUP (Transact-SQL)
Media Sets, Media Families, and Backup Sets (SQL Server)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
RESTORE Statements - HEADERONLY (Transact-
SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Returns a result set containing all the backup header information for all backup sets on a particular backup
device in SQL Server.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

NOTE
For the descriptions of the arguments, see RESTORE Arguments (Transact-SQL).

Transact-SQL Syntax Conventions

Syntax
RESTORE HEADERONLY
FROM <backup_device>
[ WITH
{
--Backup Set Options
FILE = { backup_set_file_number | @backup_set_file_number }
| PASSWORD = { password | @password_variable }

--Media Set Options


| MEDIANAME = { media_name | @media_name_variable }
| MEDIAPASSWORD = { mediapassword | @mediapassword_variable }

--Error Management Options


| { CHECKSUM | NO_CHECKSUM }
| { STOP_ON_ERROR | CONTINUE_AFTER_ERROR }

--Tape Options
| { REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }
} [ ,...n ]
]
[;]

<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK | TAPE } = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}
Arguments
For descriptions of the RESTORE HEADERONLY arguments, see RESTORE Arguments (Transact-SQL).

Result Sets
For each backup on a given device, the server sends a row of header information with the following columns:

NOTE
RESTORE HEADERONLY looks at all backup sets on the media. Therefore, producing this result set when using high-capacity tape drives can take some time. To get a quick look at the media without getting information about every backup set, use RESTORE LABELONLY or specify FILE = backup_set_file_number.

NOTE
Due to the nature of Microsoft Tape Format, it is possible for backup sets from other software programs to occupy
space on the same media as Microsoft SQL Server backup sets. The result set returned by RESTORE HEADERONLY
includes a row for each of these other backup sets.

| COLUMN NAME | DATA TYPE | DESCRIPTION FOR SQL SERVER BACKUP SETS |
| --- | --- | --- |
| BackupName | nvarchar(128) | Backup set name. |
| BackupDescription | nvarchar(255) | Backup set description. |
| BackupType | smallint | Backup type: 1 = Database; 2 = Transaction log; 4 = File; 5 = Differential database; 6 = Differential file; 7 = Partial; 8 = Differential partial. |
| ExpirationDate | datetime | Expiration date for the backup set. |
| Compressed | BYTE(1) | Whether the backup set is compressed using software-based compression: 0 = No; 1 = Yes. |
| Position | smallint | Position of the backup set in the volume (for use with the FILE = option). |
| DeviceType | tinyint | Number corresponding to the device used for the backup operation. Disk: 2 = Logical, 102 = Physical. Tape: 5 = Logical, 105 = Physical. Virtual Device: 7 = Logical, 107 = Physical. Logical device names and device numbers are in sys.backup_devices; for more information, see sys.backup_devices (Transact-SQL). |
| UserName | nvarchar(128) | User name that performed the backup operation. |
| ServerName | nvarchar(128) | Name of the server that wrote the backup set. |
| DatabaseName | nvarchar(128) | Name of the database that was backed up. |
| DatabaseVersion | int | Version of the database from which the backup was created. |
| DatabaseCreationDate | datetime | Date and time the database was created. |
| BackupSize | numeric(20,0) | Size of the backup, in bytes. |
| FirstLSN | numeric(25,0) | Log sequence number of the first log record in the backup set. |
| LastLSN | numeric(25,0) | Log sequence number of the next log record after the backup set. |
| CheckpointLSN | numeric(25,0) | Log sequence number of the most recent checkpoint at the time the backup was created. |
| DatabaseBackupLSN | numeric(25,0) | Log sequence number of the most recent full database backup. DatabaseBackupLSN is the "begin of checkpoint" that is triggered when the backup starts. This LSN coincides with FirstLSN if the backup is taken when the database is idle and no replication is configured. |
| BackupStartDate | datetime | Date and time that the backup operation began. |
| BackupFinishDate | datetime | Date and time that the backup operation finished. |
| SortOrder | smallint | Server sort order. This column is valid for database backups only. Provided for backward compatibility. |
| CodePage | smallint | Server code page or character set used by the server. |
| UnicodeLocaleId | int | Server Unicode locale ID configuration option used for Unicode character data sorting. Provided for backward compatibility. |
| UnicodeComparisonStyle | int | Server Unicode comparison style configuration option, which provides additional control over the sorting of Unicode data. Provided for backward compatibility. |
| CompatibilityLevel | tinyint | Compatibility level setting of the database from which the backup was created. |
| SoftwareVendorId | int | Software vendor identification number. For SQL Server, this number is 4608 (or hexadecimal 0x1200). |
| SoftwareVersionMajor | int | Major version number of the server that created the backup set. |
| SoftwareVersionMinor | int | Minor version number of the server that created the backup set. |
| SoftwareVersionBuild | int | Build number of the server that created the backup set. |
| MachineName | nvarchar(128) | Name of the computer that performed the backup operation. |
| Flags | int | Individual flags bit meanings if set to 1: 1 = Log backup contains bulk-logged operations; 2 = Snapshot backup; 4 = Database was read-only when backed up; 8 = Database was in single-user mode when backed up; 16 = Backup contains backup checksums; 32 = Database was damaged when backed up, but the backup operation was requested to continue despite errors; 64 = Tail log backup; 128 = Tail log backup with incomplete metadata; 256 = Tail log backup with NORECOVERY. Important: We recommend that instead of Flags you use the individual Boolean columns (listed below starting with HasBulkLoggedData and ending with IsCopyOnly). |
| BindingID | uniqueidentifier | Binding ID for the database. This corresponds to sys.database_recovery_status.database_guid. When a database is restored, a new value is assigned. Also see FamilyGUID (below). |
| RecoveryForkID | uniqueidentifier | ID for the ending recovery fork. This column corresponds to last_recovery_fork_guid in the backupset table. For data backups, RecoveryForkID equals FirstRecoveryForkID. |
| Collation | nvarchar(128) | Collation used by the database. |
| FamilyGUID | uniqueidentifier | ID of the original database when created. This value stays the same when the database is restored. |
| HasBulkLoggedData | bit | 1 = Log backup containing bulk-logged operations. |
| IsSnapshot | bit | 1 = Snapshot backup. |
| IsReadOnly | bit | 1 = Database was read-only when backed up. |
| IsSingleUser | bit | 1 = Database was single-user when backed up. |
| HasBackupChecksums | bit | 1 = Backup contains backup checksums. |
| IsDamaged | bit | 1 = Database was damaged when backed up, but the backup operation was requested to continue despite errors. |
| BeginsLogChain | bit | 1 = This is the first in a continuous chain of log backups. A log chain begins with the first log backup taken after the database is created or when it is switched from the Simple to the Full or Bulk-Logged Recovery Model. |
| HasIncompleteMetaData | bit | 1 = A tail-log backup with incomplete meta-data. For information about tail-log backups with incomplete backup metadata, see Tail-Log Backups (SQL Server). |
| IsForceOffline | bit | 1 = Backup taken with NORECOVERY; the database was taken offline by backup. |
| IsCopyOnly | bit | 1 = A copy-only backup. A copy-only backup does not impact the overall backup and restore procedures for the database. For more information, see Copy-Only Backups (SQL Server). |
| FirstRecoveryForkID | uniqueidentifier | ID for the starting recovery fork. This column corresponds to first_recovery_fork_guid in the backupset table. For data backups, FirstRecoveryForkID equals RecoveryForkID. |
| ForkPointLSN | numeric(25,0) NULL | If FirstRecoveryForkID is not equal to RecoveryForkID, this is the log sequence number of the fork point. Otherwise, this value is NULL. |
| RecoveryModel | nvarchar(60) | Recovery model for the database, one of: FULL, BULK-LOGGED, SIMPLE. |
| DifferentialBaseLSN | numeric(25,0) NULL | For a single-based differential backup, the value equals the FirstLSN of the differential base; changes with LSNs greater than or equal to DifferentialBaseLSN are included in the differential. For a multi-based differential, the value is NULL, and the base LSN must be determined at the file level; for more information, see RESTORE FILELISTONLY (Transact-SQL). For non-differential backup types, the value is always NULL. For more information, see Differential Backups (SQL Server). |
| DifferentialBaseGUID | uniqueidentifier | For a single-based differential backup, the value is the unique identifier of the differential base. For multi-based differentials, the value is NULL, and the differential base must be determined per file. For non-differential backup types, the value is NULL. |
| BackupTypeDescription | nvarchar(60) | Backup type as string, one of: DATABASE; TRANSACTION LOG; FILE OR FILEGROUP; DATABASE DIFFERENTIAL; FILE DIFFERENTIAL; PARTIAL; PARTIAL DIFFERENTIAL. |
| BackupSetGUID | uniqueidentifier NULL | Unique identification number of the backup set, by which it is identified on the media. |
| CompressedBackupSize | bigint | Byte count of the backup set. For uncompressed backups, this value is the same as BackupSize. To calculate the compression ratio, use CompressedBackupSize and BackupSize. During an msdb upgrade, this value is set to match the value of the BackupSize column. |
| containment | tinyint not NULL | Applies to: SQL Server 2012 (11.x) through SQL Server 2017. Indicates the containment status of the database. 0 = database containment is off; 1 = database is in partial containment. |
| KeyAlgorithm | nvarchar(32) | Applies to: SQL Server 2014 (12.x) CU1 through the current version. The encryption algorithm used to encrypt the backup. NO_Encryption indicates that the backup was not encrypted. When the correct value cannot be determined, the value should be NULL. |
| EncryptorThumbprint | varbinary(20) | Applies to: SQL Server 2014 (12.x) CU1 through the current version. The thumbprint of the encryptor, which can be used to find the certificate or the asymmetric key in the database. When the backup was not encrypted, this value is NULL. |
| EncryptorType | nvarchar(32) | Applies to: SQL Server 2014 (12.x) CU1 through the current version. The type of encryptor used: Certificate or Asymmetric Key. When the backup was not encrypted, this value is NULL. |
NOTE
If passwords are defined for the backup sets, RESTORE HEADERONLY shows complete information for only the backup
set whose password matches the specified PASSWORD option of the command. RESTORE HEADERONLY also shows
complete information for unprotected backup sets. The BackupName column for the other password-protected
backup sets on the media is set to '***Password Protected***', and all other columns are NULL.

General Remarks
A client can use RESTORE HEADERONLY to retrieve all the backup header information for all backups on a
particular backup device. For each backup on the backup device, the server sends the header information as a
row.

Security
A backup operation may optionally specify passwords for a media set, a backup set, or both. When a
password has been defined on a media set or backup set, you must specify the correct password or
passwords in the RESTORE statement. These passwords prevent unauthorized restore operations and
unauthorized appends of backup sets to media using Microsoft SQL Server tools. However, a password does
not prevent overwrite of media using the BACKUP statement's FORMAT option.

IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server tools
by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. The best practice for
protecting backups is to store backup tapes in a secure location or back up to disk files that are protected by adequate
access control lists (ACLs). The ACLs should be set on the directory root under which backups are created.

Permissions
Obtaining information about a backup set or backup device requires CREATE DATABASE permission. For
more information, see GRANT Database Permissions (Transact-SQL).

Examples
The following example returns the information in the header for the disk file
C:\AdventureWorks-FullBackup.bak .

RESTORE HEADERONLY
FROM DISK = N'C:\AdventureWorks-FullBackup.bak'
WITH NOUNLOAD;
GO
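
To inspect a single backup set rather than every set on the media (see the note under Result Sets), the FILE option can be added. The following is a minimal sketch that assumes the same hypothetical backup file and that the first backup set is wanted:

RESTORE HEADERONLY
FROM DISK = N'C:\AdventureWorks-FullBackup.bak'
WITH FILE = 1, NOUNLOAD;
GO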

See Also
BACKUP (Transact-SQL)
backupset (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
Enable or Disable Backup Checksums During Backup or Restore (SQL Server)
Media Sets, Media Families, and Backup Sets (SQL Server)
Recovery Models (SQL Server)
RESTORE Statements - LABELONLY (Transact-SQL)
5/4/2018 • 3 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Returns a result set containing information about the backup media identified by the given backup device.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

NOTE
For the descriptions of the arguments, see RESTORE Arguments (Transact-SQL).

Transact-SQL Syntax Conventions

Syntax
RESTORE LABELONLY
FROM <backup_device>
[ WITH
{
--Media Set Options
MEDIANAME = { media_name | @media_name_variable }
| MEDIAPASSWORD = { mediapassword | @mediapassword_variable }

--Error Management Options


| { CHECKSUM | NO_CHECKSUM }
| { STOP_ON_ERROR | CONTINUE_AFTER_ERROR }

--Tape Options
| { REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }
} [ ,...n ]
]
[;]

<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK | TAPE } = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}

Arguments
For descriptions of the RESTORE LABELONLY arguments, see RESTORE Arguments (Transact-SQL).
Result Sets
The result set from RESTORE LABELONLY consists of a single row with this information.

| COLUMN NAME | DATA TYPE | DESCRIPTION |
| --- | --- | --- |
| MediaName | nvarchar(128) | Name of the media. |
| MediaSetId | uniqueidentifier | Unique identification number of the media set. |
| FamilyCount | int | Number of media families in the media set. |
| FamilySequenceNumber | int | Sequence number of this family. |
| MediaFamilyId | uniqueidentifier | Unique identification number for the media family. |
| MediaSequenceNumber | int | Sequence number of this media in the media family. |
| MediaLabelPresent | tinyint | Whether the media description contains: 1 = Microsoft Tape Format media label; 0 = Media description. |
| MediaDescription | nvarchar(255) | Media description, in free-form text, or the Tape Format media label. |
| SoftwareName | nvarchar(128) | Name of the backup software that wrote the label. |
| SoftwareVendorId | int | Unique vendor identification number of the software vendor that wrote the backup. |
| MediaDate | datetime | Date and time the label was written. |
| Mirror_Count | int | Number of mirrors in the set (1-4). Note: The labels written for different mirrors in a set are identical. |
| IsCompressed | bit | Whether the backup is compressed: 0 = not compressed; 1 = compressed. |

NOTE
If passwords are defined for the media set, RESTORE LABELONLY returns information only if the correct media password
is specified in the MEDIAPASSWORD option of the command.
General Remarks
Executing RESTORE LABELONLY is a quick way to find out what the backup media contains. Because RESTORE LABELONLY reads only the media header, this statement finishes quickly even when using high-capacity tape devices.
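
As an illustrative sketch (the disk path is hypothetical), reading just the media label looks like this:

RESTORE LABELONLY
FROM DISK = N'Z:\SQLServerBackups\AdventureWorks.bak';
GO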

Security
A backup operation may optionally specify passwords for a media set. When a password has been defined on a
media set, you must specify the correct password in the RESTORE statement. The password prevents
unauthorized restore operations and unauthorized appends of backup sets to media using Microsoft SQL
Server tools. However, a password does not prevent overwrite of media using the BACKUP statement's
FORMAT option.

IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server tools
by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. The best
protecting backups is to store backup tapes in a secure location or back up to disk files that are protected by adequate
access control lists (ACLs). The ACLs should be set on the directory root under which backups are created.

Permissions
In SQL Server 2008 and later versions, obtaining information about a backup set or backup device requires
CREATE DATABASE permission. For more information, see GRANT Database Permissions (Transact-SQL).

See Also
BACKUP (Transact-SQL)
Media Sets, Media Families, and Backup Sets (SQL Server)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
RESTORE MASTER KEY (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Imports a database master key from a backup file.
Transact-SQL Syntax Conventions

Syntax
RESTORE MASTER KEY FROM FILE = 'path_to_file'
DECRYPTION BY PASSWORD = 'password'
ENCRYPTION BY PASSWORD = 'password'
[ FORCE ]

Arguments
FILE ='path_to_file'
Specifies the complete path, including file name, to the stored database master key. path_to_file can be a local path
or a UNC path to a network location.
DECRYPTION BY PASSWORD ='password'
Specifies the password that is required to decrypt the database master key that is being imported from a file.
ENCRYPTION BY PASSWORD ='password'
Specifies the password that is used to encrypt the database master key after it has been loaded into the database.
FORCE
Specifies that the RESTORE process should continue, even if the current database master key is not open, or if
SQL Server cannot decrypt some of the private keys that are encrypted with it.

Remarks
When the master key is restored, SQL Server decrypts all the keys that are encrypted with the currently active
master key, and then encrypts these keys with the restored master key. This resource-intensive operation should
be scheduled during a period of low demand. If the current database master key is not open or cannot be opened,
or if any of the keys that are encrypted by it cannot be decrypted, the restore operation fails.
Use the FORCE option only if the master key is irretrievable or if decryption fails. Information that is encrypted
only by an irretrievable key will be lost.
If the master key was encrypted by the service master key, the restored master key will also be encrypted by the
service master key.
If there is no master key in the current database, RESTORE MASTER KEY creates a master key. The new master
key will not be automatically encrypted with the service master key.

Permissions
Requires CONTROL permission on the database.
Examples
The following example restores the database master key of the AdventureWorks2012 database.

USE AdventureWorks2012;
RESTORE MASTER KEY
FROM FILE = 'c:\backups\keys\AdventureWorks2012_master_key'
DECRYPTION BY PASSWORD = '3dH85Hhk003#GHkf02597gheij04'
ENCRYPTION BY PASSWORD = '259087M#MyjkFkjhywiyedfgGDFD';
GO
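
If the restore fails because the current master key cannot be opened or some keys encrypted by it cannot be decrypted, the FORCE option described in the Remarks can be added. A hedged sketch using the same hypothetical file and passwords follows; remember that information encrypted only by an irretrievable key will be lost:

USE AdventureWorks2012;
RESTORE MASTER KEY
    FROM FILE = 'c:\backups\keys\AdventureWorks2012_master_key'
    DECRYPTION BY PASSWORD = '3dH85Hhk003#GHkf02597gheij04'
    ENCRYPTION BY PASSWORD = '259087M#MyjkFkjhywiyedfgGDFD'
    FORCE; -- continue even if the current master key cannot be opened
GO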

See Also
CREATE MASTER KEY (Transact-SQL)
ALTER MASTER KEY (Transact-SQL)
Encryption Hierarchy
RESTORE Statements - REWINDONLY (Transact-SQL)
5/4/2018 • 3 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Rewinds and closes specified tape devices that were left open by BACKUP or RESTORE statements executed
with the NOREWIND option. This command is supported only for tape devices.
Transact-SQL Syntax Conventions

Syntax
RESTORE REWINDONLY
FROM <backup_device> [ ,...n ]
[ WITH { UNLOAD | NOUNLOAD } ]
[;]

<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| TAPE = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}

Arguments
<backup_device> ::=
Specifies the logical or physical backup devices to use for the restore operation.
{ logical_backup_device_name | @logical_backup_device_name_var }
Is the logical name, which must follow the rules for identifiers, of the backup devices created by
sp_addumpdevice from which the database is restored. If supplied as a variable
(@logical_backup_device_name_var), the backup device name can be specified either as a string constant
(@logical_backup_device_name_var = logical_backup_device_name) or as a variable of character string data
type, except for the ntext or text data types.
{DISK | TAPE } = { 'physical_backup_device_name' | @physical_backup_device_name_var }
Allows backups to be restored from the named disk or tape device. The device types of disk and tape should be
specified with the actual name (for example, complete path and file name) of the device: DISK = 'C:\Program
Files\Microsoft SQL Server\MSSQL\BACKUP\Mybackup.bak' or TAPE = '\\.\TAPE0'. If specified as a variable
(@physical_backup_device_name_var), the device name can be specified either as a string constant
(@physical_backup_device_name_var = 'physical_backup_device_name') or as a variable of character string
data type, except for the ntext or text data types.
If using a network server with a UNC name (which must contain machine name), specify a device type of disk.
For more information about using UNC names, see Backup Devices (SQL Server).
The account under which you are running Microsoft SQL Server must have READ access to the remote
computer or network server in order to perform a RESTORE operation.
n
Is a placeholder that indicates multiple backup devices and logical backup devices can be specified. The
maximum number of backup devices or logical backup devices is 64.
Whether a restore sequence requires as many backup devices as were used to create the media set to which the
backups belong, depends on whether the restore is offline or online. Offline restore allows a backup to be
restored using fewer devices than were used to create the backup. Online restore requires all the backup devices
of the backup. An attempt to restore with fewer devices fails.
For more information, see Backup Devices (SQL Server).

NOTE
When restoring a backup from a mirrored media set, you can specify only a single mirror for each media family. In the
presence of errors, however, having the other mirror(s) enables some restore problems to be resolved quickly. You can
substitute a damaged media volume with the corresponding volume from another mirror. Note that for offline restores
you can restore from fewer devices than media families, but each family is processed only once.

WITH Options
UNLOAD
Specifies that the tape is automatically rewound and unloaded when the RESTORE is finished. UNLOAD is set
by default when a new user session is started. It remains set until NOUNLOAD is specified. This option is used
only for tape devices. If a non-tape device is being used for RESTORE, this option is ignored.
NOUNLOAD
Specifies that the tape is not unloaded automatically from the tape drive after a RESTORE. NOUNLOAD
remains set until UNLOAD is specified.

General Remarks
RESTORE REWINDONLY is an alternative to RESTORE LABELONLY FROM TAPE = <name> WITH REWIND. You can get a list of opened tape drives from the sys.dm_io_backup_tapes dynamic management view.
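
As a sketch, the open tape devices can be listed first and then a specific device rewound; the tape device name below is hypothetical:

-- List tape devices currently held open by NOREWIND
SELECT * FROM sys.dm_io_backup_tapes;

-- Rewind, close, and unload one of them
RESTORE REWINDONLY
FROM TAPE = '\\.\TAPE0'
WITH UNLOAD;
GO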

Security
Permissions
Any user may use RESTORE REWINDONLY.

See Also
BACKUP (Transact-SQL)
Media Sets, Media Families, and Backup Sets (SQL Server)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
RESTORE Statements - VERIFYONLY (Transact-SQL)
5/4/2018 • 3 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Verifies the backup but does not restore it, and checks to see that the backup set is complete and the entire
backup is readable. However, RESTORE VERIFYONLY does not attempt to verify the structure of the data
contained in the backup volumes. In Microsoft SQL Server, RESTORE VERIFYONLY has been enhanced to
do additional checking on the data to increase the probability of detecting errors. The goal is to be as close to
an actual restore operation as practical. For more information, see the Remarks.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL
Database Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

If the backup is valid, the SQL Server Database Engine returns a success message.

NOTE
For the descriptions of the arguments, see RESTORE Arguments (Transact-SQL).

Transact-SQL Syntax Conventions

Syntax
RESTORE VERIFYONLY
FROM <backup_device> [ ,...n ]
[ WITH
{
LOADHISTORY

--Restore Operation Option


| MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name'
[ ,...n ]

--Backup Set Options


| FILE = { backup_set_file_number | @backup_set_file_number }
| PASSWORD = { password | @password_variable }

--Media Set Options


| MEDIANAME = { media_name | @media_name_variable }
| MEDIAPASSWORD = { mediapassword | @mediapassword_variable }

--Error Management Options


| { CHECKSUM | NO_CHECKSUM }
| { STOP_ON_ERROR | CONTINUE_AFTER_ERROR }

--Monitoring Options
| STATS [ = percentage ]

--Tape Options
| { REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }
} [ ,...n ]
]
[;]

<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK | TAPE } = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}

Arguments
For descriptions of the RESTORE VERIFYONLY arguments, see RESTORE Arguments (Transact-SQL).

General Remarks
The media set or the backup set must contain minimal correct information to enable it to be interpreted as
Microsoft Tape Format. If not, RESTORE VERIFYONLY stops and indicates that the format of the backup is
invalid.
Checks performed by RESTORE VERIFYONLY include:
That the backup set is complete and all volumes are readable.
Some header fields of database pages, such as the page ID (as if it were about to write the data).
Checksum (if present on the media).
Checking for sufficient space on destination devices.
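
As an illustrative sketch (the backup path is hypothetical), a verification that also validates page checksums and reports progress might look like this:

RESTORE VERIFYONLY
FROM DISK = N'Z:\SQLServerBackups\AdventureWorks.bak'
WITH CHECKSUM, STATS = 10;
GO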
NOTE
RESTORE VERIFYONLY does not work on a database snapshot. To verify a database snapshot before a revert
operation, you can run DBCC CHECKDB.

NOTE
With snapshot backups, RESTORE VERIFYONLY confirms the existence of the snapshots in the locations specified in
the backup file. Snapshot backups are a new feature in SQL Server 2016 (13.x). For more information about Snapshot
Backups, see File-Snapshot Backups for Database Files in Azure.

Security
A backup operation may optionally specify passwords for a media set, a backup set, or both. When a
password has been defined on a media set or backup set, you must specify the correct password or
passwords in the RESTORE statement. These passwords prevent unauthorized restore operations and
unauthorized appends of backup sets to media using SQL Server tools. However, a password does not
prevent overwrite of media using the BACKUP statement's FORMAT option.

IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server
tools by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using
this feature in new development work, and plan to modify applications that currently use this feature. The best
practice for protecting backups is to store backup tapes in a secure location or back up to disk files that are protected
by adequate access control lists (ACLs). The ACLs should be set on the directory root under which backups are
created.

Permissions
Beginning in SQL Server 2008, obtaining information about a backup set or backup device requires
CREATE DATABASE permission. For more information, see GRANT Database Permissions (Transact-SQL).

See Also
BACKUP (Transact-SQL)
Media Sets, Media Families, and Backup Sets (SQL Server)
RESTORE REWINDONLY (Transact-SQL)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
BULK INSERT (Transact-SQL)
5/3/2018 • 19 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Imports a data file into a database table or view in a user-specified format in SQL Server.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
BULK INSERT
[ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ]
FROM 'data_file'
[ WITH
(
[ [ , ] BATCHSIZE = batch_size ]
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
[ [ , ] DATAFILETYPE =
{ 'char' | 'native'| 'widechar' | 'widenative' } ]
[ [ , ] DATA_SOURCE = 'data_source_name' ]
[ [ , ] ERRORFILE = 'file_name' ]
[ [ , ] ERRORFILE_DATASOURCE = 'data_source_name' ]
[ [ , ] FIRSTROW = first_row ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] FORMATFILE_DATASOURCE = 'data_source_name' ]
[ [ , ] KEEPIDENTITY ]
[ [ , ] KEEPNULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] LASTROW = last_row ]
[ [ , ] MAXERRORS = max_errors ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
[ [ , ] TABLOCK ]

-- input file format options


[ [ , ] FORMAT = 'CSV' ]
[ [ , ] FIELDQUOTE = 'quote_characters']
[ [ , ] FORMATFILE = 'format_file_path' ]
[ [ , ] FIELDTERMINATOR = 'field_terminator' ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
)]

Arguments
database_name
Is the database name in which the specified table or view resides. If not specified, this is the current database.
schema_name
Is the name of the table or view schema. schema_name is optional if the default schema for the user performing
the bulk-import operation is schema of the specified table or view. If schema is not specified and the default
schema of the user performing the bulk-import operation is different from the specified table or view, SQL Server
returns an error message, and the bulk-import operation is canceled.
table_name
Is the name of the table or view to bulk import data into. Only views in which all columns refer to the same base table can be used. For more information about the restrictions for loading data into views, see INSERT (Transact-SQL).
' data_file '
Is the full path of the data file that contains data to import into the specified table or view. BULK INSERT can
import data from a disk (including network, floppy disk, hard disk, and so on).
data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file,
specify the Universal Naming Convention (UNC ) name. A UNC name has the form
\\Systemname\ShareName\Path\FileName. For example, \\SystemX\DiskZ\Sales\update.txt .
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Beginning with SQL Server 2017 (14.x) CTP1.1, the data_file can be in Azure blob storage.
' data_source_name '
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Is a named external data source pointing to the Azure Blob storage location of the file that will be imported. The
external data source must be created using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP
1.1. For more information, see CREATE EXTERNAL DATA SOURCE.
BATCHSIZE =batch_size
Specifies the number of rows in a batch. Each batch is copied to the server as one transaction. If this fails, SQL
Server commits or rolls back the transaction for every batch. By default, all data in the specified data file is one
batch. For information about performance considerations, see "Remarks," later in this topic.
CHECK_CONSTRAINTS
Specifies that all constraints on the target table or view must be checked during the bulk-import operation.
Without the CHECK_CONSTRAINTS option, any CHECK and FOREIGN KEY constraints are ignored, and after
the operation, the constraint on the table is marked as not-trusted.

NOTE
UNIQUE, and PRIMARY KEY constraints are always enforced. When importing into a character column that is defined with a
NOT NULL constraint, BULK INSERT inserts a blank string when there is no value in the text file.

At some point, you must examine the constraints on the whole table. If the table was non-empty before the bulk-
import operation, the cost of revalidating the constraint may exceed the cost of applying CHECK constraints to
the incremental data.
A situation in which you might want constraints disabled (the default behavior) is if the input data contains rows
that violate constraints. With CHECK constraints disabled, you can import the data and then use Transact-SQL
statements to remove the invalid data.

NOTE
The MAXERRORS option does not apply to constraint checking.

CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' }


Specifies the code page of the data in the data file. CODEPAGE is relevant only if the data contains char, varchar,
or text columns with character values greater than 127 or less than 32.

IMPORTANT
CODEPAGE is not a supported option on Linux.

NOTE
Microsoft recommends that you specify a collation name for each column in a format file.

| CODEPAGE VALUE | DESCRIPTION |
| --- | --- |
| ACP | Columns of char, varchar, or text data type are converted from the ANSI/Microsoft Windows code page (ISO 1252) to the SQL Server code page. |
| OEM (default) | Columns of char, varchar, or text data type are converted from the system OEM code page to the SQL Server code page. |
| RAW | No conversion from one code page to another occurs; this is the fastest option. |
| code_page | Specific code page number, for example, 850. Important: Versions prior to SQL Server 2016 (13.x) do not support code page 65001 (UTF-8 encoding). |

DATAFILETYPE = { 'char' | 'native' | 'widechar' | 'widenative' }


Specifies that BULK INSERT performs the import operation using the specified data-file type value.

| DATAFILETYPE VALUE | ALL DATA REPRESENTED IN: |
| --- | --- |
| char (default) | Character format. For more information, see Use Character Format to Import or Export Data (SQL Server). |
| native | Native (database) data types. Create the native data file by bulk importing data from SQL Server using the bcp utility. The native value offers a higher performance alternative to the char value. For more information, see Use Native Format to Import or Export Data (SQL Server). |
| widechar | Unicode characters. For more information, see Use Unicode Character Format to Import or Export Data (SQL Server). |
| widenative | Native (database) data types, except in char, varchar, and text columns, in which data is stored as Unicode. Create the widenative data file by bulk importing data from SQL Server using the bcp utility. The widenative value offers a higher performance alternative to widechar. If the data file contains ANSI extended characters, specify widenative. For more information, see Use Unicode Native Format to Import or Export Data (SQL Server). |

ERRORFILE ='file_name'
Specifies the file used to collect rows that have formatting errors and cannot be converted to an OLE DB rowset.
These rows are copied into this error file from the data file "as is."
The error file is created when the command is executed. An error occurs if the file already exists. Additionally, a
control file that has the extension .ERROR.txt is created. This references each row in the error file and provides
error diagnostics. As soon as the errors have been corrected, the data can be loaded.
Applies to: SQL Server 2017 (14.x) CTP 1.1. Beginning with SQL Server 2017 (14.x), the error_file_path can
be in Azure blob storage.
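
A minimal sketch of routing badly formatted rows to an error file follows; the table name, data file, and error file path are hypothetical, and the error file must not already exist:

BULK INSERT MyTable
FROM 'D:\data\rows.dat'
WITH
   (
      ERRORFILE = 'D:\data\rows.err',  -- rejected rows are copied here "as is"
      MAXERRORS = 50
   );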
'errorfile_data_source_name'
Applies to: SQL Server 2017 (14.x) CTP 1.1. Is a named external data source pointing to the Azure Blob storage
location of the error file that will contain errors found during the import. The external data source must be created
using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP 1.1. For more information, see
CREATE EXTERNAL DATA SOURCE.
FIRSTROW =first_row
Specifies the number of the first row to load. The default is the first row in the specified data file. FIRSTROW is 1-
based.

NOTE
The FIRSTROW attribute is not intended to skip column headers. Skipping headers is not supported by the BULK INSERT
statement. When skipping rows, the SQL Server Database Engine looks only at the field terminators, and does not validate
the data in the fields of skipped rows.
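
For illustration only (the table and file names are hypothetical), starting the load at the second row looks like the following; per the note above, the skipped row is located by its field terminators and is not otherwise validated:

BULK INSERT MyTable
FROM 'D:\data\report.dat'
WITH (FIRSTROW = 2);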

FIRE_TRIGGERS
Specifies that any insert triggers defined on the destination table execute during the bulk-import operation. If
triggers are defined for INSERT operations on the target table, they are fired for every completed batch.
If FIRE_TRIGGERS is not specified, no insert triggers execute.
FORMATFILE_DATASOURCE = 'data_source_name'
Applies to: SQL Server 2017 (14.x) 1.1.
Is a named external data source pointing to the Azure Blob storage location of the format file that will define the
schema of imported data. The external data source must be created using the TYPE = BLOB_STORAGE option added
in SQL Server 2017 (14.x) CTP 1.1. For more information, see CREATE EXTERNAL DATA SOURCE.
KEEPIDENTITY
Specifies that identity value or values in the imported data file are to be used for the identity column. If
KEEPIDENTITY is not specified, the identity values for this column are verified but not imported and SQL Server
automatically assigns unique values based on the seed and increment values specified during table creation. If the
data file does not contain values for the identity column in the table or view, use a format file to specify that the
identity column in the table or view is to be skipped when importing data; SQL Server automatically assigns
unique values for the column. For more information, see DBCC CHECKIDENT (Transact-SQL).
For more information about keeping identity values, see Keep Identity Values When Bulk Importing Data (SQL Server).
KEEPNULLS
Specifies that empty columns should retain a null value during the bulk-import operation, instead of having any
default values for the columns inserted. For more information, see Keep Nulls or Use Default Values During Bulk
Import (SQL Server).
KILOBYTES_PER_BATCH = kilobytes_per_batch
Specifies the approximate number of kilobytes (KB) of data per batch as kilobytes_per_batch. By default,
KILOBYTES_PER_BATCH is unknown. For information about performance considerations, see "Remarks," later in
this topic.
LASTROW = last_row
Specifies the number of the last row to load. The default is 0, which indicates the last row in the specified data file.
MAXERRORS = max_errors
Specifies the maximum number of syntax errors allowed in the data before the bulk-import operation is canceled.
Each row that cannot be imported by the bulk-import operation is ignored and counted as one error. If
max_errors is not specified, the default is 10.

NOTE
The MAX_ERRORS option does not apply to constraint checks or to converting money and bigint data types.

ORDER ( { column [ ASC | DESC ] } [ ,... n ] )


Specifies how the data in the data file is sorted. Bulk import performance is improved if the data being imported
is sorted according to the clustered index on the table, if any. If the data file is sorted in a different order, that is
other than the order of a clustered index key or if there is no clustered index on the table, the ORDER clause is
ignored. The column names supplied must be valid column names in the destination table. By default, the bulk
insert operation assumes the data file is unordered. For optimized bulk import, SQL Server also validates that the
imported data is sorted.
n
Is a placeholder that indicates that multiple columns can be specified.
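
A sketch of the ORDER clause, assuming a hypothetical table whose clustered index key is OrderID and a data file presorted on that column:

BULK INSERT Sales.Orders
FROM 'D:\data\orders.dat'
WITH
   (
      ORDER (OrderID ASC),  -- matches the clustered index, so the presorted input can be exploited
      TABLOCK
   );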
ROWS_PER_BATCH =rows_per_batch
Indicates the approximate number of rows of data in the data file.
By default, all the data in the data file is sent to the server as a single transaction, and the number of rows in the
batch is unknown to the query optimizer. If you specify ROWS_PER_BATCH (with a value > 0) the server uses
this value to optimize the bulk-import operation. The value specified for ROWS_PER_BATCH should be
approximately the same as the actual number of rows. For information about performance considerations, see
"Remarks," later in this topic.
TABLOCK
Specifies that a table-level lock is acquired for the duration of the bulk-import operation. A table can be loaded concurrently by multiple clients if the table has no indexes and TABLOCK is specified. By default, locking behavior is determined by the table option table lock on bulk load. Holding a lock for the duration of the bulk-import operation reduces lock contention on the table and in some cases can significantly improve performance. For information about performance considerations, see "Remarks," later in this topic.
For a columnstore index, the locking behavior is different because the index is internally divided into multiple rowsets. Each thread loads data exclusively into each rowset by taking an X lock on the rowset, allowing parallel data loads with concurrent data load sessions. Using the TABLOCK option causes the thread to take an X lock on the table (unlike the BU lock taken for traditional rowsets), which prevents other concurrent threads from loading data concurrently.
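
A sketch combining TABLOCK with an explicit batch size for a large load; the table, file, and batch size are hypothetical starting points to tune:

BULK INSERT dbo.Staging
FROM 'D:\data\staging.dat'
WITH
   (
      TABLOCK,            -- table-level lock held for the duration of the load
      BATCHSIZE = 100000  -- each batch is committed as one transaction
   );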
Input file format options
FORMAT = 'CSV'
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Specifies a comma-separated values file compliant with the RFC 4180 standard.
FIELDQUOTE = 'field_quote'
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Specifies a character that will be used as the quote character in the CSV file. If not specified, the quote character
(") will be used as the quote character as defined in the RFC 4180 standard.
FORMATFILE ='format_file_path'
Specifies the full path of a format file. A format file describes the data file that contains stored responses created
by using the bcp utility on the same table or view. The format file should be used if:
The data file contains greater or fewer columns than the table or view.
The columns are in a different order.
The column delimiters vary.
There are other changes in the data format. Format files are typically created by using the bcp utility and
modified with a text editor as needed. For more information, see bcp Utility.
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Beginning with SQL Server 2017 (14.x) CTP 1.1, the format_file_path can be in Azure blob storage.
FIELDTERMINATOR ='field_terminator'
Specifies the field terminator to be used for char and widechar data files. The default field terminator is \t (tab
character). For more information, see Specify Field and Row Terminators (SQL Server).
ROWTERMINATOR ='row_terminator'
Specifies the row terminator to be used for char and widechar data files. The default row terminator is \r\n
(newline character). For more information, see Specify Field and Row Terminators (SQL Server).

Compatibility
BULK INSERT enforces strict data validation and data checks of data read from a file that could cause existing
scripts to fail when they are executed on invalid data. For example, BULK INSERT verifies that:
The native representations of float or real data types are valid.
Unicode data has an even-byte length.

Data Types
String-to -Decimal Data Type Conversions
The string-to-decimal data type conversions used in BULK INSERT follow the same rules as the Transact-SQL
CONVERT function, which rejects strings representing numeric values that use scientific notation. Therefore,
BULK INSERT treats such strings as invalid values and reports conversion errors.
To work around this behavior, use a format file to bulk import scientific notation float data into a decimal column.
In the format file, explicitly describe the column as real or float data. For more information about these data
types, see float and real (Transact-SQL).

NOTE
Format files represent real data as the SQLFLT4 data type and float data as the SQLFLT8 data type. For information about
non-XML format files, see Specify File Storage Type by Using bcp (SQL Server).

Example of Importing a Numeric Value that Uses Scientific Notation


This example uses the following table:

CREATE TABLE t_float(c1 float, c2 decimal (5,4));

The user wants to bulk import data into the t_float table. The data file, C:\t_float-c.dat, contains scientific
notation float data; for example:

8.0000000000000002E-2

However, BULK INSERT cannot import this data directly into t_float, because its second column, c2, uses the
decimal data type. Therefore, a format file is necessary. The format file must map the scientific notation float
data to the decimal format of column c2 .
The following format file uses the SQLFLT8 data type to map the second data field to the second column:

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<RECORD>
<FIELD ID="1" xsi:type="CharTerm" TERMINATOR="\t" MAX_LENGTH="30"/>
<FIELD ID="2" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="30"/> </RECORD> <ROW>
<COLUMN SOURCE="1" NAME="c1" xsi:type="SQLFLT8"/>
<COLUMN SOURCE="2" NAME="c2" xsi:type="SQLFLT8"/> </ROW> </BCPFORMAT>

To use this format file (using the file name C:\t_floatformat-c-xml.xml ) to import the test data into the test table,
issue the following Transact-SQL statement:

BULK INSERT bulktest..t_float
FROM 'C:\t_float-c.dat' WITH (FORMATFILE='C:\t_floatformat-c-xml.xml');
GO

Data Types for Bulk Exporting or Importing SQLXML Documents


To bulk export or import SQLXML data, use one of the following data types in your format file:

| DATA TYPE | EFFECT |
| --- | --- |
| SQLCHAR or SQLVARCHAR | The data is sent in the client code page or in the code page implied by the collation. The effect is the same as specifying DATAFILETYPE = 'char' without specifying a format file. |
| SQLNCHAR or SQLNVARCHAR | The data is sent as Unicode. The effect is the same as specifying DATAFILETYPE = 'widechar' without specifying a format file. |
| SQLBINARY or SQLVARBIN | The data is sent without any conversion. |


General Remarks
For a comparison of the BULK INSERT statement, the INSERT ... SELECT * FROM OPENROWSET(BULK...)
statement, and the bcp command, see Bulk Import and Export of Data (SQL Server).
For information about preparing data for bulk import, see Prepare Data for Bulk Export or Import (SQL Server).
The BULK INSERT statement can be executed within a user-defined transaction to import data into a table or
view. Optionally, to use multiple batches for bulk importing data, a transaction can specify the BATCHSIZE clause
in the BULK INSERT statement. If a multiple-batch transaction is rolled back, every batch that the transaction has
sent to SQL Server is rolled back.

Interoperability
Importing Data from a CSV file
Beginning with SQL Server 2017 (14.x) CTP 1.1, BULK INSERT supports the CSV format.
Before SQL Server 2017 (14.x) CTP 1.1, comma-separated value (CSV ) files are not supported by SQL Server
bulk-import operations. However, in some cases, a CSV file can be used as the data file for a bulk import of data
into SQL Server. For information about the requirements for importing data from a CSV data file, see Prepare
Data for Bulk Export or Import (SQL Server).

Logging Behavior
For information about when row-insert operations that are performed by bulk import are logged in the
transaction log, see Prerequisites for Minimal Logging in Bulk Import.

Restrictions
When using a format file with BULK INSERT, you can specify up to 1024 fields only. This is the same as the
maximum number of columns allowed in a table. If you use BULK INSERT with a data file that contains more
than 1024 fields, BULK INSERT generates the 4822 error. The bcp utility does not have this limitation, so for data
files that contain more than 1024 fields, use the bcp command.

Performance Considerations
If the number of pages to be flushed in a single batch exceeds an internal threshold, a full scan of the buffer pool
might occur to identify which pages to flush when the batch commits. This full scan can hurt bulk-import
performance. A likely case of exceeding the internal threshold occurs when a large buffer pool is combined with a
slow I/O subsystem. To avoid buffer overflows on large machines, either do not use the TABLOCK hint (which will
remove the bulk optimizations) or use a smaller batch size (which preserves the bulk optimizations).
Because computers vary, we recommend that you test various batch sizes with your data load to find out what
works best for you.

Security
Security Account Delegation (Impersonation)
If a user uses a SQL Server login, the security profile of the SQL Server process account is used. A login using
SQL Server authentication cannot be authenticated outside of the Database Engine. Therefore, when a BULK
INSERT command is initiated by a login using SQL Server authentication, the connection to the data is made
using the security context of the SQL Server process account (the account used by the SQL Server Database
Engine service). To successfully read the source data you must grant the account used by the SQL Server
Database Engine access to the source data. In contrast, if a SQL Server user logs on by using Windows
Authentication, the user can read only those files that can be accessed by the user account, regardless of the
security profile of the SQL Server process.
When executing the BULK INSERT statement by using sqlcmd or osql, from one computer, inserting data into
SQL Server on a second computer, and specifying a data_file on a third computer by using a UNC path, you may
receive a 4861 error.
To resolve this error, use SQL Server Authentication and specify a SQL Server login that uses the security profile
of the SQL Server process account, or configure Windows to enable security account delegation. For information
about how to enable a user account to be trusted for delegation, see Windows Help.
For more information about this and other security considerations for using BULK INSERT, see Import Bulk Data
by Using BULK INSERT or OPENROWSET(BULK...) (SQL Server).
Permissions
Requires INSERT and ADMINISTER BULK OPERATIONS permissions. In Azure SQL Database, INSERT and
ADMINISTER DATABASE BULK OPERATIONS permissions are required. Additionally, ALTER TABLE
permission is required if one or more of the following is true:
Constraints exist and the CHECK_CONSTRAINTS option is not specified.

NOTE
Disabling constraints is the default behavior. To check constraints explicitly, use the CHECK_CONSTRAINTS option.

Triggers exist and the FIRE_TRIGGER option is not specified.

NOTE
By default, triggers are not fired. To fire triggers explicitly, use the FIRE_TRIGGER option.

You use the KEEPIDENTITY option to import identity value from data file.

Examples
A. Using pipes to import data from a file
The following example imports order detail information into the AdventureWorks2012.Sales.SalesOrderDetail table
from the specified data file by using a pipe ( | ) as the field terminator and |\n as the row terminator.

BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail


FROM 'f:\orders\lineitem.tbl'
WITH
(
FIELDTERMINATOR =' |',
ROWTERMINATOR =' |\n'
);

B. Using the FIRE_TRIGGERS argument


The following example specifies the FIRE_TRIGGERS argument.
BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail
FROM 'f:\orders\lineitem.tbl'
WITH
(
FIELDTERMINATOR =' |',
ROWTERMINATOR = ':\n',
FIRE_TRIGGERS
);

C. Using line feed as a row terminator


The following example imports a file that uses the line feed as a row terminator such as a UNIX output:

DECLARE @bulk_cmd varchar(1000);


SET @bulk_cmd = 'BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail
FROM ''<drive>:\<path>\<filename>''
WITH (ROWTERMINATOR = '''+CHAR(10)+''')';
EXEC(@bulk_cmd);

NOTE
This is due to how Microsoft Windows treats text files: \n automatically gets replaced with \r\n.

D. Specifying a code page


The following example shows how to specify a code page.

BULK INSERT MyTable


FROM 'D:\data.csv'
WITH
( CODEPAGE = '65001',
DATAFILETYPE = 'char',
FIELDTERMINATOR = ','
);

E. Importing data from a CSV file


The following example shows how to specify a CSV file.

BULK INSERT Sales.Invoices


FROM '\\share\invoices\inv-2016-07-25.csv'
WITH (FORMAT = 'CSV');

F. Importing data from a file in Azure blob storage


The following example shows how to load data from a csv file in an Azure blob storage location, which has been
configured as an external data source. This requires a database scoped credential using a shared access signature.

BULK INSERT Sales.Invoices


FROM 'inv-2017-01-19.csv'
WITH (DATA_SOURCE = 'MyAzureInvoices',
FORMAT = 'CSV');

For complete BULK INSERT examples including configuring the credential and external data source, see Examples
of Bulk Access to Data in Azure Blob Storage.
Additional Examples
Other BULK INSERT examples are provided in the following topics:
Examples of Bulk Import and Export of XML Documents (SQL Server)
Keep Identity Values When Bulk Importing Data (SQL Server)
Keep Nulls or Use Default Values During Bulk Import (SQL Server)
Specify Field and Row Terminators (SQL Server)
Use a Format File to Bulk Import Data (SQL Server)
Use Character Format to Import or Export Data (SQL Server)
Use Native Format to Import or Export Data (SQL Server)
Use Unicode Character Format to Import or Export Data (SQL Server)
Use Unicode Native Format to Import or Export Data (SQL Server)
Use a Format File to Skip a Table Column (SQL Server)
Use a Format File to Map Table Columns to Data-File Fields (SQL Server)

See Also
Bulk Import and Export of Data (SQL Server)
bcp Utility
Format Files for Importing or Exporting Data (SQL Server)
INSERT (Transact-SQL)
OPENROWSET (Transact-SQL)
Prepare Data for Bulk Export or Import (SQL Server)
sp_tableoption (Transact-SQL)
CREATE AGGREGATE (Transact-SQL)
5/4/2018 • 3 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a user-defined aggregate function whose implementation is defined in a class of an assembly in the .NET
Framework. For the Database Engine to bind the aggregate function to its implementation, the .NET Framework
assembly that contains the implementation must first be uploaded into an instance of SQL Server by using a
CREATE ASSEMBLY statement.
Transact-SQL Syntax Conventions

Syntax
CREATE AGGREGATE [ schema_name . ] aggregate_name
(@param_name <input_sqltype>
[ ,...n ] )
RETURNS <return_sqltype>
EXTERNAL NAME assembly_name [ .class_name ]

<input_sqltype> ::=
system_scalar_type | { [ udt_schema_name. ] udt_type_name }

<return_sqltype> ::=
system_scalar_type | { [ udt_schema_name. ] udt_type_name }

Arguments
schema_name
Is the name of the schema to which the user-defined aggregate function belongs.
aggregate_name
Is the name of the aggregate function you want to create.
@ param_name
One or more parameters in the user-defined aggregate. The value of a parameter must be supplied by the user
when the aggregate function is executed. Specify a parameter name by using an "at" sign (@) as the first character.
The parameter name must comply with the rules for identifiers. Parameters are local to the function.
system_scalar_type
Is any one of the SQL Server system scalar data types to hold the value of the input parameter or return value. All
scalar data types can be used as a parameter for a user-defined aggregate, except text, ntext, and image.
Nonscalar types, such as cursor and table, cannot be specified.
udt_schema_name
Is the name of the schema to which the CLR user-defined type belongs. If not specified, the Database Engine
references udt_type_name in the following order:
The native SQL type namespace.
The default schema of the current user in the current database.
The dbo schema in the current database.
udt_type_name
Is the name of a CLR user-defined type already created in the current database. If udt_schema_name is not
specified, SQL Server assumes the type belongs to the schema of the current user.
assembly_name [ .class_name ]
Specifies the assembly to bind with the user-defined aggregate function and, optionally, the name of the
schema to which the assembly belongs and the name of the class in the assembly that implements the user-
defined aggregate. The assembly must already have been created in the database by using a CREATE
ASSEMBLY statement. class_name must be a valid SQL Server identifier and match the name of a class
that exists in the assembly. class_name may be a namespace-qualified name if the programming language
used to write the class uses namespaces, such as C#. If class_name is not specified, SQL Server assumes it
is the same as aggregate_name.

Remarks
By default, the ability of SQL Server to run CLR code is off. You can create, modify, and drop database objects that
reference managed code modules, but the code in these modules will not run in an instance of SQL Server unless
the clr enabled option is enabled by using sp_configure.
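For example, a member of the sysadmin fixed server role can enable CLR execution as follows (a minimal sketch):

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO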
The class of the assembly referenced in assembly_name, and its methods, should satisfy all the requirements for
implementing a user-defined aggregate function in an instance of SQL Server. For more information, see CLR
User-Defined Aggregates.

Permissions
Requires CREATE AGGREGATE permission and also REFERENCES permission on the assembly that is specified
in the EXTERNAL NAME clause.

Examples
The following example assumes that a StringUtilities.csproj sample application is compiled. For more information,
see String Utility Functions Sample.
The example creates the aggregate Concatenate. Before the aggregate is created, the assembly StringUtilities.dll is
registered in the local database.

USE AdventureWorks2012;
GO
DECLARE @SamplesPath nvarchar(1024);
-- You may have to modify the value of this variable if you have
-- installed the sample in some location other than the default location.
SELECT @SamplesPath = REPLACE(physical_name,
    'Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\master.mdf',
    'Microsoft SQL Server\130\Samples\Engine\Programmability\CLR\')
FROM master.sys.database_files
WHERE name = 'master';

CREATE ASSEMBLY StringUtilities
FROM @SamplesPath + 'StringUtilities\CS\StringUtilities\bin\debug\StringUtilities.dll'
WITH PERMISSION_SET = SAFE;
GO

CREATE AGGREGATE Concatenate(@input nvarchar(4000))
RETURNS nvarchar(4000)
EXTERNAL NAME [StringUtilities].[Microsoft.Samples.SqlServer.Concatenate];
GO
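Once created, the aggregate can be invoked like a built-in aggregate function. The following usage sketch assumes
the aggregate landed in the dbo schema and uses the AdventureWorks2012 Person.Person table; any nvarchar
column would do:

SELECT dbo.Concatenate(p.LastName) AS AllLastNames
FROM Person.Person AS p;
GO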
See Also
DROP AGGREGATE (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds an application role to the current database.
Transact-SQL Syntax Conventions

Syntax
CREATE APPLICATION ROLE application_role_name
WITH PASSWORD = 'password' [ , DEFAULT_SCHEMA = schema_name ]

Arguments
application_role_name
Specifies the name of the application role. This name must not already be used to refer to any principal in the
database.
PASSWORD ='password'
Specifies the password that database users will use to activate the application role. You should always use strong
passwords. password must meet the Windows password policy requirements of the computer that is running
the instance of SQL Server.
DEFAULT_SCHEMA =schema_name
Specifies the first schema that will be searched by the server when it resolves the names of objects for this role.
If DEFAULT_SCHEMA is left undefined, the application role will use DBO as its default schema. schema_name
can be a schema that does not exist in the database.

Remarks
IMPORTANT
Password complexity is checked when application role passwords are set. Applications that invoke application roles must
store their passwords. Application role passwords should always be stored encrypted.

Application roles are visible in the sys.database_principals catalog view.


For information about how to use application roles, see Application Roles.
Caution

Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that
schemas are equivalent to database users may no longer return correct results. Old catalog views, including
sysobjects, should not be used in a database in which any of the following DDL statements have ever been used:
CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE
ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL).

Permissions
Requires ALTER ANY APPLICATION ROLE permission on the database.

Examples
The following example creates an application role called weekly_receipts that has the password
987G^bv876sPY)Y5m23 and Sales as its default schema.

CREATE APPLICATION ROLE weekly_receipts
WITH PASSWORD = '987G^bv876sPY)Y5m23',
     DEFAULT_SCHEMA = Sales;
GO
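After the role is created, an application can activate it with sp_setapprole; a minimal sketch using the role above:

EXEC sp_setapprole 'weekly_receipts', '987G^bv876sPY)Y5m23';
GO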

See Also
Application Roles
sp_setapprole (Transact-SQL)
ALTER APPLICATION ROLE (Transact-SQL)
DROP APPLICATION ROLE (Transact-SQL)
Password Policy
EVENTDATA (Transact-SQL)
CREATE ASSEMBLY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Creates a managed application module that contains class metadata and managed code as an object in an
instance of SQL Server. By referencing this module, common language runtime (CLR) functions, stored
procedures, triggers, user-defined aggregates, and user-defined types can be created in the database.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

WARNING
CLR uses Code Access Security (CAS) in the .NET Framework, which is no longer supported as a security boundary. A CLR
assembly created with PERMISSION_SET = SAFE may be able to access external system resources, call unmanaged code,
and acquire sysadmin privileges. Beginning with SQL Server 2017 (14.x), an sp_configure option called
clr strict security is introduced to enhance the security of CLR assemblies. clr strict security is enabled by
default, and treats SAFE and EXTERNAL_ACCESS assemblies as if they were marked UNSAFE . The clr strict security
option can be disabled for backward compatibility, but this is not recommended. Microsoft recommends that all assemblies
be signed by a certificate or asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY
permission in the master database. For more information, see CLR strict security.

Transact-SQL Syntax Conventions

Syntax
CREATE ASSEMBLY assembly_name
[ AUTHORIZATION owner_name ]
FROM { <client_assembly_specifier> | <assembly_bits> [ ,...n ] }
[ WITH PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE } ]
[ ; ]
<client_assembly_specifier> ::=
'[\\computer_name\]share_name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'

<assembly_bits> ::=
{ varbinary_literal | varbinary_expression }

Arguments
assembly_name
Is the name of the assembly. The name must be unique within the database and a valid identifier.
AUTHORIZATION owner_name
Specifies the name of a user or role as owner of the assembly. owner_name must either be the name of a role of
which the current user is a member, or the current user must have IMPERSONATE permission on owner_name.
If not specified, ownership is given to the current user.
<client_assembly_specifier>
Specifies the local path or network location where the assembly that is being uploaded is located, and also the
manifest file name that corresponds to the assembly. <client_assembly_specifier> can be expressed as a fixed
string or an expression evaluating to a fixed string, with variables. CREATE ASSEMBLY does not support loading
multimodule assemblies. SQL Server also looks for any dependent assemblies of this assembly in the same
location and also uploads them with the same owner as the root level assembly. If these dependent assemblies
are not found and they are not already loaded in the current database, CREATE ASSEMBLY fails. If the dependent
assemblies are already loaded in the current database, the owner of those assemblies must be the same as the
owner of the newly created assembly.
<client_assembly_specifier> cannot be specified if the logged in user is being impersonated.
<assembly_bits>
Is the list of binary values that make up the assembly and its dependent assemblies. The first value in the list is
considered the root-level assembly. The values corresponding to the dependent assemblies can be supplied in any
order. Any values that do not correspond to dependencies of the root assembly are ignored.

NOTE
This option is not available in a contained database.

varbinary_literal
Is a varbinary literal.
varbinary_expression
Is an expression of type varbinary.
PERMISSION_SET { SAFE | EXTERNAL_ACCESS | UNSAFE }

IMPORTANT
The PERMISSION_SET option is affected by the clr strict security option, described in the opening warning. When
clr strict security is enabled, all assemblies are treated as UNSAFE .

Specifies a set of code access permissions that are granted to the assembly when it is accessed by SQL Server. If
not specified, SAFE is applied as the default.
We recommend using SAFE. SAFE is the most restrictive permission set. Code executed by an assembly with
SAFE permissions cannot access external system resources such as files, the network, environment variables, or
the registry.
EXTERNAL_ACCESS enables assemblies to access certain external system resources such as files, networks,
environmental variables, and the registry.

NOTE
This option is not available in a contained database.

UNSAFE enables assemblies unrestricted access to resources, both within and outside an instance of SQL Server.
Code running from within an UNSAFE assembly can call unmanaged code.

NOTE
This option is not available in a contained database.
IMPORTANT
SAFE is the recommended permission setting for assemblies that perform computation and data management tasks
without accessing resources outside an instance of SQL Server.
We recommend using EXTERNAL_ACCESS for assemblies that access resources outside of an instance of SQL Server.
EXTERNAL_ACCESS assemblies include the reliability and scalability protections of SAFE assemblies, but from a security
perspective are similar to UNSAFE assemblies. This is because code in EXTERNAL_ACCESS assemblies runs by default under
the SQL Server service account and accesses external resources under that account, unless the code explicitly impersonates
the caller. Therefore, permission to create EXTERNAL_ACCESS assemblies should be granted only to logins that are trusted
to run code under the SQL Server service account. For more information about impersonation, see CLR Integration
Security.
Specifying UNSAFE enables the code in the assembly complete freedom to perform operations in the SQL Server process
space that can potentially compromise the robustness of SQL Server. UNSAFE assemblies can also potentially subvert the
security system of either SQL Server or the common language runtime. UNSAFE permissions should be granted only to
highly trusted assemblies. Only members of the sysadmin fixed server role can create and alter UNSAFE assemblies.

For more information about assembly permission sets, see Designing Assemblies.

Remarks
CREATE ASSEMBLY uploads an assembly that was previously compiled as a .dll file from managed code for use
inside an instance of SQL Server.
When clr strict security is enabled, the PERMISSION_SET option in the CREATE ASSEMBLY and ALTER ASSEMBLY
statements is ignored at run-time, but the PERMISSION_SET option is preserved in metadata. Ignoring the option
minimizes breaking changes to existing code.
SQL Server does not allow registering different versions of an assembly with the same name, culture and public
key.
When attempting to access the assembly specified in <client_assembly_specifier>, SQL Server impersonates the
security context of the current Windows login. If <client_assembly_specifier> specifies a network location (UNC
path), the impersonation of the current login is not carried forward to the network location because of delegation
limitations. In this case, access is made using the security context of the SQL Server service account. For more
information, see Credentials (Database Engine).
Besides the root assembly specified by assembly_name, SQL Server tries to upload any assemblies that are
referenced by the root assembly being uploaded. If a referenced assembly is already uploaded to the database
because of an earlier CREATE ASSEMBLY statement, this assembly is not uploaded but is available to the root
assembly. If a dependent assembly was not previously uploaded, but SQL Server cannot locate its manifest file in
the source directory, CREATE ASSEMBLY returns an error.
If any dependent assemblies referenced by the root assembly are not already in the database and are implicitly
loaded together with the root assembly, they have the same permission set as the root level assembly. If the
dependent assemblies must be created by using a different permission set than the root-level assembly, they
must be uploaded explicitly before the root level assembly with the appropriate permission set.

Assembly Validation
SQL Server performs checks on the assembly binaries uploaded by the CREATE ASSEMBLY statement to
guarantee the following:
The assembly binary is well formed with valid metadata and code segments, and the code segments have
valid Microsoft Intermediate Language (MSIL) instructions.
The set of system assemblies it references is one of the following supported assemblies in SQL Server:
Microsoft.Visualbasic.dll, Mscorlib.dll, System.Data.dll, System.dll, System.Xml.dll, Microsoft.Visualc.dll,
Custommarshallers.dll, System.Security.dll, System.Web.Services.dll, System.Data.SqlXml.dll,
System.Core.dll, and System.Xml.Linq.dll. Other system assemblies can be referenced, but they must be
explicitly registered in the database.
For assemblies created by using the SAFE or EXTERNAL_ACCESS permission sets:
The assembly code should be type-safe. Type safety is established by running the common
language runtime verifier against the assembly.
The assembly should not contain any static data members in its classes unless they are marked as
read-only.
The classes in the assembly cannot contain finalizer methods.
The classes or methods of the assembly should be annotated only with allowed code attributes. For
more information, see Custom Attributes for CLR Routines.
Besides the previous checks that are performed when CREATE ASSEMBLY executes, there are additional
checks that are performed at execution time of the code in the assembly:
Calling certain Microsoft .NET Framework APIs that require a specific Code Access Permission may fail if
the permission set of the assembly does not include that permission.
For SAFE and EXTERNAL_ACCESS assemblies, any attempt to call .NET Framework APIs that are
annotated with certain HostProtectionAttributes will fail.
For more information, see Designing Assemblies.

Permissions
Requires CREATE ASSEMBLY permission.
If PERMISSION_SET = EXTERNAL_ACCESS is specified, requires EXTERNAL ACCESS ASSEMBLY
permission on the server. If PERMISSION_SET = UNSAFE is specified, requires UNSAFE ASSEMBLY
permission on the server.
The user must be the owner of any assemblies that are referenced by the assembly being uploaded, if those
assemblies already exist in the database. To upload an assembly by using a file path, the current user must be a
Windows authenticated login or a member of the sysadmin fixed server role. The Windows login of the user that
executes CREATE ASSEMBLY must have read permission on the share and the files being loaded in the
statement.
Permissions with CLR strict security
The following permissions are required to create a CLR assembly when CLR strict security is enabled:
The user must have the CREATE ASSEMBLY permission
And one of the following conditions must also be true:
The assembly is signed with a certificate or asymmetric key that has a corresponding login with the
UNSAFE ASSEMBLY permission on the server. Signing the assembly is recommended.
The database has the TRUSTWORTHY property set to ON , and the database is owned by a login that has
the UNSAFE ASSEMBLY permission on the server. This option is not recommended.
For more information about assembly permission sets, see Designing Assemblies.
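As a sketch of the recommended signing approach (the file path, key name, and login name below are
hypothetical), you can create an asymmetric key from the assembly file in master, create a login from that key,
and grant the login UNSAFE ASSEMBLY before running CREATE ASSEMBLY:

USE master;
GO
-- Hypothetical path; point this at your compiled assembly.
CREATE ASYMMETRIC KEY HelloWorldKey
    FROM EXECUTABLE FILE = 'C:\Assemblies\HelloWorld.dll';
CREATE LOGIN HelloWorldLogin FROM ASYMMETRIC KEY HelloWorldKey;
GRANT UNSAFE ASSEMBLY TO HelloWorldLogin;
GO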

Examples
Example A: Creating an assembly from a dll
Applies to: SQL Server 2008 through SQL Server 2017.
The following example assumes that the SQL Server Database Engine samples are installed in the default
location of the local computer and the HelloWorld.csproj sample application is compiled. For more information,
see Hello World Sample.

CREATE ASSEMBLY HelloWorld
FROM '<system_drive>:\Program Files\Microsoft SQL Server\100\Samples\HelloWorld\CS\HelloWorld\bin\debug\HelloWorld.dll'
WITH PERMISSION_SET = SAFE;

Example B: Creating an assembly from assembly bits


Applies to: SQL Server 2008 through SQL Server 2017.
Replace the sample bits (which are not complete or valid) with your assembly bits.

CREATE ASSEMBLY HelloWorld
FROM 0x4D5A900000000000
WITH PERMISSION_SET = SAFE;

See Also
ALTER ASSEMBLY (Transact-SQL)
DROP ASSEMBLY (Transact-SQL)
CREATE FUNCTION (Transact-SQL)
CREATE PROCEDURE (Transact-SQL)
CREATE TRIGGER (Transact-SQL)
CREATE TYPE (Transact-SQL)
CREATE AGGREGATE (Transact-SQL)
EVENTDATA (Transact-SQL)
Usage Scenarios and Examples for Common Language Runtime (CLR) Integration
CREATE ASYMMETRIC KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an asymmetric key in the database.
This feature is incompatible with database export using Data Tier Application Framework (DACFx). You must
drop all asymmetric keys before exporting.
Transact-SQL Syntax Conventions

Syntax
CREATE ASYMMETRIC KEY Asym_Key_Name
[ AUTHORIZATION database_principal_name ]
[ FROM <Asym_Key_Source> ]
[ WITH <key_option> ]
[ ENCRYPTION BY <encrypting_mechanism> ]
[ ; ]

<Asym_Key_Source>::=
FILE = 'path_to_strong-name_file'
| EXECUTABLE FILE = 'path_to_executable_file'
| ASSEMBLY Assembly_Name
| PROVIDER Provider_Name

<key_option> ::=
ALGORITHM = <algorithm>
|
PROVIDER_KEY_NAME = 'key_name_in_provider'
|
CREATION_DISPOSITION = { CREATE_NEW | OPEN_EXISTING }

<algorithm> ::=
{ RSA_4096 | RSA_3072 | RSA_2048 | RSA_1024 | RSA_512 }

<encrypting_mechanism> ::=
PASSWORD = 'password'

Arguments
FROM Asym_Key_Source
Specifies the source from which to load the asymmetric key pair.
AUTHORIZATION database_principal_name
Specifies the owner of the asymmetric key. The owner cannot be a role or a group. If this option is omitted, the
owner will be the current user.
FILE ='path_to_strong-name_file'
Specifies the path of a strong-name file from which to load the key pair.
NOTE
This option is not available in a contained database.

EXECUTABLE FILE ='path_to_executable_file'
Specifies an assembly file from which to load the public key. Limited to 260 characters by MAX_PATH from the
Windows API.

NOTE
This option is not available in a contained database.

ASSEMBLY Assembly_Name
Specifies the name of an assembly from which to load the public key.
ENCRYPTION BY <encrypting_mechanism>
Specifies how the private key is encrypted. As shown in the syntax above, the encrypting mechanism for an
asymmetric key is a password.
PROVIDER_KEY_NAME ='key_name_in_provider'
Specifies the key name from the external provider. For more information about external key management, see
Extensible Key Management (EKM).
CREATION_DISPOSITION = CREATE_NEW
Creates a new key on the Extensible Key Management device. PROVIDER_KEY_NAME must be used to specify the
key name on the device. If a key already exists on the device, the statement fails with an error.
CREATION_DISPOSITION = OPEN_EXISTING
Maps a SQL Server asymmetric key to an existing Extensible Key Management key. PROVIDER_KEY_NAME must be
used to specify the key name on the device. If CREATION_DISPOSITION = OPEN_EXISTING is not provided, the
default is CREATE_NEW.
ALGORITHM = <algorithm>
Five algorithms can be provided: RSA_4096, RSA_3072, RSA_2048, RSA_1024, and RSA_512.
RSA_1024 and RSA_512 are deprecated. To use RSA_1024 or RSA_512 (not recommended) you must set the
database to database compatibility level 120 or lower.
PASSWORD = 'password'
Specifies the password with which to encrypt the private key. If this clause is not present, the private key will be
encrypted with the database master key. password is a maximum of 128 characters. password must meet the
Windows password policy requirements of the computer that is running the instance of SQL Server.

Remarks
An asymmetric key is a securable entity at the database level. In its default form, this entity contains both a public
key and a private key. When executed without the FROM clause, CREATE ASYMMETRIC KEY generates a new
key pair. When executed with the FROM clause, CREATE ASYMMETRIC KEY imports a key pair from a file or
imports a public key from an assembly.
By default, the private key is protected by the database master key. If no database master key has been created, a
password is required to protect the private key. If a database master key does exist, the password is optional.
The private key can be 512, 1024, 2048, 3072, or 4096 bits long.
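For example, the following sketch (the password and key name are placeholders) first creates a database master
key and then creates an asymmetric key with no ENCRYPTION BY clause, so the private key is protected by the
database master key:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>';
GO
CREATE ASYMMETRIC KEY PacificSales20
    WITH ALGORITHM = RSA_2048; -- private key protected by the database master key
GO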

Permissions
Requires CREATE ASYMMETRIC KEY permission on the database. If the AUTHORIZATION clause is specified,
requires IMPERSONATE permission on the database principal, or ALTER permission on the application role.
Only Windows logins, SQL Server logins, and application roles can own asymmetric keys. Groups and roles
cannot own asymmetric keys.

Examples
A. Creating an asymmetric key
The following example creates an asymmetric key named PacificSales09 by using the RSA_2048 algorithm, and
protects the private key with a password.

CREATE ASYMMETRIC KEY PacificSales09
WITH ALGORITHM = RSA_2048
ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>';
GO
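The key can then be used with the asymmetric-key cryptographic functions. A usage sketch (the plaintext is
illustrative, and the private-key password is required for decryption):

DECLARE @ciphertext varbinary(256);
SET @ciphertext = EncryptByAsymKey(AsymKey_ID('PacificSales09'), N'Pacific sales target');
SELECT CONVERT(nvarchar(100),
    DecryptByAsymKey(AsymKey_ID('PacificSales09'), @ciphertext,
        N'<enterStrongPasswordHere>'));
GO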

B. Creating an asymmetric key from a file, giving authorization to a user


The following example creates the asymmetric key PacificSales19 from a key pair stored in a file, and then
authorizes user Christina to use the asymmetric key.

CREATE ASYMMETRIC KEY PacificSales19 AUTHORIZATION Christina
FROM FILE = 'c:\PacSales\Managers\ChristinaCerts.tmp'
ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>';
GO

C. Creating an asymmetric key from an EKM provider


The following example creates the asymmetric key EKM_askey1 by using an Extensible Key Management provider
called EKM_Provider1, creating a new key on that provider called key10_user1.

CREATE ASYMMETRIC KEY EKM_askey1
FROM PROVIDER EKM_Provider1
WITH
    ALGORITHM = RSA_2048,
    CREATION_DISPOSITION = CREATE_NEW,
    PROVIDER_KEY_NAME = 'key10_user1';
GO

See Also
Choose an Encryption Algorithm
ALTER ASYMMETRIC KEY (Transact-SQL)
DROP ASYMMETRIC KEY (Transact-SQL)
Encryption Hierarchy
Extensible Key Management Using Azure Key Vault (SQL Server)
CREATE AVAILABILITY GROUP (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new availability group, if the instance of SQL Server is enabled for the Always On availability groups
feature.

IMPORTANT
Execute CREATE AVAILABILITY GROUP on the instance of SQL Server that you intend to use as the initial primary replica of
your new availability group. This server instance must reside on a Windows Server Failover Clustering (WSFC) node.

Transact-SQL Syntax Conventions

Syntax
CREATE AVAILABILITY GROUP group_name
{ <availability_group_spec> | <distributed_availability_group_spec> }
[ ; ]

<availability_group_spec>::=
[ WITH (<with_option_spec> [ ,...n ] ) ]
FOR [ DATABASE database_name [ ,...n ] ]
REPLICA ON <add_replica_spec> [ ,...n ]
[ LISTENER 'dns_name' ( <listener_option> ) ]

<with_option_spec>::=
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY | SECONDARY | NONE }
| FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
| HEALTH_CHECK_TIMEOUT = milliseconds
| DB_FAILOVER = { ON | OFF }
| DTC_SUPPORT = { PER_DB | NONE }
| BASIC
| REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = { integer }
| CLUSTER_TYPE = { WSFC | EXTERNAL | NONE }

<add_replica_spec>::=
<server_instance> WITH
(
ENDPOINT_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT | CONFIGURATION_ONLY },
FAILOVER_MODE = { AUTOMATIC | MANUAL | EXTERNAL }
[ , <add_replica_option> [ ,...n ] ]
)

<add_replica_option>::=
SEEDING_MODE = { AUTOMATIC | MANUAL }
| BACKUP_PRIORITY = n
| SECONDARY_ROLE ( {
[ ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL } ]
[,] [ READ_ONLY_ROUTING_URL = 'TCP://system-address:port' ]
} )
| PRIMARY_ROLE ( {
[ ALLOW_CONNECTIONS = { READ_WRITE | ALL } ]
[,] [ READ_ONLY_ROUTING_LIST = { ( '<server_instance>' [ ,...n ] ) | NONE } ]
} )
| SESSION_TIMEOUT = integer

<listener_option> ::=
{
WITH DHCP [ ON ( <network_subnet_option> ) ]
| WITH IP ( { ( <ip_address_option> ) } [ , ...n ] ) [ , PORT = listener_port ]
}

<network_subnet_option> ::=
'four_part_ipv4_address', 'four_part_ipv4_mask'

<ip_address_option> ::=
{
'four_part_ipv4_address', 'four_part_ipv4_mask'
| 'ipv6_address'
}

<distributed_availability_group_spec>::=
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON <add_availability_group_spec> [ ,...2 ]

<add_availability_group_spec>::=
<ag_name> WITH
(
LISTENER_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT },
FAILOVER_MODE = MANUAL,
SEEDING_MODE = { AUTOMATIC | MANUAL }
)

Arguments
group_name
Specifies the name of the new availability group. group_name must be a valid SQL Server identifier, and it must
be unique across all availability groups in the WSFC cluster. The maximum length for an availability group name
is 128 characters.
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY | SECONDARY | NONE }
Specifies a preference about how a backup job should evaluate the primary replica when choosing where to
perform backups. You can script a given backup job to take the automated backup preference into account. It is
important to understand that the preference is not enforced by SQL Server, so it has no impact on ad-hoc
backups.
The supported values are as follows:
PRIMARY
Specifies that the backups should always occur on the primary replica. This option is useful if you need backup
features, such as creating differential backups, that are not supported when backup is run on a secondary replica.

IMPORTANT
If you plan to use log shipping to prepare any secondary databases for an availability group, set the automated backup
preference to Primary until all the secondary databases have been prepared and joined to the availability group.

SECONDARY_ONLY
Specifies that backups should never be performed on the primary replica. If the primary replica is the only replica
online, the backup should not occur.
SECONDARY
Specifies that backups should occur on a secondary replica except when the primary replica is the only replica
online. In that case, the backup should occur on the primary replica. This is the default behavior.
NONE
Specifies that you prefer that backup jobs ignore the role of the availability replicas when choosing the replica to
perform backups. Note backup jobs might evaluate other factors such as backup priority of each availability
replica in combination with its operational state and connected state.

IMPORTANT
There is no enforcement of the AUTOMATED_BACKUP_PREFERENCE setting. The interpretation of this preference depends
on the logic, if any, that you script into backup jobs for the databases in a given availability group. The automated backup
preference setting has no impact on ad-hoc backups. For more information, see Configure Backup on Availability Replicas
(SQL Server).

NOTE
To view the automated backup preference of an existing availability group, select the automated_backup_preference or
automated_backup_preference_desc column of the sys.availability_groups catalog view. Additionally,
sys.fn_hadr_backup_is_preferred_replica (Transact-SQL) can be used to determine the preferred backup replica. This function
returns 1 for at least one of the replicas, even when AUTOMATED_BACKUP_PREFERENCE = NONE .

FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
Specifies what failure conditions trigger an automatic failover for this availability group.
FAILURE_CONDITION_LEVEL is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT).
Furthermore, failure conditions can trigger an automatic failover only if both the primary and secondary replicas
are configured for automatic failover mode (FAILOVER_MODE = AUTOMATIC ) and the secondary replica is
currently synchronized with the primary replica.
The failure-condition levels (1–5) range from the least restrictive, level 1, to the most restrictive, level 5. A given
condition level encompasses all the less restrictive levels. Thus, the strictest condition level, 5, includes the four
less restrictive condition levels (1-4), level 4 includes levels 1-3, and so forth. The following table describes the
failure-condition that corresponds to each level.

LEVEL    FAILURE CONDITION

1        Specifies that an automatic failover should be initiated when any of the following occurs:
         - The SQL Server service is down.
         - The lease of the availability group for connecting to the WSFC cluster expires because no ACK is
           received from the server instance. For more information, see How It Works: SQL Server Always On
           Lease Timeout.

2        Specifies that an automatic failover should be initiated when any of the following occurs:
         - The instance of SQL Server does not connect to the cluster, and the user-specified
           HEALTH_CHECK_TIMEOUT threshold of the availability group is exceeded.
         - The availability replica is in a failed state.

3        Specifies that an automatic failover should be initiated on critical SQL Server internal errors, such as
         orphaned spinlocks, serious write-access violations, or too much dumping. This is the default behavior.

4        Specifies that an automatic failover should be initiated on moderate SQL Server internal errors, such as
         a persistent out-of-memory condition in the SQL Server internal resource pool.

5        Specifies that an automatic failover should be initiated on any qualified failure condition, including:
         - Exhaustion of SQL Engine worker-threads.
         - Detection of an unsolvable deadlock.

NOTE
Lack of response by an instance of SQL Server to client requests is not relevant to availability groups.

The FAILURE_CONDITION_LEVEL and HEALTH_CHECK_TIMEOUT values define a flexible failover policy for a
given group. This flexible failover policy provides you with granular control over what conditions must cause an
automatic failover. For more information, see Flexible Failover Policy for Automatic Failover of an Availability
Group (SQL Server).
HEALTH_CHECK_TIMEOUT = milliseconds
Specifies the wait time (in milliseconds) for the sp_server_diagnostics system stored procedure to return server-
health information before the WSFC cluster assumes that the server instance is slow or hung.
HEALTH_CHECK_TIMEOUT is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode with automatic failover (AVAILABILITY_MODE =
SYNCHRONOUS_COMMIT). Furthermore, a health-check timeout can trigger an automatic failover only if both
the primary and secondary replicas are configured for automatic failover mode (FAILOVER_MODE =
AUTOMATIC ) and the secondary replica is currently synchronized with the primary replica.
The default HEALTH_CHECK_TIMEOUT value is 30000 milliseconds (30 seconds). The minimum value is 15000
milliseconds (15 seconds), and the maximum value is 4294967295 milliseconds.

IMPORTANT
sp_server_diagnostics does not perform health checks at the database level.

DB_FAILOVER = { ON | OFF }
Specifies the response to take when a database on the primary replica is offline. When set to ON, any status other
than ONLINE for a database in the availability group triggers an automatic failover. When this option is set to
OFF, only the health of the instance is used to trigger automatic failover.
For more information regarding this setting, see Database Level Health Detection Option.
DTC_SUPPORT = { PER_DB | NONE }
Specifies whether cross-database transactions are supported through the distributed transaction coordinator
(DTC). Cross-database transactions are only supported beginning in SQL Server 2016 (13.x). PER_DB creates the
availability group with support for these transactions. For more information, see Cross-Database Transactions and
Distributed Transactions for Always On Availability Groups and Database Mirroring (SQL Server).
BASIC
Used to create a basic availability group. Basic availability groups are limited to one database and two replicas: a
primary replica and one secondary replica. This option is a replacement for the deprecated database mirroring
feature on SQL Server Standard Edition. For more information, see Basic Availability Groups (Always On
Availability Groups). Basic availability groups are supported beginning in SQL Server 2016 (13.x).
DISTRIBUTED
Used to create a distributed availability group. The DISTRIBUTED option cannot be combined with any other
options or clauses. This option is used with the AVAILABILITY GROUP ON parameter to connect two availability
groups in separate Windows Server Failover Clusters. For more information, see Distributed Availability Groups
(Always On Availability Groups). Distributed availability groups are supported beginning in SQL Server 2016
(13.x).
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT
Introduced in SQL Server 2017. Used to set a minimum number of synchronous secondary replicas required to
commit before the primary commits a transaction. Guarantees that a SQL Server transaction waits until the
transaction logs are updated on the minimum number of secondary replicas. The default is 0, which gives the
same behavior as SQL Server 2016. The minimum value is 0. The maximum value is the number of replicas
minus 1. This option relates to replicas in synchronous commit mode. When replicas are in synchronous commit
mode, writes on the primary replica wait until writes on the secondary synchronous replicas are committed to the
replica database transaction log. If a SQL Server that hosts a secondary synchronous replica stops responding,
the SQL Server that hosts the primary replica marks that secondary replica as NOT SYNCHRONIZED and
proceeds. When the unresponsive database comes back online, it is in a "not synced" state, and the replica is
marked as unhealthy until the primary can make it synchronous again. This setting guarantees that the primary
replica waits until the minimum number of replicas have committed each transaction. If the minimum number of
replicas is not available, commits on the primary fail. For cluster type EXTERNAL, the setting is changed when the
availability group is added to a cluster resource. See High availability and data protection for availability group
configurations.
CLUSTER_TYPE
Introduced in SQL Server 2017. Used to identify whether the availability group is on a Windows Server Failover
Cluster (WSFC). Set to WSFC when the availability group is on instances of SQL Server on a Windows Server
failover cluster. Set to EXTERNAL when the cluster is managed by a cluster manager that is not a Windows Server
failover cluster, such as Linux Pacemaker. Set to NONE when the availability group does not use WSFC for cluster
coordination; for example, when an availability group includes Linux servers with no cluster manager.
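For example, a clusterless availability group (CLUSTER_TYPE = NONE) can be created with a minimal sketch like
the following; the server, database, and group names are illustrative:

CREATE AVAILABILITY GROUP [AgClusterless]
   WITH (CLUSTER_TYPE = NONE)
   FOR DATABASE [Db1]
   REPLICA ON N'Server1' WITH (
      ENDPOINT_URL = N'TCP://Server1:5022',
      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
      FAILOVER_MODE = MANUAL,
      SEEDING_MODE = AUTOMATIC
      );
GO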
DATABASE database_name
Specifies a list of one or more user databases on the local SQL Server instance (that is, the server instance on
which you are creating the availability group). You can specify multiple databases for an availability group, but
each database can belong to only one availability group. For information about the type of databases that an
availability group can support, see Prerequisites, Restrictions, and Recommendations for Always On Availability
Groups (SQL Server). To find out which local databases already belong to an availability group, see the replica_id
column in the sys.databases catalog view.
The DATABASE clause is optional. If you omit it, the new availability group is empty.
After you have created the availability group, connect to each server instance that hosts a secondary replica and
then prepare each secondary database and join it to the availability group. For more information, see Start Data
Movement on an Always On Secondary Database (SQL Server).
NOTE
Later, you can add eligible databases on the server instance that hosts the current primary replica to an availability group.
You can also remove a database from an availability group. For more information, see ALTER AVAILABILITY GROUP
(Transact-SQL).

REPLICA ON
Specifies from one to five SQL server instances to host availability replicas in the new availability group. Each
replica is specified by its server instance address followed by a WITH (…) clause. Minimally, you must specify your
local server instance, which becomes the initial primary replica. Optionally, you can also specify up to four
secondary replicas.
You need to join every secondary replica to the availability group. For more information, see ALTER
AVAILABILITY GROUP (Transact-SQL).

NOTE
If you specify fewer than four secondary replicas when you create an availability group, you can add an additional secondary
replica at any time by using the ALTER AVAILABILITY GROUP Transact-SQL statement. You can also use this statement to
remove any secondary replica from an existing availability group.

<server_instance> Specifies the address of the instance of SQL Server that is the host for a replica. The address
format depends on whether the instance is the default instance or a named instance and whether it is a standalone
instance or a failover cluster instance (FCI), as follows:
{ 'system_name[\instance_name]' | 'FCI_network_name[\instance_name]' }
The components of this address are as follows:
system_name
Is the NetBIOS name of the computer system on which the target instance of SQL Server resides. This computer
must be a WSFC node.
FCI_network_name
Is the network name that is used to access a SQL Server failover cluster. Use this if the server instance participates
as a SQL Server failover partner. Executing SELECT @@SERVERNAME on an FCI server instance returns its
entire 'FCI_network_name[\instance_name]' string (which is the full replica name).
instance_name
Is the name of an instance of SQL Server that is hosted by system_name or FCI_network_name and that has the
HADR service enabled. For a default server instance, instance_name is optional. The instance name is case
insensitive. On a stand-alone server instance, this value is the same as the value returned by executing
SELECT @@SERVERNAME.
\
Is a separator used only when specifying instance_name, in order to separate it from system_name or
FCI_network_name.
For information about the prerequisites for WSFC nodes and server instances, see Prerequisites, Restrictions, and
Recommendations for Always On Availability Groups (SQL Server).
ENDPOINT_URL ='TCP://system-address:port'
Specifies the URL path for the database mirroring endpoint on the instance of SQL Server that hosts the
availability replica that you are defining in your current REPLICA ON clause.
The ENDPOINT_URL clause is required. For more information, see Specify the Endpoint URL When Adding or
Modifying an Availability Replica (SQL Server).
'TCP://system-address:port'
Specifies a URL for specifying an endpoint URL or read-only routing URL. The URL parameters are as follows:
system-address
Is a string, such as a system name, a fully qualified domain name, or an IP address, that unambiguously identifies
the destination computer system.
port
Is a port number that is associated with the mirroring endpoint of the partner server instance (for the
ENDPOINT_URL option) or the port number used by the Database Engine of the server instance (for the
READ_ONLY_ROUTING_URL option).
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT | CONFIGURATION_ONLY }
SYNCHRONOUS_COMMIT or ASYNCHRONOUS_COMMIT specifies whether the primary replica has to wait
for the secondary replica to acknowledge the hardening (writing) of the log records to disk before the primary
replica can commit the transaction on a given primary database. The transactions on different databases on the
same primary replica can commit independently. SQL Server 2017 CU 1 introduces CONFIGURATION_ONLY.
CONFIGURATION_ONLY replica only applies to availability groups with CLUSTER_TYPE = EXTERNAL or
CLUSTER_TYPE = NONE.
SYNCHRONOUS_COMMIT
Specifies that the primary replica waits to commit transactions until they have been hardened on this secondary
replica (synchronous-commit mode). You can specify SYNCHRONOUS_COMMIT for up to three replicas,
including the primary replica.
ASYNCHRONOUS_COMMIT
Specifies that the primary replica commits transactions without waiting for this secondary replica to harden the
log (asynchronous-commit availability mode). You can specify ASYNCHRONOUS_COMMIT for up to five
availability replicas, including the primary replica.
CONFIGURATION_ONLY Specifies that the primary replica synchronously commits availability group
configuration metadata to the master database on this replica. The replica will not contain user data. This option:
Can be hosted on any edition of SQL Server, including Express Edition.
Requires the data mirroring endpoint of the CONFIGURATION_ONLY replica to be type WITNESS .
Cannot be altered.
Is not valid when CLUSTER_TYPE = WSFC .
For more information, see Configuration only replica.
The AVAILABILITY_MODE clause is required. For more information, see Availability Modes (Always On
Availability Groups).
FAILOVER_MODE = { AUTOMATIC | MANUAL }
Specifies the failover mode of the availability replica that you are defining.
AUTOMATIC
Enables automatic failover. This option is supported only if you also specify AVAILABILITY_MODE =
SYNCHRONOUS_COMMIT. You can specify AUTOMATIC for two availability replicas, including the
primary replica.
NOTE
SQL Server Failover Cluster Instances (FCIs) do not support automatic failover by availability groups, so any availability
replica that is hosted by an FCI can only be configured for manual failover.

MANUAL
Enables planned manual failover or forced manual failover (typically called forced failover) by the database
administrator.
The FAILOVER_MODE clause is required. The two types of manual failover, manual failover without data loss and
forced failover (with possible data loss), are supported under different conditions. For more information, see
Failover and Failover Modes (Always On Availability Groups).
SEEDING_MODE = { AUTOMATIC | MANUAL }
Specifies how the secondary replica is initially seeded.
AUTOMATIC
Enables direct seeding. This method seeds the secondary replica over the network. This method does not require
you to back up and restore a copy of the primary database on the replica.

NOTE
For direct seeding, you must allow database creation on each secondary replica by calling ALTER AVAILABILITY GROUP
with the GRANT CREATE ANY DATABASE option.
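For example (the availability group name is illustrative), run the following on each secondary replica to permit
direct seeding:

ALTER AVAILABILITY GROUP [MyAg] GRANT CREATE ANY DATABASE;
GO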

MANUAL
Specifies manual seeding (default). This method requires you to create a backup of the database on the primary
replica and manually restore that backup on the secondary replica.
BACKUP_PRIORITY = n
Specifies your priority for performing backups on this replica relative to the other replicas in the same availability
group. The value is an integer in the range of 0..100. These values have the following meanings:
1..100 indicates that the availability replica could be chosen for performing backups. 1 indicates the lowest
priority, and 100 indicates the highest priority. If BACKUP_PRIORITY = 1, the availability replica would be
chosen for performing backups only if no higher priority availability replicas are currently available.
0 indicates that this availability replica will never be chosen for performing backups. This is useful, for example, for a
remote availability replica to which you never want backups to fail over.
For more information, see Active Secondaries: Backup on Secondary Replicas (Always On Availability
Groups).
SECONDARY_ROLE ( … )
Specifies role-specific settings that take effect if this availability replica currently owns the secondary role
(that is, whenever it is a secondary replica). Within the parentheses, specify either or both secondary-role
options. If you specify both, use a comma-separated list.
The secondary role options are as follows:
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
Specifies whether the databases of a given availability replica that is performing the secondary role (that is,
is acting as a secondary replica) can accept connections from clients, one of:
NO
No user connections are allowed to secondary databases of this replica. They are not available for read
access. This is the default behavior.
READ_ONLY
Connections to the databases in the secondary replica are allowed only when the Application Intent
connection property is set to ReadOnly. For more information about this property, see Using Connection String
Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the secondary replica for read-only access.
For more information, see Active Secondaries: Readable Secondary Replicas (Always On Availability
Groups).
READ_ONLY_ROUTING_URL ='TCP://system -address:port'
Specifies the URL to be used for routing read-intent connection requests to this availability replica. This is
the URL on which the SQL Server Database Engine listens. Typically, the default instance of the SQL
Server Database Engine listens on TCP port 1433.
For a named instance, you can obtain the port number by querying the port and type_desc columns of the
sys.dm_tcp_listener_states dynamic management view. The server instance uses the Transact-SQL listener
(type_desc='TSQL').
For more information about calculating the read-only routing URL for a replica, see Calculating
read_only_routing_url for Always On.
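For example, the following sketch returns the port of the Transact-SQL listener on the current instance:

SELECT ip_address, port, type_desc, state_desc
FROM sys.dm_tcp_listener_states
WHERE type_desc = 'TSQL';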

NOTE
For a named instance of SQL Server, the Transact-SQL listener should be configured to use a specific port. For more
information, see Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager).

PRIMARY_ROLE ( … )
Specifies role-specific settings that take effect if this availability replica currently owns the primary role (that is,
whenever it is the primary replica). Within the parentheses, specify either or both primary-role options. If you
specify both, use a comma-separated list.
The primary role options are as follows:
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
Specifies the type of connection that the databases of a given availability replica that is performing the primary
role (that is, is acting as a primary replica) can accept from clients, one of:
READ_WRITE
Connections where the Application Intent connection property is set to ReadOnly are disallowed. When the
Application Intent property is set to ReadWrite or the Application Intent connection property is not set, the
connection is allowed. For more information about Application Intent connection property, see Using Connection
String Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the primary replica. This is the default behavior.
READ_ONLY_ROUTING_LIST = { ('<server_instance>' [ ,...n ] ) | NONE } Specifies a comma-separated list of
server instances that host availability replicas for this availability group that meet the following requirements
when running under the secondary role:
Be configured to allow all connections or read-only connections (see the ALLOW_CONNECTIONS
argument of the SECONDARY_ROLE option, above).
Have their read-only routing URL defined (see the READ_ONLY_ROUTING_URL argument of the
SECONDARY_ROLE option, above).
The READ_ONLY_ROUTING_LIST values are as follows:
<server_instance> Specifies the address of the instance of SQL Server that is the host for a replica that is a
readable secondary replica when running under the secondary role.
Use a comma-separated list to specify all the server instances that might host a readable secondary replica.
Read-only routing follows the order in which server instances are specified in the list. If you include a
replica's host server instance on the replica's read-only routing list, placing this server instance at the end of
the list is typically a good practice, so that read-intent connections go to a secondary replica, if one is
available.
Beginning with SQL Server 2016 (13.x), you can load-balance read-intent requests across readable
secondary replicas. You specify this by placing the replicas in a nested set of parentheses within the read-
only routing list. For more information and examples, see Configure load-balancing across read-only
replicas.
NONE
Specifies that when this availability replica is the primary replica, read-only routing is not supported. This is
the default behavior.
SESSION_TIMEOUT = integer
Specifies the session-timeout period in seconds. If you do not specify this option, by default, the time
period is 10 seconds. The minimum value is 5 seconds.

IMPORTANT
We recommend that you keep the time-out period at 10 seconds or greater.

For more information about the session-timeout period, see Overview of Always On Availability Groups (SQL
Server).
AVAILABILITY GROUP ON
Specifies two availability groups that constitute a distributed availability group. Each availability group is part of
its own Windows Server Failover Cluster (WSFC). When you create a distributed availability group, the
availability group on the current SQL Server Instance becomes the primary availability group and the remote
availability group becomes the secondary availability group.
You need to join the secondary availability group to the distributed availability group. For more information, see
ALTER AVAILABILITY GROUP (Transact-SQL).
<ag_name> Specifies the name of the availability group that makes up one half of the distributed availability
group.
LISTENER ='TCP://system-address:port'
Specifies the URL path for the listener associated with the availability group.
The LISTENER clause is required.
'TCP://system-address:port'
Specifies a URL for the listener associated with the availability group. The URL parameters are as follows:
system-address
Is a string, such as a system name, a fully qualified domain name, or an IP address, that unambiguously identifies
the listener.
port
Is a port number that is associated with the mirroring endpoint of the availability group. Note that this is not the
port of the listener.
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT |
CONFIGURATION_ONLY }
Specifies whether the primary replica has to wait for the secondary availability group to acknowledge the
hardening (writing) of the log records to disk before the primary replica can commit the transaction on a given
primary database.
SYNCHRONOUS_COMMIT
Specifies that the primary replica waits to commit transactions until they have been hardened on the secondary
availability group. You can specify SYNCHRONOUS_COMMIT for up to two availability groups, including the
primary availability group.
ASYNCHRONOUS_COMMIT
Specifies that the primary replica commits transactions without waiting for this secondary availability group to
harden the log. You can specify ASYNCHRONOUS_COMMIT for up to two availability groups, including the
primary availability group.
The AVAILABILITY_MODE clause is required.
FAILOVER_MODE = { MANUAL }
Specifies the failover mode of the distributed availability group.
MANUAL
Enables planned manual failover or forced manual failover (typically called forced failover) by the database
administrator.
The FAILOVER_MODE clause is required, and the only option is MANUAL. Automatic failover to the secondary
availability group is not supported.
SEEDING_MODE = { AUTOMATIC | MANUAL }
Specifies how the secondary availability group is initially seeded.
AUTOMATIC
Enables direct seeding. This method seeds the secondary availability group over the network. This method does
not require you to back up and restore a copy of the primary database on the replicas of the secondary availability
group.
MANUAL
Specifies manual seeding (default). This method requires you to create a backup of the database on the primary
replica and manually restore that backup on the replica(s) of the secondary availability group.
LISTENER 'dns_name' ( <listener_option> ) Defines a new availability group listener for this availability group.
LISTENER is an optional argument.

IMPORTANT
Before you create your first listener, we strongly recommend that you read Create or Configure an Availability Group
Listener (SQL Server).
After you create a listener for a given availability group, we strongly recommend that you do the following:
Ask your network administrator to reserve the listener's IP address for its exclusive use.
Give the listener's DNS host name to application developers to use in connection strings when requesting client
connections to this availability group.

dns_name
Specifies the DNS host name of the availability group listener. The DNS name of the listener must be unique in
the domain and in NetBIOS.
dns_name is a string value. This name can contain only alphanumeric characters, dashes (-), and underscores (_), in
any order. DNS host names are case insensitive. The maximum length is 63 characters.
We recommend that you specify a meaningful string. For example, for an availability group named AG1, a
meaningful DNS host name would be ag1-listener.

IMPORTANT
NetBIOS recognizes only the first 15 characters in the dns_name. If you have two WSFC clusters that are controlled by the same
Active Directory and you try to create availability group listeners in both clusters using names with more than 15 characters
and an identical 15 character prefix, an error reports that the Virtual Network Name resource could not be brought online.
For information about prefix naming rules for DNS names, see Assigning Domain Names.

<listener_option> LISTENER takes one of the following options:


WITH DHCP [ ON { ('four_part_ipv4_address','four_part_ipv4_mask') } ]
Specifies that the availability group listener uses the Dynamic Host Configuration Protocol (DHCP ). Optionally,
use the ON clause to identify the network on which this listener is created. DHCP is limited to a single subnet that
is used for every server instance that hosts a replica in the availability group.

IMPORTANT
We do not recommend DHCP in a production environment. If there is downtime and the DHCP IP lease expires, extra time
is required to register the new DHCP network IP address that is associated with the listener DNS name, which impacts
client connectivity. However, DHCP is good for setting up your development and testing environment to verify basic
functions of availability groups and for integration with your applications.

For example:
WITH DHCP ON ('10.120.19.0','255.255.254.0')

WITH IP ( { ('four_part_ipv4_address','four_part_ipv4_mask') | ('ipv6_address') } [ , ...n ] ) [ , PORT = listener_port ]


Specifies that, instead of using DHCP, the availability group listener uses one or more static IP addresses. To create
an availability group across multiple subnets, each subnet requires one static IP address in the listener
configuration. For a given subnet, the static IP address can be either an IPv4 address or an IPv6 address. Contact
your network administrator to get a static IP address for each subnet that hosts a replica for the new availability
group.
For example:
WITH IP ( ('10.120.19.155','255.255.254.0') )

four_part_ipv4_address
Specifies an IPv4 four-part address for an availability group listener. For example, 10.120.19.155 .
four_part_ipv4_mask
Specifies an IPv4 four-part mask for an availability group listener. For example, 255.255.254.0 .
ipv6_address
Specifies an IPv6 address for an availability group listener. For example, 2001::4898:23:1002:20f:1fff:feff:b3a3 .
PORT = listener_port
Specifies the port number—listener_port—to be used by an availability group listener that is specified by a WITH
IP clause. PORT is optional.
The default port number, 1433, is supported. However, if you have security concerns, we recommend using a
different port number.
For example: WITH IP ( ('2001::4898:23:1002:20f:1fff:feff:b3a3') ) , PORT = 7777

Prerequisites and Restrictions


For information about the prerequisites for creating an availability group, see Prerequisites, Restrictions, and
Recommendations for Always On Availability Groups (SQL Server).
For information about restrictions on the AVAILABILITY GROUP Transact-SQL statements, see Overview of
Transact-SQL Statements for Always On Availability Groups (SQL Server).

Security
Permissions
Requires membership in the sysadmin fixed server role and either CREATE AVAILABILITY GROUP server
permission, ALTER ANY AVAILABILITY GROUP permission, or CONTROL SERVER permission.

Examples
A. Configuring Backup on Secondary Replicas, Flexible Failover Policy, and Connection Access
The following example creates an availability group named MyAg for two user databases, ThisDatabase and
ThatDatabase . The following table summarizes the values specified for the options that are set for the availability
group as a whole.

GROUP OPTION | SETTING | DESCRIPTION

AUTOMATED_BACKUP_PREFERENCE | SECONDARY | This automated backup preference indicates that backups should occur on a secondary replica except when the primary replica is the only replica online (this is the default behavior). For the AUTOMATED_BACKUP_PREFERENCE setting to have any effect, you need to script backup jobs on the availability databases to take the automated backup preference into account.

FAILURE_CONDITION_LEVEL | 3 | This failure condition level setting specifies that an automatic failover should be initiated on critical SQL Server internal errors, such as orphaned spinlocks, serious write-access violations, or too much dumping.

HEALTH_CHECK_TIMEOUT | 600000 | This health check timeout value specifies that the WSFC cluster waits 600000 milliseconds (10 minutes) for the sp_server_diagnostics system stored procedure to return server-health information about a server instance that is hosting a synchronous-commit replica with automatic failover before the cluster assumes that the host server instance is slow or hung. (The default value is 30000 milliseconds.)
Three availability replicas are to be hosted by the default server instances on computers named COMPUTER01 ,
COMPUTER02 , and COMPUTER03 . The following table summarizes the values specified for the replica options of each
replica.

REPLICA OPTION: ENDPOINT_URL
SETTING ON COMPUTER01: 'TCP://COMPUTER01:5022'
SETTING ON COMPUTER02: 'TCP://COMPUTER02:5022'
SETTING ON COMPUTER03: 'TCP://COMPUTER03:5022'
DESCRIPTION: In this example, the systems are in the same domain, so the endpoint URLs can use the name of the computer system as the system address.

REPLICA OPTION: AVAILABILITY_MODE
SETTING ON COMPUTER01: SYNCHRONOUS_COMMIT
SETTING ON COMPUTER02: SYNCHRONOUS_COMMIT
SETTING ON COMPUTER03: ASYNCHRONOUS_COMMIT
DESCRIPTION: Two of the replicas use synchronous-commit mode. When synchronized, they support failover without data loss. The third replica uses asynchronous-commit availability mode.

REPLICA OPTION: FAILOVER_MODE
SETTING ON COMPUTER01: AUTOMATIC
SETTING ON COMPUTER02: AUTOMATIC
SETTING ON COMPUTER03: MANUAL
DESCRIPTION: The synchronous-commit replicas support automatic failover and planned manual failover. The asynchronous-commit availability mode replica supports only forced manual failover.

REPLICA OPTION: BACKUP_PRIORITY
SETTING ON COMPUTER01: 30
SETTING ON COMPUTER02: 30
SETTING ON COMPUTER03: 90
DESCRIPTION: A higher priority, 90, is assigned to the asynchronous-commit replica than to the synchronous-commit replicas. Backups tend to occur on the server instance that hosts the asynchronous-commit replica.

REPLICA OPTION: SECONDARY_ROLE
SETTING ON COMPUTER01: ( ALLOW_CONNECTIONS = NO, READ_ONLY_ROUTING_URL = 'TCP://COMPUTER01:1433' )
SETTING ON COMPUTER02: ( ALLOW_CONNECTIONS = NO, READ_ONLY_ROUTING_URL = 'TCP://COMPUTER02:1433' )
SETTING ON COMPUTER03: ( ALLOW_CONNECTIONS = READ_ONLY, READ_ONLY_ROUTING_URL = 'TCP://COMPUTER03:1433' )
DESCRIPTION: Only the asynchronous-commit replica serves as a readable secondary replica. The READ_ONLY_ROUTING_URL specifies the computer name and default Database Engine port number (1433). This argument is optional.

REPLICA OPTION: PRIMARY_ROLE
SETTING ON COMPUTER01: ( ALLOW_CONNECTIONS = READ_WRITE, READ_ONLY_ROUTING_LIST = (COMPUTER03) )
SETTING ON COMPUTER02: ( ALLOW_CONNECTIONS = READ_WRITE, READ_ONLY_ROUTING_LIST = (COMPUTER03) )
SETTING ON COMPUTER03: ( ALLOW_CONNECTIONS = READ_WRITE, READ_ONLY_ROUTING_LIST = NONE )
DESCRIPTION: In the primary role, all the replicas reject read-intent connection attempts. Read-intent connection requests are routed to COMPUTER03 if the local replica is running under the secondary role. When that replica runs under the primary role, read-only routing is disabled. This argument is optional.

REPLICA OPTION: SESSION_TIMEOUT
SETTING ON COMPUTER01: 10
SETTING ON COMPUTER02: 10
SETTING ON COMPUTER03: 10
DESCRIPTION: This example specifies the default session timeout value (10). This argument is optional.

Finally, the example specifies the optional LISTENER clause to create an availability group listener for the new
availability group. A unique DNS name, MyAgListenerIvP6, is specified for this listener. The two replicas are on
different subnets, so the listener must use static IP addresses. For each of the two availability replicas, the WITH IP
clause specifies a static IP address, 2001:4898:f0:f00f::cf3c and 2001:4898:e0:f213::4ce2, which use the IPv6
format. This example also uses the optional PORT argument to specify port 60173 as the listener port.
CREATE AVAILABILITY GROUP MyAg
WITH (
AUTOMATED_BACKUP_PREFERENCE = SECONDARY,
FAILURE_CONDITION_LEVEL = 3,
HEALTH_CHECK_TIMEOUT = 600000
)

FOR
DATABASE ThisDatabase, ThatDatabase
REPLICA ON
'COMPUTER01' WITH
(
ENDPOINT_URL = 'TCP://COMPUTER01:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = AUTOMATIC,
BACKUP_PRIORITY = 30,
SECONDARY_ROLE (ALLOW_CONNECTIONS = NO,
READ_ONLY_ROUTING_URL = 'TCP://COMPUTER01:1433' ),
PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
READ_ONLY_ROUTING_LIST = (COMPUTER03) ),
SESSION_TIMEOUT = 10
),

'COMPUTER02' WITH
(
ENDPOINT_URL = 'TCP://COMPUTER02:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = AUTOMATIC,
BACKUP_PRIORITY = 30,
SECONDARY_ROLE (ALLOW_CONNECTIONS = NO,
READ_ONLY_ROUTING_URL = 'TCP://COMPUTER02:1433' ),
PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
READ_ONLY_ROUTING_LIST = (COMPUTER03) ),
SESSION_TIMEOUT = 10
),

'COMPUTER03' WITH
(
ENDPOINT_URL = 'TCP://COMPUTER03:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
BACKUP_PRIORITY = 90,
SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
READ_ONLY_ROUTING_URL = 'TCP://COMPUTER03:1433' ),
PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
READ_ONLY_ROUTING_LIST = NONE ),
SESSION_TIMEOUT = 10
);
GO
ALTER AVAILABILITY GROUP [MyAg]
ADD LISTENER 'MyAgListenerIvP6' ( WITH IP ( ('2001:4898:f0:f00f::cf3c'),('2001:4898:e0:f213::4ce2') ),
PORT = 60173 );
GO

Related Tasks
Create an Availability Group (Transact-SQL )
Use the Availability Group Wizard (SQL Server Management Studio)
Use the New Availability Group Dialog Box (SQL Server Management Studio)
See Also
ALTER AVAILABILITY GROUP (Transact-SQL)
ALTER DATABASE SET HADR (Transact-SQL )
DROP AVAILABILITY GROUP (Transact-SQL)
Troubleshoot Always On Availability Groups Configuration (SQL Server)
Overview of Always On Availability Groups (SQL Server)
Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server)
CREATE BROKER PRIORITY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Defines a priority level and the set of criteria for determining which Service Broker conversations to assign the
priority level. The priority level is assigned to any conversation endpoint that uses the same combination of
contracts and services that are specified in the conversation priority. Priorities range in value from 1 (low) to 10
(high). The default is 5.
Transact-SQL Syntax Conventions

Syntax
CREATE BROKER PRIORITY ConversationPriorityName
FOR CONVERSATION
[ SET ( [ CONTRACT_NAME = {ContractName | ANY } ]
[ [ , ] LOCAL_SERVICE_NAME = {LocalServiceName | ANY } ]
[ [ , ] REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY } ]
[ [ , ] PRIORITY_LEVEL = {PriorityValue | DEFAULT } ]
)
]
[;]

Arguments
ConversationPriorityName
Specifies the name for this conversation priority. The name must be unique in the current database, and must
conform to the rules for Database Engine identifiers.
SET
Specifies the criteria for determining if the conversation priority applies to a conversation. If specified, SET must
contain at least one criterion: CONTRACT_NAME, LOCAL_SERVICE_NAME, REMOTE_SERVICE_NAME, or
PRIORITY_LEVEL. If SET is not specified, the defaults are used for all the criteria.
CONTRACT_NAME = {ContractName | ANY }
Specifies the name of a contract to be used as a criterion for determining if the conversation priority applies to a
conversation. ContractName is a Database Engine identifier, and must specify the name of a contract in the
current database.
ContractName
Specifies that the conversation priority can be applied only to conversations where the BEGIN DIALOG statement
that started the conversation specified ON CONTRACT ContractName.
ANY
Specifies that the conversation priority can be applied to any conversation, regardless of which contract it uses.
The default is ANY.
LOCAL_SERVICE_NAME = {LocalServiceName | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to a
conversation endpoint.
LocalServiceName is a Database Engine identifier. It must specify the name of a service in the current database.
LocalServiceName
Specifies that the conversation priority can be applied to the following:
Any initiator conversation endpoint whose initiator service name matches LocalServiceName.
Any target conversation endpoint whose target service name matches LocalServiceName.
ANY
Specifies that the conversation priority can be applied to any conversation endpoint, regardless of the
name of the local service used by the endpoint.
The default is ANY.
REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to
a conversation endpoint.
RemoteServiceName is a literal of type nvarchar(256). Service Broker uses a byte-by-byte comparison to
match the RemoteServiceName string. The comparison is case-sensitive and does not consider the current
collation. The target service can be in the current instance of the Database Engine, or a remote instance of
the Database Engine.
'RemoteServiceName'
Specifies that the conversation priority can be applied to the following:
Any initiator conversation endpoint whose associated target service name matches RemoteServiceName.
Any target conversation endpoint whose associated initiator service name matches RemoteServiceName.
ANY
Specifies that the conversation priority can be applied to any conversation endpoint, regardless of the name
of the remote service associated with the endpoint.
The default is ANY.
PRIORITY_LEVEL = { PriorityValue | DEFAULT }
Specifies the priority to assign to any conversation endpoint that uses the contracts and services specified in
the conversation priority. PriorityValue must be an integer literal from 1 (lowest priority) to 10 (highest
priority). The default is 5.

Remarks
Service Broker assigns priority levels to conversation endpoints. The priority levels control the priority of the
operations associated with the endpoint. Each conversation has two conversation endpoints:
The initiator conversation endpoint associates one side of the conversation with the initiator service and
initiator queue. The initiator conversation endpoint is created when the BEGIN DIALOG statement is run.
The operations associated with the initiator conversation endpoint include:
Sends from the initiator service.
Receives from the initiator queue.
Getting the next conversation group from the initiator queue.
The target conversation endpoint associates the other side of the conversation with the target service and
queue. The target conversation endpoint is created when the conversation is used to send a message to the
target queue. The operations associated with the target conversation endpoint include:
Receives from the target queue.
Sends from the target service.
Getting the next conversation group from the target queue.
Service Broker assigns conversation priority levels when conversation endpoints are created. The
conversation endpoint retains the priority level until the conversation ends. New priorities or changes to
existing priorities are not applied to existing conversations.
Service Broker assigns a conversation endpoint the priority level from the conversation priority whose
contract and services criteria best match the properties of the endpoint. The following table shows the
match precedence:

OPERATION CONTRACT OPERATION LOCAL SERVICE OPERATION REMOTE SERVICE

ContractName LocalServiceName RemoteServiceName

ContractName LocalServiceName ANY

ContractName ANY RemoteServiceName

ContractName ANY ANY

ANY LocalServiceName RemoteServiceName

ANY LocalServiceName ANY

ANY ANY RemoteServiceName

ANY ANY ANY

Service Broker first looks for a priority whose specified contract, local service, and remote service matches those
that the operation uses. If one is not found, Service Broker looks for a priority with a contract and local service that
matches those that the operation uses, and where the remote service was specified as ANY. This continues for all
the variations that are listed in the precedence table. If no match is found, the operation is assigned the default
priority of 5.
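For example, the following minimal sketch (the priority names are illustrative, and it assumes a contract named
SimpleContract and a service named InitiatorServiceA already exist) creates one specific priority and one
ANY/ANY/ANY fallback. An endpoint that uses both the contract and the local service matches the first, more
specific priority and is assigned level 8, not the fallback level 2:

CREATE BROKER PRIORITY SpecificPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = InitiatorServiceA,
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 8);
CREATE BROKER PRIORITY FallbackPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = ANY,
         LOCAL_SERVICE_NAME = ANY,
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 2);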
Service Broker independently assigns a priority level to each conversation endpoint. To have Service Broker assign
priority levels to both the initiator and target conversation endpoints, you must ensure that both endpoints are
covered by conversation priorities. If the initiator and target conversation endpoints are in separate databases, you
must create conversation priorities in each database. The same priority level is usually specified for both of the
conversation endpoints for a conversation, but you can specify different priority levels.
Priority levels are always applied to operations that receive messages or conversation group identifiers from a
queue. Priority levels are also applied when transmitting messages from one instance of the Database Engine to
another.
Priority levels are not used when transmitting messages:
From a database where the HONOR_BROKER_PRIORITY database option is set to OFF. For more
information, see ALTER DATABASE SET Options (Transact-SQL ).
Between services in the same instance of the Database Engine.
All Service Broker operations in a database are assigned default priorities of 5 if no conversation priorities
have been created in the database.
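To verify which conversation priorities are defined in the current database, you can query the
sys.conversation_priorities catalog view. The following is a minimal sketch that resolves the contract and local
service IDs to names:

SELECT cp.name AS priority_name,
       c.name AS contract_name,
       s.name AS local_service_name,
       cp.remote_service_name,
       cp.priority
FROM sys.conversation_priorities AS cp
LEFT JOIN sys.service_contracts AS c
    ON cp.service_contract_id = c.service_contract_id
LEFT JOIN sys.services AS s
    ON cp.local_service_id = s.service_id;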

Permissions
Permission for creating a conversation priority defaults to members of the db_ddladmin or db_owner fixed
database roles, and to the sysadmin fixed server role. Requires ALTER permission on the database.

Examples
A. Assigning a priority level to both directions of a conversation.
These two conversation priorities ensure that all operations that use SimpleContract between TargetService and
InitiatorServiceA are assigned priority level 3.

CREATE BROKER PRIORITY InitiatorAToTargetPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = InitiatorServiceA,
         REMOTE_SERVICE_NAME = N'TargetService',
         PRIORITY_LEVEL = 3);
CREATE BROKER PRIORITY TargetToInitiatorAPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = TargetService,
         REMOTE_SERVICE_NAME = N'InitiatorServiceA',
         PRIORITY_LEVEL = 3);

B. Setting the priority level for all conversations that use a contract
Assigns a priority level of 7 to all operations that use a contract named SimpleContract . This assumes that there
are no other priorities that specify both SimpleContract and either a local or a remote service.

CREATE BROKER PRIORITY SimpleContractDefaultPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = ANY,
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 7);

C. Setting a base priority level for a database.


Defines conversation priorities for two specific services, and then defines a conversation priority that will match all
other conversation endpoints. This does not replace the default priority, which is always 5, but does minimize the
number of items that are assigned the default.
CREATE BROKER PRIORITY [//Adventure-Works.com/Expenses/ClaimPriority]
    FOR CONVERSATION
    SET (CONTRACT_NAME = ANY,
         LOCAL_SERVICE_NAME = [//Adventure-Works.com/Expenses/ClaimService],
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 9);
CREATE BROKER PRIORITY [//Adventure-Works.com/Expenses/ApprovalPriority]
    FOR CONVERSATION
    SET (CONTRACT_NAME = ANY,
         LOCAL_SERVICE_NAME = [//Adventure-Works.com/Expenses/ApprovalService],
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 6);
CREATE BROKER PRIORITY [//Adventure-Works.com/Expenses/BasePriority]
    FOR CONVERSATION
    SET (CONTRACT_NAME = ANY,
         LOCAL_SERVICE_NAME = ANY,
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 3);

D. Creating three priority levels for a target service by using services


Supports a system that provides three levels of performance: Gold (high), Silver (medium), and Bronze (low).
There is one contract, but each level has a separate initiator service. All initiator services communicate with a
central target service.

CREATE BROKER PRIORITY GoldInitToTargetPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = GoldInitiatorService,
         REMOTE_SERVICE_NAME = N'TargetService',
         PRIORITY_LEVEL = 6);
CREATE BROKER PRIORITY GoldTargetToInitPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = TargetService,
         REMOTE_SERVICE_NAME = N'GoldInitiatorService',
         PRIORITY_LEVEL = 6);
CREATE BROKER PRIORITY SilverInitToTargetPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = SilverInitiatorService,
         REMOTE_SERVICE_NAME = N'TargetService',
         PRIORITY_LEVEL = 4);
CREATE BROKER PRIORITY SilverTargetToInitPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = TargetService,
         REMOTE_SERVICE_NAME = N'SilverInitiatorService',
         PRIORITY_LEVEL = 4);
CREATE BROKER PRIORITY BronzeInitToTargetPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = BronzeInitiatorService,
         REMOTE_SERVICE_NAME = N'TargetService',
         PRIORITY_LEVEL = 2);
CREATE BROKER PRIORITY BronzeTargetToInitPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = TargetService,
         REMOTE_SERVICE_NAME = N'BronzeInitiatorService',
         PRIORITY_LEVEL = 2);

E. Creating three priority levels for multiple services using contracts


Supports a system that provides three levels of performance: Gold (high), Silver (medium), and Bronze (low). Each
level has a separate contract. These priorities apply to any services that are referenced by conversations that use
the contracts.

CREATE BROKER PRIORITY GoldPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = GoldContract,
         LOCAL_SERVICE_NAME = ANY,
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 6);
CREATE BROKER PRIORITY SilverPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SilverContract,
         LOCAL_SERVICE_NAME = ANY,
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 4);
CREATE BROKER PRIORITY BronzePriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = BronzeContract,
         LOCAL_SERVICE_NAME = ANY,
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 2);

See Also
ALTER BROKER PRIORITY (Transact-SQL )
BEGIN DIALOG CONVERSATION (Transact-SQL )
CREATE CONTRACT (Transact-SQL )
CREATE QUEUE (Transact-SQL )
CREATE SERVICE (Transact-SQL )
DROP BROKER PRIORITY (Transact-SQL )
GET CONVERSATION GROUP (Transact-SQL )
RECEIVE (Transact-SQL )
SEND (Transact-SQL )
sys.conversation_priorities (Transact-SQL )
CREATE CERTIFICATE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds a certificate to a database in SQL Server.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details on all T-SQL behavior changes.

This feature is incompatible with database export using Data Tier Application Framework (DACFx). You must
drop all certificates before exporting.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

CREATE CERTIFICATE certificate_name [ AUTHORIZATION user_name ]
    { FROM <existing_keys> | <generate_new_keys> }
    [ ACTIVE FOR BEGIN_DIALOG = { ON | OFF } ]

<existing_keys> ::=
ASSEMBLY assembly_name
| {
[ EXECUTABLE ] FILE = 'path_to_file'
[ WITH PRIVATE KEY ( <private_key_options> ) ]
}
| {
BINARY = asn_encoded_certificate
[ WITH PRIVATE KEY ( <private_key_options> ) ]
}
<generate_new_keys> ::=
[ ENCRYPTION BY PASSWORD = 'password' ]
WITH SUBJECT = 'certificate_subject_name'
[ , <date_options> [ ,...n ] ]

<private_key_options> ::=
{
FILE = 'path_to_private_key'
[ , DECRYPTION BY PASSWORD = 'password' ]
[ , ENCRYPTION BY PASSWORD = 'password' ]
}
|
{
BINARY = private_key_bits
[ , DECRYPTION BY PASSWORD = 'password' ]
[ , ENCRYPTION BY PASSWORD = 'password' ]
}

<date_options> ::=
START_DATE = 'datetime' | EXPIRY_DATE = 'datetime'
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

CREATE CERTIFICATE certificate_name
    { <generate_new_keys> | FROM <existing_keys> }
    [ ; ]

<generate_new_keys> ::=
WITH SUBJECT ='certificate_subject_name'
[ , <date_options> [ ,...n ] ]

<existing_keys> ::=
{
FILE ='path_to_file'
WITH PRIVATE KEY
(
FILE ='path_to_private_key'
, DECRYPTION BY PASSWORD ='password'
)
}

<date_options> ::=
START_DATE ='datetime' | EXPIRY_DATE ='datetime'

Arguments
certificate_name
Is the name for the certificate in the database.
AUTHORIZATION user_name
Is the name of the user that owns this certificate.
ASSEMBLY assembly_name
Specifies a signed assembly that has already been loaded into the database.
[ EXECUTABLE ] FILE ='path_to_file'
Specifies the complete path, including file name, to a DER-encoded file that contains the certificate. If the
EXECUTABLE option is used, the file is a DLL that has been signed by the certificate. path_to_file can be a local
path or a UNC path to a network location. The file is accessed in the security context of the SQL Server service
account. This account must have the required file-system permissions.
WITH PRIVATE KEY
Specifies that the private key of the certificate is loaded into SQL Server. This clause is only valid when the
certificate is being created from a file. To load the private key of an assembly, use ALTER CERTIFICATE.
FILE ='path_to_private_key'
Specifies the complete path, including file name, to the private key. path_to_private_key can be a local path or a
UNC path to a network location. The file is accessed in the security context of the SQL Server service account.
This account must have the necessary file-system permissions.

NOTE
This option is not available in a contained database.

asn_encoded_certificate
ASN encoded certificate bits specified as a binary constant.
BINARY =private_key_bits
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Private key bits specified as binary constant. These bits can be in encrypted form. If encrypted, the user must
provide a decryption password. Password policy checks are not performed on this password. The private key
bits should be in a PVK file format.
DECRYPTION BY PASSWORD ='key_password'
Specifies the password required to decrypt a private key that is retrieved from a file. This clause is optional if the
private key is protected by a null password. Saving a private key to a file without password protection is not
recommended. If a password is required but no password is specified, the statement fails.
ENCRYPTION BY PASSWORD ='password'
Specifies the password used to encrypt the private key. Use this option only if you want to encrypt the certificate
with a password. If this clause is omitted, the private key is encrypted using the database master key. password
must meet the Windows password policy requirements of the computer that is running the instance of SQL
Server. For more information, see Password Policy.
SUBJECT ='certificate_subject_name'
The term subject refers to a field in the metadata of the certificate as defined in the X.509 standard. The subject
should be no more than 64 characters long, and this limit is enforced for SQL Server on Linux. For SQL Server
on Windows, the subject can be up to 128 characters long. Subjects that exceed 128 characters are truncated
when they are stored in the catalog, but the binary large object (BLOB ) that contains the certificate retains the
full subject name.
START_DATE ='datetime'
Is the date on which the certificate becomes valid. If not specified, START_DATE is set equal to the current date.
START_DATE is in UTC time and can be specified in any format that can be converted to a date and time.
EXPIRY_DATE ='datetime'
Is the date on which the certificate expires. If not specified, EXPIRY_DATE is set to a date one year after
START_DATE. EXPIRY_DATE is in UTC time and can be specified in any format that can be converted to a date
and time. SQL Server Service Broker checks the expiration date. However, expiration is not enforced when the
certificate is used for encryption.
ACTIVE FOR BEGIN_DIALOG = { ON | OFF }
Makes the certificate available to the initiator of a Service Broker dialog conversation. The default value is ON.

Remarks
A certificate is a database-level securable that follows the X.509 standard and supports X.509 V1 fields.
CREATE CERTIFICATE can load a certificate from a file or assembly. This statement can also generate a key pair
and create a self-signed certificate.
The Private Key must be <= 2500 bytes in encrypted format. Private keys generated by SQL Server are 1024
bits long through SQL Server 2014 (12.x) and are 2048 bits long beginning with SQL Server 2016 (13.x).
Private keys imported from an external source have a minimum length of 384 bits and a maximum length of
4,096 bits. The length of an imported private key must be an integer multiple of 64 bits. Certificates used for
TDE are limited to a private key size of 3456 bits.
The entire Serial Number of the certificate is stored but only the first 16 bytes appear in the sys.certificates
catalog view.
The entire Issuer field of the certificate is stored, but only the first 884 bytes appear in the sys.certificates catalog view.
The private key must correspond to the public key specified by certificate_name.
When you create a certificate from a container, loading the private key is optional. But when SQL Server
generates a self-signed certificate, the private key is always created. By default, the private key is encrypted
using the database master key. If the database master key does not exist and no password is specified, the
statement fails.
The ENCRYPTION BY PASSWORD option is not required when the private key is encrypted with the database
master key. Use this option only when the private key is encrypted with a password. If no password is specified,
the private key of the certificate will be encrypted using the database master key. If the master key of the
database cannot be opened, omitting this clause causes an error.
You do not have to specify a decryption password when the private key is encrypted with the database master
key.

NOTE
Built-in functions for encryption and signing do not check the expiration dates of certificates. Users of these functions
must decide when to check certificate expiration.
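For example, a query along the following lines reports certificates that have expired or will expire soon; the
30-day window is an arbitrary choice for illustration:

SELECT name, subject, start_date, expiry_date
FROM sys.certificates
WHERE expiry_date < DATEADD(day, 30, SYSUTCDATETIME());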

A binary description of a certificate can be created by using the CERTENCODED (Transact-SQL ) and
CERTPRIVATEKEY (Transact-SQL ) functions. For an example that uses CERTPRIVATEKEY and
CERTENCODED to copy a certificate to another database, see example B in the topic CERTENCODED
(Transact-SQL ).
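The following is a minimal sketch of that approach, assuming a certificate named Shipping04 whose private key is
protected by the database master key; the copy name and transport password are illustrative. Because CREATE
CERTIFICATE ... FROM BINARY requires literal values, the statement is assembled as dynamic SQL:

DECLARE @cert_id int = CERT_ID('Shipping04');
-- Export the public portion of the certificate.
DECLARE @public_key varbinary(max) = CERTENCODED(@cert_id);
-- Export the private key, re-encrypted under an illustrative transport password.
DECLARE @private_key varbinary(max) = CERTPRIVATEKEY(@cert_id, N'Tr@nsportPwd1234');
DECLARE @sql nvarchar(max) =
    N'CREATE CERTIFICATE Shipping04_copy
      FROM BINARY = ' + CONVERT(nvarchar(max), @public_key, 1) + N'
      WITH PRIVATE KEY (
          BINARY = ' + CONVERT(nvarchar(max), @private_key, 1) + N',
          DECRYPTION BY PASSWORD = N''Tr@nsportPwd1234'');';
-- Execute the generated statement in the destination database.
EXEC (@sql);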

Permissions
Requires CREATE CERTIFICATE permission on the database. Only Windows logins, SQL Server logins, and
application roles can own certificates. Groups and roles cannot own certificates.

Examples
A. Creating a self-signed certificate
The following example creates a certificate called Shipping04 . The private key of this certificate is protected
using a password.

CREATE CERTIFICATE Shipping04
    ENCRYPTION BY PASSWORD = 'pGFD4bb925DGvbd2439587y'
    WITH SUBJECT = 'Sammamish Shipping Records',
    EXPIRY_DATE = '20201031';
GO

B. Creating a certificate from a file


The following example creates a certificate in the database, loading the key pair from files.

CREATE CERTIFICATE Shipping11
    FROM FILE = 'c:\Shipping\Certs\Shipping11.cer'
    WITH PRIVATE KEY (FILE = 'c:\Shipping\Certs\Shipping11.pvk',
    DECRYPTION BY PASSWORD = 'sldkflk34et6gs%53#v00');
GO

C. Creating a certificate from a signed executable file

CREATE CERTIFICATE Shipping19
    FROM EXECUTABLE FILE = 'c:\Shipping\Certs\Shipping19.dll';
GO

Alternatively, you can create an assembly from the dll file, and then create a certificate from the assembly.
CREATE ASSEMBLY Shipping19
FROM 'c:\Shipping\Certs\Shipping19.dll'
WITH PERMISSION_SET = SAFE;
GO
CREATE CERTIFICATE Shipping19 FROM ASSEMBLY Shipping19;
GO

D. Creating a self-signed certificate


The following example creates a certificate called Shipping04 without specifying an encryption password. This
example can be used with Azure SQL Data Warehouse and Parallel Data Warehouse.

CREATE CERTIFICATE Shipping04
    WITH SUBJECT = 'Sammamish Shipping Records';
GO

See Also
ALTER CERTIFICATE (Transact-SQL )
DROP CERTIFICATE (Transact-SQL )
BACKUP CERTIFICATE (Transact-SQL )
Encryption Hierarchy
EVENTDATA (Transact-SQL )
CERTENCODED (Transact-SQL )
CERTPRIVATEKEY (Transact-SQL )
CREATE COLUMNSTORE INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Convert a rowstore table to a clustered columnstore index or create a nonclustered columnstore index. Use a
columnstore index to efficiently run real-time operational analytics on an OLTP workload or to improve data
compression and query performance for data warehousing workloads.

NOTE
Starting with SQL Server 2016 (13.x), you can create the table as a clustered columnstore index. It is no longer necessary to
first create a rowstore table and then convert it to a clustered columnstore index.

TIP
For information on index design guidelines, refer to the SQL Server Index Design Guide.

Skip to examples:
Examples for converting a rowstore table to columnstore
Examples for nonclustered columnstore indexes
Go to scenarios:
Columnstore indexes for real-time operational analytics
Columnstore indexes for data warehousing
Learn more:
Columnstore indexes guide
Columnstore indexes feature summary
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

-- Create a clustered columnstore index on a disk-based table.
CREATE CLUSTERED COLUMNSTORE INDEX index_name
ON [database_name. [schema_name ] . | schema_name . ] table_name
[ WITH ( < with_option> [ ,...n ] ) ]
[ ON <on_option> ]
[ ; ]

--Create a non-clustered columnstore index on a disk-based table.
CREATE [NONCLUSTERED] COLUMNSTORE INDEX index_name
ON [database_name. [schema_name ] . | schema_name . ] table_name
( column [ ,...n ] )
[ WHERE <filter_expression> [ AND <filter_expression> ] ]
[ WITH ( < with_option> [ ,...n ] ) ]
[ ON <on_option> ]
[ ; ]

<with_option> ::=
DROP_EXISTING = { ON | OFF } -- default is OFF
| MAXDOP = max_degree_of_parallelism
| ONLINE = { ON | OFF }
| COMPRESSION_DELAY = { 0 | delay [ Minutes ] }
| DATA_COMPRESSION = { COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( { partition_number_expression | range } [ ,...n ] ) ]

<on_option>::=
partition_scheme_name ( column_name )
| filegroup_name
| "default"

<filter_expression> ::=
column_name IN ( constant [ ,...n ]
| column_name { IS | IS NOT | = | <> | != | > | >= | !> | < | <= | !< } constant

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

CREATE CLUSTERED COLUMNSTORE INDEX index_name
ON [ database_name . [ schema_name ] . | schema_name . ] table_name
[ WITH ( DROP_EXISTING = { ON | OFF } ) ] --default is OFF
[;]

Arguments
CREATE CLUSTERED COLUMNSTORE INDEX
Create a clustered columnstore index in which all of the data is compressed and stored by column. The index
includes all of the columns in the table, and stores the entire table. If the existing table is a heap or clustered index,
the table is converted to a clustered columnstore index. If the table is already stored as a clustered columnstore
index, the existing index is dropped and rebuilt.
index_name
Specifies the name for the new index.
If the table already has a clustered columnstore index, you can specify the same name as the existing index, or you
can use the DROP_EXISTING option to specify a new name.
ON [database_name. [schema_name ] . | schema_name . ] table_name
Specifies the one-, two-, or three-part name of the table to be stored as a clustered columnstore index. If the table
is a heap or clustered index, the table is converted from rowstore to columnstore. If the table is already a
columnstore, this statement rebuilds the clustered columnstore index.
WITH
DROP_EXISTING = [OFF] | ON
DROP_EXISTING = ON specifies to drop the existing clustered columnstore index and create a new columnstore
index.
The default, DROP_EXISTING = OFF, expects a new index name. An error occurs if the specified index name
already exists.
MAXDOP = max_degree_of_parallelism
Overrides the existing maximum degree of parallelism server configuration for the duration of the index operation.
Use MAXDOP to limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism values can be:
1 - Suppress parallel plan generation.
>1 - Restrict the maximum number of processors used in a parallel index operation to the specified number or
fewer based on the current system workload. For example, when MAXDOP = 4, the number of processors used
is 4 or less.
0 (default) - Use the actual number of processors or fewer based on the current system workload.
For more information, see Configure the max degree of parallelism Server Configuration Option, and
Configure Parallel Index Operations.
COMPRESSION_DELAY = 0 | delay [ Minutes ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
For a disk-based table, delay specifies the minimum number of minutes a delta rowgroup in the CLOSED state
must remain closed before SQL Server can compress it into a compressed rowgroup. Because disk-based tables
don't track insert and update times on individual rows, SQL Server applies the delay to delta rowgroups in the
CLOSED state.
The default is 0 minutes.
For recommendations on when to use COMPRESSION_DELAY, see Get started with Columnstore for real time
operational analytics.
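For example, the following sketch (the table name is illustrative, and the table is assumed to be stored as a
rowstore heap) keeps closed delta rowgroups uncompressed for 10 minutes so that recently modified rows stay in
the delta store longer:

CREATE CLUSTERED COLUMNSTORE INDEX cci_OrderHotData
ON dbo.OrderHotData
WITH ( COMPRESSION_DELAY = 10 Minutes );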
DATA_COMPRESSION = COLUMNSTORE | COLUMNSTORE_ARCHIVE
Applies to: SQL Server 2016 (13.x) through SQL Server 2017. Specifies the data compression option for the
specified table, partition number, or range of partitions. The options are as follows:
COLUMNSTORE
COLUMNSTORE is the default and specifies to compress with the most performant columnstore compression.
This is the typical choice.
COLUMNSTORE_ARCHIVE
COLUMNSTORE_ARCHIVE further compresses the table or partition to a smaller size. Use this option for
situations such as archival that require a smaller storage size and can afford more time for storage and retrieval.
For more information about compression, see Data Compression.
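For example, the following sketch (the table and partition numbers are illustrative, and the table is assumed to be
partitioned already) applies archival compression to the older partitions while keeping the current partition on
standard columnstore compression:

CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
ON dbo.FactSales
WITH ( DATA_COMPRESSION = COLUMNSTORE_ARCHIVE ON PARTITIONS (1 TO 3),
       DATA_COMPRESSION = COLUMNSTORE ON PARTITIONS (4) );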
ON
With the ON options, you can specify options for data storage, such as a partition scheme, a specific filegroup, or
the default filegroup. If the ON option is not specified, the index uses the partition or filegroup settings of the
existing table.
partition_scheme_name ( column_name )
Specifies the partition scheme for the table. The partition scheme must already exist in the database. To create the
partition scheme, see CREATE PARTITION SCHEME.
column_name specifies the column against which a partitioned index is partitioned. This column must match the
data type, length, and precision of the argument of the partition function that partition_scheme_name is using.
filegroup_name
Specifies the filegroup for storing the clustered columnstore index. If no location is specified and the table is not
partitioned, the index uses the same filegroup as the underlying table or view. The filegroup must already exist.
"default"
To create the index on the default filegroup, use "default" or [ default ].
If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current session.
QUOTED_IDENTIFIER is ON by default. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL ).
CREATE [NONCLUSTERED ] COLUMNSTORE INDEX
Create a nonclustered columnstore index on a rowstore table stored as a heap or clustered index. The
index can have a filtered condition and does not need to include all of the columns of the underlying table. The
columnstore index requires enough space to store a copy of the data. It is updateable and is updated as the
underlying table is changed. The nonclustered columnstore index on a clustered index enables real-time analytics.
index_name
Specifies the name of the index. index_name must be unique within the table, but does not have to be unique
within the database. Index names must follow the rules of identifiers.
( column [ ,...n ] )
Specifies the columns to store. A nonclustered columnstore index is limited to 1024 columns.
Each column must be of a supported data type for columnstore indexes. See Limitations and Restrictions for a list
of the supported data types.
ON [database_name. [schema_name ] . | schema_name . ] table_name
Specifies the one-, two-, or three-part name of the table that contains the index.
WITH DROP_EXISTING = [OFF ] | ON
DROP_EXISTING = ON The existing index is dropped and rebuilt. The index name specified must be the same as
a currently existing index; however, the index definition can be modified. For example, you can specify different
columns, or index options.
DROP_EXISTING = OFF An error is displayed if the specified index name already exists. The index type cannot be
changed by using DROP_EXISTING. In backward compatible syntax, WITH DROP_EXISTING is equivalent to
WITH DROP_EXISTING = ON.
MAXDOP = max_degree_of_parallelism
Overrides the Configure the max degree of parallelism Server Configuration Option configuration option for the
duration of the index operation. Use MAXDOP to limit the number of processors used in a parallel plan execution.
The maximum is 64 processors.
max_degree_of_parallelism values can be:
1 - Suppress parallel plan generation.
>1 - Restrict the maximum number of processors used in a parallel index operation to the specified number or
fewer based on the current system workload. For example, when MAXDOP = 4, the number of processors used
is 4 or less.
0 (default) - Use the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.
NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported by
the editions of SQL Server, see Editions and Supported Features for SQL Server 2016.

ONLINE = [ON | OFF]
Applies to: SQL Server 2017 (14.x), in nonclustered columnstore indexes only. ON specifies that the nonclustered
columnstore index remains online and available while the new copy of the index is being built.
OFF specifies that the index is not available for use while the new copy is being built. Because this is a nonclustered
index, the base table remains available; only the nonclustered columnstore index is not used to satisfy queries
until the new index is complete.
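For example, the following sketch (index, table, and column names are illustrative) rebuilds a nonclustered
columnstore index while leaving it available to queries:

CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Orders
ON dbo.Orders (OrderDateKey, DueDateKey)
WITH ( DROP_EXISTING = ON, ONLINE = ON );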
COMPRESSION_DELAY = 0 | <delay>[Minutes]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Specifies a lower bound on how long a row should stay in the delta rowgroup before it is eligible for migration to
a compressed rowgroup. For example, you can specify that if a row is unchanged for 120 minutes, it becomes
eligible for compression into columnar storage format. For a columnstore index on a disk-based table, SQL Server
doesn't track the time when a row was inserted or updated; it uses the delta rowgroup closed time as a proxy for
the row instead. The default duration is 0 minutes. A row is migrated to columnar storage once 1 million rows
have been accumulated in the delta rowgroup and it has been marked closed.
DATA_COMPRESSION
Specifies the data compression option for the specified table, partition number, or range of partitions. The options
are as follows:
COLUMNSTORE
Applies to: SQL Server 2016 (13.x) through SQL Server 2017. Applies only to columnstore indexes, including both
nonclustered columnstore and clustered columnstore indexes. COLUMNSTORE is the default and specifies to
compress with the most performant columnstore compression. This is the typical choice.
COLUMNSTORE_ARCHIVE
Applies to: SQL Server 2016 (13.x) through SQL Server 2017. Applies only to columnstore indexes, including both
nonclustered columnstore and clustered columnstore indexes. COLUMNSTORE_ARCHIVE further compresses
the table or partition to a smaller size. This can be used for archival, or for other situations that require a smaller
storage size and can afford more time for storage and retrieval.
For more information about compression, see Data Compression.
WHERE <filter_expression> [ AND <filter_expression> ] Applies to: SQL Server 2016 (13.x) through SQL Server
2017.
Called a filter predicate, this specifies which rows to include in the index. SQL Server creates filtered statistics on
the data rows in the filtered index.
The filter predicate uses simple comparison logic. Comparisons using NULL literals are not allowed with the
comparison operators. Use the IS NULL and IS NOT NULL operators instead.
Here are some examples of filter predicates for the Production.BillOfMaterials table:
WHERE StartDate > '20000101' AND EndDate <= '20000630'
WHERE ComponentID IN (533, 324, 753)
WHERE StartDate IN ('20000404', '20000905') AND EndDate IS NOT NULL

For guidance on filtered indexes, see Create Filtered Indexes.


ON
These options specify the filegroups on which the index is created.
partition_scheme_name ( column_name )
Specifies the partition scheme that defines the filegroups onto which the partitions of a partitioned index is
mapped. The partition scheme must exist within the database by executing CREATE PARTITION SCHEME.
column_name specifies the column against which a partitioned index is partitioned. This column must match the
data type, length, and precision of the argument of the partition function that partition_scheme_name is using.
column_name is not restricted to the columns in the index definition. When partitioning a columnstore index,
Database Engine adds the partitioning column as a column of the index, if it is not already specified.
If partition_scheme_name or filegroup is not specified and the table is partitioned, the index is placed in the same
partition scheme, using the same partitioning column, as the underlying table.
A columnstore index on a partitioned table must be partition aligned.
For more information about partitioning indexes, see Partitioned Tables and Indexes.
filegroup_name
Specifies a filegroup name on which to create the index. If filegroup_name is not specified and the table is not
partitioned, the index uses the same filegroup as the underlying table. The filegroup must already exist.
"default"
Creates the specified index on the default filegroup.
The term default, in this context, is not a keyword. It is an identifier for the default filegroup and must be delimited,
as in ON "default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the
current session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL ).

Permissions
Requires ALTER permission on the table.

General Remarks
A columnstore index can be created on a temporary table. When the table is dropped or the session ends, the index
is also dropped.

Filtered Indexes
A filtered index is an optimized nonclustered index, suited for queries that select a small percentage of rows from a
table. It uses a filter predicate to index a portion of the data in the table. A well-designed filtered index can improve
query performance, reduce storage costs, and reduce maintenance costs.
Required SET Options for Filtered Indexes
The SET options in the Required Value column are required whenever any of the following conditions occur:
Create a filtered index.
INSERT, UPDATE, DELETE, or MERGE operation modifies the data in a filtered index.
The filtered index is used by the query optimizer to produce the query plan.

SET OPTIONS | REQUIRED VALUE | DEFAULT SERVER VALUE | DEFAULT OLE DB AND ODBC VALUE | DEFAULT DB-LIBRARY VALUE
ANSI_NULLS | ON | ON | ON | OFF
ANSI_PADDING | ON | ON | ON | OFF
ANSI_WARNINGS* | ON | ON | ON | OFF
ARITHABORT | ON | ON | OFF | OFF
CONCAT_NULL_YIELDS_NULL | ON | ON | ON | OFF
NUMERIC_ROUNDABORT | OFF | OFF | OFF | OFF
QUOTED_IDENTIFIER | ON | ON | ON | OFF

*Setting ANSI_WARNINGS to ON implicitly sets ARITHABORT to ON when the database compatibility
level is set to 90 or higher. If the database compatibility level is set to 80 or earlier, the ARITHABORT option
must explicitly be set to ON.
If the SET options are incorrect, the following conditions can occur:
The filtered index is not created.
The Database Engine generates an error and rolls back INSERT, UPDATE, DELETE, or MERGE statements
that change data in the index.
Query optimizer does not consider the index in the execution plan for any Transact-SQL statements.
For more information about Filtered Indexes, see Create Filtered Indexes.
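As a convenience, the required values in the table above can be applied in a single batch before you create or
modify a filtered index; this is a sketch, and most connections that use current drivers already have these values by
default:

SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT,
    CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;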

Limitations and Restrictions


Each column in a columnstore index must be of one of the following common business data types:
datetimeoffset [ ( n ) ]
datetime2 [ ( n ) ]
datetime
smalldatetime
date
time [ ( n ) ]
float [ ( n ) ]
real [ ( n ) ]
decimal [ ( precision [ , scale ] ) ]
numeric [ ( precision [ , scale ] ) ]
money
smallmoney
bigint
int
smallint
tinyint
bit
nvarchar [ ( n ) ]
nvarchar(max) (Applies to SQL Server 2017 (14.x) and Premium tier, Standard tier (S3 and above), and all
VCore offerings tiers, in clustered columnstore indexes only)
nchar [ ( n ) ]
varchar [ ( n ) ]
varchar(max) (Applies to SQL Server 2017 (14.x) and Premium tier, Standard tier (S3 and above), and all VCore
offerings tiers, in clustered columnstore indexes only)
char [ ( n ) ]
varbinary [ ( n ) ]
varbinary (max) (Applies to SQL Server 2017 (14.x) and Azure SQL Database at Premium tier, Standard tier
(S3 and above), and all VCore offerings tiers, in clustered columnstore indexes only)
binary [ ( n ) ]
uniqueidentifier (Applies to SQL Server 2014 (12.x) and later)
If the underlying table has a column of a data type that is not supported for columnstore indexes, you must omit
that column from the nonclustered columnstore index.
Columns that use any of the following data types cannot be included in a columnstore index:
ntext, text, and image
nvarchar(max), varchar(max), and varbinary(max) (Applies to SQL Server 2016 (13.x) and prior versions, and
nonclustered columnstore indexes)
rowversion (and timestamp)
sql_variant
CLR types (hierarchyid and spatial types)
xml
uniqueidentifier (Applies to SQL Server 2012 (11.x))
Nonclustered columnstore indexes:
Cannot have more than 1024 columns.
Cannot be created as a constraint-based index. It is possible to have unique constraints, primary key constraints,
and foreign key constraints on a table with a columnstore index. Constraints are always enforced with a
rowstore index. Constraints cannot be enforced with a columnstore (clustered or nonclustered) index.
Cannot be created on a view or indexed view.
Cannot include a sparse column.
Cannot be changed by using the ALTER INDEX statement. To change the nonclustered index, you must drop
and re-create the columnstore index instead. You can use ALTER INDEX to disable and rebuild a columnstore
index.
Cannot be created by using the INCLUDE keyword.
Cannot include the ASC or DESC keywords for sorting the index. Columnstore indexes are ordered according
to the compression algorithms. Sorting would eliminate many of the performance benefits.
Cannot include large object (LOB) columns of type nvarchar(max), varchar(max), and varbinary(max) in
nonclustered columnstore indexes. Only clustered columnstore indexes support LOB types, beginning in SQL
Server 2017 (14.x) and Azure SQL Database configured at Premium tier, Standard tier (S3 and above), and all
vCore offering tiers. Note that prior versions do not support LOB types in clustered or nonclustered
columnstore indexes.
Columnstore indexes cannot be combined with the following features:
Computed columns. Starting with SQL Server 2017, a clustered columnstore index can contain a non-persisted
computed column. However, in SQL Server 2017, clustered columnstore indexes cannot contain persisted
computed columns, and you cannot create nonclustered indexes on computed columns.
Page and row compression, and vardecimal storage format (A columnstore index is already compressed in a
different format.)
Replication
Filestream
You cannot use cursors or triggers on a table with a clustered columnstore index. This restriction does not apply to
nonclustered columnstore indexes; you can use cursors and triggers on a table with a nonclustered columnstore
index.
SQL Server 2014 specific limitations
These limitations apply only to SQL Server 2014. In this release, we introduced updateable clustered columnstore
indexes. Nonclustered columnstore indexes were still read-only.
Change tracking. You cannot use change tracking with nonclustered columnstore indexes (NCCI) because they
are read-only. It does work for clustered columnstore indexes (CCI).
Change data capture. You cannot use change data capture for nonclustered columnstore index (NCCI) because
they are read-only. It does work for clustered columnstore indexes (CCI).
Readable secondary. You cannot access a clustered columnstore index (CCI) from a readable secondary of an
Always On availability group. You can access a nonclustered columnstore index (NCCI) from a readable
secondary.
Multiple Active Result Sets (MARS ). SQL Server 2014 uses MARS for read-only connections to tables with
a columnstore index. However, SQL Server 2014 does not support MARS for concurrent data manipulation
language (DML ) operations on a table with a columnstore index. When this occurs, SQL Server terminates
the connections and aborts the transactions.
For information about the performance benefits and limitations of columnstore indexes, see Columnstore
Indexes Overview.

Metadata
All of the columns in a columnstore index are stored in the metadata as included columns. The columnstore index
does not have key columns. These system views provide information about columnstore indexes.
sys.indexes (Transact-SQL )
sys.index_columns (Transact-SQL )
sys.partitions (Transact-SQL )
sys.column_store_segments (Transact-SQL )
sys.column_store_dictionaries (Transact-SQL )
sys.column_store_row_groups (Transact-SQL )
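For example, the following sketch (the table name is illustrative) uses sys.column_store_row_groups to check
rowgroup state and fragmentation for one table's columnstore index:

SELECT partition_number, row_group_id, state_description,
       total_rows, deleted_rows, size_in_bytes
FROM sys.column_store_row_groups
WHERE object_id = OBJECT_ID('dbo.MyFactTable')
ORDER BY partition_number, row_group_id;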

Examples for converting a rowstore table to columnstore


A. Convert a heap to a clustered columnstore index
This example creates a table as a heap and then converts it to a clustered columnstore index named cci_Simple.
This changes the storage for the entire table from rowstore to columnstore.
CREATE TABLE SimpleTable(
ProductKey [int] NOT NULL,
OrderDateKey [int] NOT NULL,
DueDateKey [int] NOT NULL,
ShipDateKey [int] NOT NULL);
GO
CREATE CLUSTERED COLUMNSTORE INDEX cci_Simple ON SimpleTable;
GO

B. Convert a clustered index to a clustered columnstore index with the same name.
This example creates a table with clustered index, and then demonstrates the syntax of converting the clustered
index to a clustered columnstore index. This changes the storage for the entire table from rowstore to columnstore.

CREATE TABLE SimpleTable (
ProductKey [int] NOT NULL,
OrderDateKey [int] NOT NULL,
DueDateKey [int] NOT NULL,
ShipDateKey [int] NOT NULL);
GO
CREATE CLUSTERED INDEX cl_simple ON SimpleTable (ProductKey);
GO
CREATE CLUSTERED COLUMNSTORE INDEX cl_simple ON SimpleTable
WITH (DROP_EXISTING = ON);
GO

C. Handle nonclustered indexes when converting a rowstore table to a columnstore index.


This example shows how to handle nonclustered indexes when converting a rowstore table to a columnstore index.
Actually, beginning with SQL Server 2016 (13.x) no special action is required; SQL Server automatically defines
and rebuilds the nonclustered indexes on the new clustered columnstore index.
If you want to drop the nonclustered indexes, use the DROP INDEX statement prior to creating the columnstore
index. The DROP EXISTING option only drops the clustered index that is being converted. It does not drop the
nonclustered indexes.
In SQL Server 2012 (11.x) and SQL Server 2014 (12.x), you could not create a nonclustered index on a
columnstore index. This example shows how in previous releases you need to drop the nonclustered indexes
before creating the columnstore index.
--Create the table for use with this example.
CREATE TABLE SimpleTable (
ProductKey [int] NOT NULL,
OrderDateKey [int] NOT NULL,
DueDateKey [int] NOT NULL,
ShipDateKey [int] NOT NULL);
GO

--Create two nonclustered indexes for use with this example.
CREATE INDEX nc1_simple ON SimpleTable (OrderDateKey);
CREATE INDEX nc2_simple ON SimpleTable (DueDateKey);
GO

--SQL Server 2012 and SQL Server 2014: you need to drop the nonclustered indexes
--in order to create the columnstore index.

DROP INDEX SimpleTable.nc1_simple;
DROP INDEX SimpleTable.nc2_simple;

--Convert the rowstore table to a columnstore index.
CREATE CLUSTERED COLUMNSTORE INDEX cci_simple ON SimpleTable;
GO

D. Convert a large fact table from rowstore to columnstore


This example explains how to convert a large fact table from a rowstore table to a columnstore table.
To convert a rowstore table to a columnstore table.
1. First, create a small table to use in this example.

--Create a rowstore table with a clustered index and a non-clustered index.
CREATE TABLE MyFactTable (
    ProductKey [int] NOT NULL,
    OrderDateKey [int] NOT NULL,
    DueDateKey [int] NOT NULL,
    ShipDateKey [int] NOT NULL )
WITH (
    CLUSTERED INDEX ( ProductKey )
);

--Add a non-clustered index.
CREATE INDEX my_index ON MyFactTable ( ProductKey, OrderDateKey );

2. Drop all non-clustered indexes from the rowstore table.

--Drop all non-clustered indexes.
DROP INDEX my_index ON MyFactTable;

3. Drop the clustered index.


Do this only if you want to specify a new name for the index when it is converted to a clustered
columnstore index. If you do not drop the clustered index, the new clustered columnstore index has
the same name.
NOTE
The name of the index might be easier to remember if you use your own name. All rowstore clustered indexes
use the default name which is 'ClusteredIndex_<GUID>'.

--Process for dropping a clustered index.
--First, look up the name of the clustered rowstore index.
--Clustered rowstore indexes always use the DEFAULT name 'ClusteredIndex_<GUID>'.
SELECT i.name
FROM sys.indexes i
JOIN sys.tables t
    ON i.object_id = t.object_id
WHERE i.type_desc = 'CLUSTERED'
    AND t.name = 'MyFactTable';

--Drop the clustered rowstore index.
DROP INDEX ClusteredIndex_d473567f7ea04d7aafcac5364c241e09 ON MyFactTable;

4. Convert the rowstore table to a columnstore table with a clustered columnstore index.

--Option 1: Convert to columnstore and name the new clustered columnstore index MyCCI.
CREATE CLUSTERED COLUMNSTORE INDEX MyCCI ON MyFactTable;

--Option 2: Convert to columnstore and use the rowstore clustered


--index name for the columnstore clustered index name.
--First, look up the name of the clustered rowstore index.
SELECT i.name
FROM sys.indexes i
JOIN sys.tables t
ON i.object_id = t.object_id
WHERE i.type_desc = 'CLUSTERED'
AND t.name = 'MyFactTable';

--Second, create the clustered columnstore index and


--Replace ClusteredIndex_d473567f7ea04d7aafcac5364c241e09
--with the name of your clustered index.
CREATE CLUSTERED COLUMNSTORE INDEX
ClusteredIndex_d473567f7ea04d7aafcac5364c241e09
ON MyFactTable
WITH ( DROP_EXISTING = ON );

E. Convert a columnstore table to a rowstore table with a clustered index


To convert a columnstore table to a rowstore table with a clustered index, use the CREATE INDEX statement with
the DROP_EXISTING option.

CREATE CLUSTERED INDEX ci_MyTable
ON MyFactTable (ProductKey)
WITH ( DROP_EXISTING = ON );

F. Convert a columnstore table to a rowstore heap


To convert a columnstore table to a rowstore heap, simply drop the clustered columnstore index.

DROP INDEX MyCCI


ON MyFactTable;

G. Defragment by rebuilding the entire clustered columnstore index


Applies to: SQL Server 2014 (12.x)
There are two ways to rebuild the full clustered columnstore index: use CREATE CLUSTERED COLUMNSTORE
INDEX with DROP_EXISTING = ON, or use ALTER INDEX (Transact-SQL) with the REBUILD option. Both methods
achieve the same results.

NOTE
Beginning with SQL Server 2016, use ALTER INDEX REORGANIZE instead of rebuilding with the methods described in this
example.
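
For reference, a minimal sketch of that newer approach, using the index and table names assumed later in this example:

ALTER INDEX my_CCI ON MyFactTable REORGANIZE;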

--Determine the clustered columnstore index name of MyFactTable.
SELECT i.object_id, i.name, t.object_id, t.name
FROM sys.indexes i
JOIN sys.tables t
ON i.object_id = t.object_id
WHERE i.type_desc = 'CLUSTERED COLUMNSTORE'
AND t.name = 'MyFactTable';

--Rebuild the entire index by using CREATE CLUSTERED COLUMNSTORE INDEX.


CREATE CLUSTERED COLUMNSTORE INDEX my_CCI
ON MyFactTable
WITH ( DROP_EXISTING = ON );

--Rebuild the entire index by using ALTER INDEX and the REBUILD option.
ALTER INDEX my_CCI
ON MyFactTable
REBUILD PARTITION = ALL;

Examples for nonclustered columnstore indexes


A. Create a columnstore index as a secondary index on a rowstore table
This example creates a nonclustered columnstore index on a rowstore table. Only one columnstore index can be
created in this situation. The columnstore index requires extra storage since it contains a copy of the data in the
rowstore table. This example creates a simple table and a clustered index, and then demonstrates the syntax of
creating a nonclustered columnstore index.

CREATE TABLE SimpleTable


(ProductKey [int] NOT NULL,
OrderDateKey [int] NOT NULL,
DueDateKey [int] NOT NULL,
ShipDateKey [int] NOT NULL);
GO
CREATE CLUSTERED INDEX cl_simple ON SimpleTable (ProductKey);
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csindx_simple
ON SimpleTable
(OrderDateKey, DueDateKey, ShipDateKey);
GO

B. Create a simple nonclustered columnstore index using all options


The following example demonstrates the syntax of creating a nonclustered columnstore index by using all options.

CREATE NONCLUSTERED COLUMNSTORE INDEX csindx_simple


ON SimpleTable
(OrderDateKey, DueDateKey, ShipDateKey)
WITH (DROP_EXISTING = ON,
MAXDOP = 2)
ON "default"
GO
For a more complex example using partitioned tables, see Columnstore Indexes Overview.
C. Create a nonclustered columnstore index with a filtered predicate
The following example creates a filtered nonclustered columnstore index on the Production.BillOfMaterials table in
the AdventureWorks2012 database. The filter predicate can include columns that are not key columns in the
filtered index. The predicate in this example selects only the rows where EndDate is non-NULL.

IF EXISTS (SELECT name FROM sys.indexes


WHERE name = N'FIBillOfMaterialsWithEndDate'
AND object_id = OBJECT_ID(N'Production.BillOfMaterials'))
DROP INDEX FIBillOfMaterialsWithEndDate
ON Production.BillOfMaterials;
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX "FIBillOfMaterialsWithEndDate"
ON Production.BillOfMaterials (ComponentID, StartDate)
WHERE EndDate IS NOT NULL;

D. Change the data in a nonclustered columnstore index


Applies to: SQL Server 2012 (11.x) through SQL Server 2014 (12.x).
Once you create a nonclustered columnstore index on a table, you cannot directly modify the data in that table. A
query with INSERT, UPDATE, DELETE, or MERGE fails and returns an error message. To add or modify the data in
the table, you can do one of the following:
Disable or drop the columnstore index. You can then update the data in the table. If you disable the
columnstore index, you can rebuild the columnstore index when you finish updating the data. For example,

ALTER INDEX mycolumnstoreindex ON mytable DISABLE;
-- update mytable --
ALTER INDEX mycolumnstoreindex ON mytable REBUILD;

Load data into a staging table that does not have a columnstore index. Build a columnstore index on the
staging table. Switch the staging table into an empty partition of the main table.
Switch a partition from the table with the columnstore index into an empty staging table. If there is a
columnstore index on the staging table, disable the columnstore index. Perform any updates. Build (or
rebuild) the columnstore index. Switch the staging table back into the (now empty) partition of the main
table.
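
Below is a minimal sketch of the staging-table approach. All object names, partition boundaries, and values here are illustrative assumptions, not objects defined elsewhere in this topic.

--Sketch: partitioned main table with a nonclustered columnstore index.
CREATE PARTITION FUNCTION pf_OrderDate (int)
AS RANGE RIGHT FOR VALUES (20180101, 20180201);
CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);
CREATE TABLE MainTable (
ProductKey int NOT NULL,
OrderDateKey int NOT NULL )
ON ps_OrderDate (OrderDateKey);
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_main
ON MainTable (ProductKey, OrderDateKey);
GO
--The staging table must match the main table and constrain its rows to the
--target partition; with these boundaries, partition 3 holds values >= 20180201.
CREATE TABLE StagingTable (
ProductKey int NOT NULL,
OrderDateKey int NOT NULL CHECK (OrderDateKey >= 20180201) );
INSERT INTO StagingTable VALUES (1, 20180215);
--Build the matching columnstore index, then switch the staged rows in.
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_staging
ON StagingTable (ProductKey, OrderDateKey);
ALTER TABLE StagingTable SWITCH TO MainTable PARTITION 3;
GO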

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


A. Change a clustered index to a clustered columnstore index
By using the CREATE CLUSTERED COLUMNSTORE INDEX statement with DROP_EXISTING = ON, you can:
Change a clustered index into a clustered columnstore index.
Rebuild a clustered columnstore index.
This example creates the xDimProduct table as a rowstore table with a clustered index, and then uses
CREATE CLUSTERED COLUMNSTORE INDEX to change the table from a rowstore table to a columnstore
table.
-- Uses AdventureWorks

IF EXISTS (SELECT name FROM sys.tables


WHERE name = N'xDimProduct'
AND object_id = OBJECT_ID (N'xDimProduct'))
DROP TABLE xDimProduct;

--Create a distributed table with a clustered index.


CREATE TABLE xDimProduct (ProductKey, ProductAlternateKey, ProductSubcategoryKey)
WITH ( DISTRIBUTION = HASH(ProductKey),
CLUSTERED INDEX (ProductKey) )
AS SELECT ProductKey, ProductAlternateKey, ProductSubcategoryKey FROM DimProduct;

--Change the existing clustered index


--to a clustered columnstore index with the same name.
--Look up the name of the index before running this statement.
CREATE CLUSTERED COLUMNSTORE INDEX <index_name>
ON xdimProduct
WITH ( DROP_EXISTING = ON );

B. Rebuild a clustered columnstore index


Building on the previous example, this example uses CREATE CLUSTERED COLUMNSTORE INDEX to rebuild
the existing clustered columnstore index called cci_xDimProduct.

--Rebuild the existing clustered columnstore index.


CREATE CLUSTERED COLUMNSTORE INDEX cci_xDimProduct
ON xdimProduct
WITH ( DROP_EXISTING = ON );

C. Change the name of a clustered columnstore index


To change the name of a clustered columnstore index, drop the existing clustered columnstore index, and then
recreate the index with a new name.
We recommend only doing this operation with a small table or an empty table. It takes a long time to drop a large
clustered columnstore index and rebuild with a different name.
Using the cci_xDimProduct clustered columnstore index from the previous example, this example drops the
cci_xDimProduct clustered columnstore index and then recreates the clustered columnstore index with the name
mycci_xDimProduct.

--For illustration purposes, drop the clustered columnstore index.


--The table continues to be distributed, but changes to a heap.
DROP INDEX cci_xdimProduct ON xDimProduct;

--Create a clustered columnstore index with a new name, mycci_xDimProduct.


CREATE CLUSTERED COLUMNSTORE INDEX mycci_xDimProduct
ON xdimProduct
WITH ( DROP_EXISTING = OFF );

D. Convert a columnstore table to a rowstore table with a clustered index


There might be a situation for which you want to drop a clustered columnstore index and create a clustered index.
This stores the table in rowstore format. This example converts a columnstore table to a rowstore table with a
clustered index of the same name. None of the data is lost. All data goes to the rowstore table, and the listed
columns become the key columns in the clustered index.
--Drop the clustered columnstore index and create a clustered rowstore index.
--All of the columns are stored in the rowstore clustered index.
--The columns listed become the key columns of the index.
CREATE CLUSTERED INDEX cci_xDimProduct
ON xdimProduct (ProductKey, ProductAlternateKey, ProductSubcategoryKey, WeightUnitMeasureCode)
WITH ( DROP_EXISTING = ON);

E. Convert a columnstore table back to a rowstore heap


Use DROP INDEX (SQL Server PDW) to drop the clustered columnstore index and convert the table to a
rowstore heap. This example drops the cci_xDimProduct index, converting the xDimProduct table to a rowstore
heap. The table continues to be distributed, but is stored as a heap.

--Drop the clustered columnstore index. The table continues to be distributed, but changes to a heap.
DROP INDEX cci_xdimProduct ON xdimProduct;
CREATE COLUMN ENCRYPTION KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a column encryption key with the initial set of values, encrypted with the specified column master keys.
This is a metadata operation. A CEK can have up to two values, which allows for rotation of the corresponding
column master key. Creating a CEK is required before any column in the database can be encrypted using the
Always Encrypted (Database Engine) feature. CEKs can also be created by using SQL Server Management Studio.
Before creating a CEK, you must define a CMK by using Management Studio or the CREATE COLUMN MASTER
KEY statement.
Transact-SQL Syntax Conventions

Syntax
CREATE COLUMN ENCRYPTION KEY key_name
WITH VALUES
(
COLUMN_MASTER_KEY = column_master_key_name,
ALGORITHM = 'algorithm_name',
ENCRYPTED_VALUE = varbinary_literal
)
[, (
COLUMN_MASTER_KEY = column_master_key_name,
ALGORITHM = 'algorithm_name',
ENCRYPTED_VALUE = varbinary_literal
) ]
[;]

Arguments
key_name
Is the name by which the column encryption key will be known in the database.
column_master_key_name
Specifies the name of the custom column master key (CMK) used for encrypting the column encryption key
(CEK).
algorithm_name
Name of the encryption algorithm used to encrypt the value of the column encryption key. The algorithm for the
system providers must be RSA_OAEP.
varbinary_literal
The encrypted CEK value BLOB.

WARNING
Never pass plaintext CEK values in this statement. Doing so will compromise the benefit of this feature.

Remarks
The CREATE COLUMN ENCRYPTION KEY statement must include at least one VALUES clause and may have
up to two. If only one is provided, you can use the ALTER COLUMN ENCRYPTION KEY statement to add a
second value later. You can also use the ALTER COLUMN ENCRYPTION KEY statement to remove a VALUES
clause.
Typically, a column encryption key is created with just one encrypted value. When a column master key needs to
be rotated (the current column master key needs to be replaced with the new column master key), you can add a
new value of the column encryption key, encrypted with the new column master key. This ensures that client
applications can access data encrypted with the column encryption key while the new column master key is
being made available to them. An Always Encrypted-enabled driver in a client application that does not have
access to the new master key will still be able to use the column encryption key value encrypted with the old
column master key to access sensitive data.
The encryption algorithms that Always Encrypted supports require the plaintext value to be 256 bits.
An encrypted value should be generated using a key store provider that encapsulates the key store holding the
column master key. For more information, see Always Encrypted (client development).
Use sys.columns (Transact-SQL), sys.column_encryption_keys (Transact-SQL), and
sys.column_encryption_key_values (Transact-SQL) to view information about column encryption keys.
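For example, a sketch along these lines (joining sys.column_master_keys as well) lists each CEK value, its algorithm, and the column master key that encrypts it:

--Sketch: one row per CEK value, with the protecting column master key.
SELECT cek.name AS cek_name,
cmk.name AS cmk_name,
cekv.encryption_algorithm_name
FROM sys.column_encryption_keys AS cek
JOIN sys.column_encryption_key_values AS cekv
ON cek.column_encryption_key_id = cekv.column_encryption_key_id
JOIN sys.column_master_keys AS cmk
ON cekv.column_master_key_id = cmk.column_master_key_id;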

Permissions
Requires the ALTER ANY COLUMN ENCRYPTION KEY permission.

Examples
A. Creating a column encryption key
The following example creates a column encryption key called MyCEK.

CREATE COLUMN ENCRYPTION KEY MyCEK


WITH VALUES
(
COLUMN_MASTER_KEY = MyCMK,
ALGORITHM = 'RSA_OAEP',
ENCRYPTED_VALUE =
0x01700000016C006F00630061006C006D0061006300680069006E0065002F006D0079002F003200660061006600640038003100320031
00340034003400650062003100610032006500300036003900330034003800610035006400340030003200330038006500660062006300
6300610031006300284FC4316518CF3328A6D9304F65DD2CE387B79D95D077B4156E9ED8683FC0E09FA848275C685373228762B02DF252
2AFF6D661782607B4A2275F2F922A5324B392C9D498E4ECFC61B79F0553EE8FB2E5A8635C4DBC0224D5A7F1B136C182DCDE32A00451F1A
7AC6B4492067FD0FAC7D3D6F4AB7FC0E86614455DBB2AB37013E0A5B8B5089B180CA36D8B06CDB15E95A7D06E25AACB645D42C85B0B7EA
2962BD3080B9A7CDB805C6279FE7DD6941E7EA4C2139E0D4101D8D7891076E70D433A214E82D9030CF1F40C503103075DEEB3D64537D15
D244F503C2750CF940B71967F51095BFA51A85D2F764C78704CAB6F015EA87753355367C5C9F66E465C0C66BADEDFDF76FB7E5C21A0D89
A2FCCA8595471F8918B1387E055FA0B816E74201CD5C50129D29C015895CD073925B6EA87CAF4A4FAF018C06A3856F5DFB724F42807543
F777D82B809232B465D983E6F19DFB572BEA7B61C50154605452A891190FB5A0C4E464862CF5EFAD5E7D91F7D65AA1A78F688E69A1EB09
8AB42E95C674E234173CD7E0925541AD5AE7CED9A3D12FDFE6EB8EA4F8AAD2629D4F5A18BA3DDCC9CF7F352A892D4BEBDC4A1303F9C683
DACD51A237E34B045EBE579A381E26B40DCFBF49EFFA6F65D17F37C6DBA54AA99A65D5573D4EB5BA038E024910A4D36B79A1D4E3C70349
DADFF08FD8B4DEE77FDB57F01CB276ED5E676F1EC973154F86
);
GO

B. Creating a column encryption key with two values


The following example creates a column encryption key called TwoValueCEK with two values.
CREATE COLUMN ENCRYPTION KEY TwoValueCEK
WITH VALUES
(
COLUMN_MASTER_KEY = CMK1,
ALGORITHM = 'RSA_OAEP',
ENCRYPTED_VALUE =
0x016E000001630075007200720065006E00740075007300650072002F006D0079002F0037006300380061003100310033003400320037
00380062003700300063003800310039006200390063003900340036006100660034003900650061003000320065003800620065003800
3400340065006C33A82ECF04A7185824B4545457AC5244CD9C219E64067B9520C0081B8399B58C2863F7494ABE3694BD87D55FFD7576FF
DC47C28F94ECC99577DF4FB8FA19AA95764FEF889CDE0F176DA5897B74382FBB22756CE2921050A09201A0EB6AF3D6091014C30146EA62
635EE8CBF0A8074DEDFF125CEA80D1C0F5E8C58750A07D270E2A8BF824EE4C0C156366BF26D38CCE49EBDD5639A2DF029A7DBAE5A5D111
F2F2FA3246DF8C2FA83C1E542C10570FADA98F6B29478DC58CE5CBDD407CCEFCDB97814525F6F32BECA266014AC346AC39C4F185C6C0F0
A24FEC4DFA015649624692DE7865B9827BA22C3B574C9FD169F822B609F902288C5880EB25F14BD990D871B1BC4BA3A5B237AF76D26354
773FA2A25CF4511AF58C911E601CFCB1905128C997844EED056C2AE7F0B48700AB41307E470FF9520997D0EB0D887DE11AFE574FFE845B
7DC6C03FEEE8D467236368FC0CB2FDBD54DADC65B10B3DE6C80DF8B7B3F8F3CE5BE914713EE7B1FA5B7A578359592B8A5FDFDDE5FF9F39
2BC87C3CD02FBA94582AC063BBB9FFAC803FD489E16BEB28C4E3374A8478C737236A0B232F5A9DDE4D119573F1AEAE94B2192B81575AD6
F57E670C1B2AB91045124DFDAEC2898F3F0112026DFC93BF9391D667D1AD7ED7D4E6BB119BBCEF1D1ADA589DD3E1082C3DAD13223BE438
EB9574DA04E9D8A06320CAC6D3EC21D5D1C2A0AA484C7C
),
(
COLUMN_MASTER_KEY = CMK2,
ALGORITHM = 'RSA_OAEP',
ENCRYPTED_VALUE =
0x016E000001630075007200720065006E00740075007300650072002F006D0079002F0064006500650063006200660034006100340031
00300038003400620035003300320036006600320063006200620035003000360038006500390062006100300032003000360061003700
3800310066001DDA6134C3B73A90D349C8905782DD819B428162CF5B051639BA46EC69A7C8C8F81591A92C395711493B25DCBCCC57836E
5B9F17A0713E840721D098F3F8E023ABCDFE2F6D8CC4339FC8F88630ED9EBADA5CA8EEAFA84164C1095B12AE161EABC1DF778C07F07D41
3AF1ED900F578FC00894BEE705EAC60F4A5090BBE09885D2EFE1C915F7B4C581D9CE3FDAB78ACF4829F85752E9FC985DEB8773889EE4A1
945BD554724803A6F5DC0A2CD5EFE001ABED8D61E8449E4FAA9E4DD392DA8D292ECC6EB149E843E395CDE0F98D04940A28C4B05F747149
B34A0BAEC04FFF3E304C84AF1FF81225E615B5F94E334378A0A888EF88F4E79F66CB377E3C21964AACB5049C08435FE84EEEF39D20A665
C17E04898914A85B3DE23D56575EBC682D154F4F15C37723E04974DB370180A9A579BC84F6BC9B5E7C223E5CBEE721E57EE07EFDCC0A32
57BBEBF9ADFFB00DBF7EF682EC1C4C47451438F90B4CF8DA709940F72CFDC91C6EB4E37B4ED7E2385B1FF71B28A1D2669FBEB18EA89F9D
391D2FDDEA0ED362E6A591AC64EF4AE31CA8766C259ECB77D01A7F5C36B8418F91C1BEADDD4491C80F0016B66421B4B788C55127135DA2
FA625FB7FD195FB40D90A6C67328602ECAF3EC4F5894BFD84A99EB4753BE0D22E0D4DE6A0ADFEDC80EB1B556749B4A8AD00E73B329C958
27AB91C0256347E85E3C5FD6726D0E1FE82C925D3DF4A9
);
GO

See Also
ALTER COLUMN ENCRYPTION KEY (Transact-SQL)
DROP COLUMN ENCRYPTION KEY (Transact-SQL)
CREATE COLUMN MASTER KEY (Transact-SQL)
Always Encrypted (Database Engine)
sys.column_encryption_keys (Transact-SQL)
sys.column_encryption_key_values (Transact-SQL)
sys.columns (Transact-SQL)
CREATE COLUMN MASTER KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a column master key metadata object in a database. A column master key metadata entry represents a
key, stored in an external key store, that is used to protect (encrypt) column encryption keys when using the
Always Encrypted (Database Engine) feature. Multiple column master keys allow for key rotation: periodically
changing the key to enhance security. You can create a column master key in a key store and its corresponding
metadata object in the database by using the Object Explorer in SQL Server Management Studio or PowerShell.
For details, see Overview of Key Management for Always Encrypted.
Transact-SQL Syntax Conventions

Syntax
CREATE COLUMN MASTER KEY key_name
WITH (
KEY_STORE_PROVIDER_NAME = 'key_store_provider_name',
KEY_PATH = 'key_path'
)
[;]

Arguments
key_name
Is the name by which the column master key will be known in the database.
key_store_provider_name
Specifies the name of a key store provider, which is a client-side software component that encapsulates a key store
containing the column master key. An Always Encrypted-enabled client driver uses the key store provider name to
look up the key store provider in the driver's registry of key store providers. The driver uses the provider to decrypt
column encryption keys, protected by a column master key, stored in the underlying key store. A plaintext value of
the column encryption key is then used to encrypt query parameters, corresponding to encrypted database
columns, or to decrypt query results from encrypted columns.
Always Encrypted-enabled client driver libraries include key store providers for popular key stores.
The set of available providers depends on the type and version of the client driver. Refer to the Always
Encrypted documentation for particular drivers:
Develop Applications using Always Encrypted with the .NET Framework Provider for SQL Server
The following table captures the names of the system providers:

KEY STORE PROVIDER NAME      UNDERLYING KEY STORE
'MSSQL_CERTIFICATE_STORE'    Windows Certificate Store
'MSSQL_CSP_PROVIDER'         A store, such as a hardware security module (HSM), that supports Microsoft CryptoAPI.
'MSSQL_CNG_STORE'            A store, such as a hardware security module (HSM), that supports Cryptography API: Next Generation.
'AZURE_KEY_VAULT'            See Getting Started with Azure Key Vault.

You can implement a custom key store provider, in order to store column master keys in a store for which there is
no built-in key store provider in your Always Encrypted-enabled client driver. Note that the names of custom key
store providers cannot start with 'MSSQL_', which is a prefix reserved for Microsoft key store providers.
key_path
The path of the key in the column master key store. The key path must be valid in the context of each client
application that is expected to encrypt or decrypt data stored in a column (indirectly) protected by the referenced
column master key and the client application needs to be permitted to access the key. The format of the key path
is specific to the key store provider. The following list describes the format of key paths for particular Microsoft
system key store providers.
Provider name: MSSQL_CERTIFICATE_STORE
Key path format: CertificateStoreLocation/CertificateStoreName/CertificateThumbprint
Where:
CertificateStoreLocation
Certificate store location, which must be Current User or Local Machine. For more information, see Local
Machine and Current User Certificate Stores.
CertificateStoreName
Certificate store name, for example 'My'.
CertificateThumbprint
Certificate thumbprint.
Examples:

N'CurrentUser/My/BBF037EC4A133ADCA89FFAEC16CA5BFA8878FB94'

N'LocalMachine/My/CA5BFA8878FB94BBF037EC4A133ADCA89FFAEC16'

Provider name: MSSQL_CSP_PROVIDER


Key path format: ProviderName/KeyIdentifier
Where:
ProviderName
The name of a Cryptography Service Provider (CSP), which implements CAPI, for the column master key
store. If you use an HSM as a key store, this must be the name of the CSP your HSM vendor supplies. The
provider must be installed on the client computer.
KeyIdentifier
Identifier of the key, used as a column master key, in the key store.
Examples:

N'My HSM CSP Provider/AlwaysEncryptedKey1'


Provider name: MSSQL_CNG_STORE
Key path format: ProviderName/KeyIdentifier
Where:
ProviderName
Name of the Key Storage Provider (KSP), which implements the Cryptography API: Next Generation (CNG),
for the column master key store. If you use an HSM as a key store, this must be the name of the KSP
your HSM vendor supplies. The provider needs to be installed on the client computer.
KeyIdentifier
Identifier of the key, used as a column master key, in the key store.
Examples:

N'My HSM CNG Provider/AlwaysEncryptedKey1'

Provider name: AZURE_KEY_VAULT


Key path format: KeyUrl
Where:
KeyUrl
The URL of the key in Azure Key Vault
Example:
N'https://myvault.vault.azure.net:443/keys/MyCMK/4c05f1a41b12488f9cba2ea964b6a700'

Remarks
Creating a column master key metadata entry is required before a column encryption key metadata entry can be
created in the database and before any column in the database can be encrypted using Always Encrypted. Note
that a column master key entry in the metadata does not contain the actual column master key, which must be
stored in an external column key store (outside of SQL Server). The key store provider name and the column
master key path in the metadata must be valid for a client application to be able to use the column master key to
decrypt a column encryption key encrypted with the column master key, and to query encrypted columns.
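
As a quick sketch, the stored metadata can be inspected through the sys.column_master_keys catalog view:

--Sketch: view column master key metadata stored in the database.
SELECT name, key_store_provider_name, key_path
FROM sys.column_master_keys;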

Permissions
Requires the ALTER ANY COLUMN MASTER KEY permission.

Examples
A. Creating a column master key
Creating a column master key metadata entry for a column master key stored in the Certificate Store, for client
applications that use the MSSQL_CERTIFICATE_STORE provider to access the column master key:

CREATE COLUMN MASTER KEY MyCMK


WITH (
KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
KEY_PATH = 'Current User/Personal/f2260f28d909d21c642a3d8e0b45a830e79a1420'
);

Creating a column master key metadata entry for a column master key that is accessed by client applications that
use the MSSQL_CNG_STORE provider:

CREATE COLUMN MASTER KEY MyCMK


WITH (
KEY_STORE_PROVIDER_NAME = N'MSSQL_CNG_STORE',
KEY_PATH = N'My HSM CNG Provider/AlwaysEncryptedKey'
);

Creating a column master key stored in the Azure Key Vault, for client applications that use the
AZURE_KEY_VAULT provider, to access the column master key.

CREATE COLUMN MASTER KEY MyCMK


WITH (
KEY_STORE_PROVIDER_NAME = N'AZURE_KEY_VAULT',
KEY_PATH = N'https://myvault.vault.azure.net:443/keys/MyCMK/4c05f1a41b12488f9cba2ea964b6a700');

Creating a CMK stored in a custom column master key store:

CREATE COLUMN MASTER KEY MyCMK


WITH (
KEY_STORE_PROVIDER_NAME = 'CUSTOM_KEY_STORE',
KEY_PATH = 'https://contoso.vault/sales_db_tce_key'
);

See Also
DROP COLUMN MASTER KEY (Transact-SQL)
CREATE COLUMN ENCRYPTION KEY (Transact-SQL)
sys.column_master_keys (Transact-SQL)
Always Encrypted (Database Engine)
Overview of Key Management for Always Encrypted
CREATE CONTRACT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new contract. A contract defines the message types that are used in a Service Broker conversation and
also determines which side of the conversation can send messages of that type. Each conversation follows a
contract. The initiating service specifies the contract for the conversation when the conversation starts. The target
service specifies the contracts that the target service accepts conversations for.
Transact-SQL Syntax Conventions

Syntax
CREATE CONTRACT contract_name
[ AUTHORIZATION owner_name ]
( { { message_type_name | [ DEFAULT ] }
SENT BY { INITIATOR | TARGET | ANY }
} [ ,...n] )
[ ; ]

Arguments
contract_name
Is the name of the contract to create. A new contract is created in the current database and owned by the principal
specified in the AUTHORIZATION clause. Server, database, and schema names cannot be specified. The
contract_name can be up to 128 characters.

NOTE
Do not create a contract that uses the keyword ANY for the contract_name. When you specify ANY for a contract name in
CREATE BROKER PRIORITY, the priority is considered for all contracts. It is not limited to a contract whose name is ANY.

AUTHORIZATION owner_name
Sets the owner of the contract to the specified database user or role. When the current user is dbo or sa,
owner_name can be the name of any valid user or role. Otherwise, owner_name must be the name of the current
user, the name of a user that the current user has impersonate permissions for, or the name of a role to which the
current user belongs. When this clause is omitted, the contract belongs to the current user.
message_type_name
Is the name of a message type to be included as part of the contract.
SENT BY
Specifies which endpoint can send a message of the indicated message type. Contracts document the messages
that services can use to have specific conversations. Each conversation has two endpoints: the initiator endpoint,
the service that started the conversation, and the target endpoint, the service that the initiator is contacting.
INITIATOR
Indicates that only the initiator of the conversation can send messages of the specified message type. A service
that starts a conversation is referred to as the initiator of the conversation.
TARGET
Indicates that only the target of the conversation can send messages of the specified message type. A service that
accepts a conversation that was started by another service is referred to as the target of the conversation.
ANY
Indicates that messages of this type can be sent by both the initiator and the target.
[ DEFAULT ]
Indicates that this contract supports messages of the default message type. By default, all databases contain a
message type named DEFAULT. This message type uses a validation of NONE. In the context of this clause,
DEFAULT is not a keyword, and must be delimited as an identifier. Microsoft SQL Server also provides a
DEFAULT contract which specifies the DEFAULT message type.
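For example, here is a minimal sketch of a contract that uses the default message type; the contract name is illustrative, and note that DEFAULT is delimited as an identifier:

CREATE CONTRACT [//Adventure-Works.com/Samples/DefaultOnly]
( [DEFAULT] SENT BY ANY );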

Remarks
The order of message types in the contract is not significant. After the target has received the first message,
Service Broker allows either side of the conversation to send any message allowed for that side of the
conversation at any time. For example, if the initiator of the conversation can send the message type
//Adventure-Works.com/Expenses/SubmitExpense, Service Broker allows the initiator to send any number
of SubmitExpense messages during the conversation.
The message types and directions in a contract cannot be changed. To change the AUTHORIZATION for a
contract, use the ALTER AUTHORIZATION statement.
A contract must allow the initiator to send a message. The CREATE CONTRACT statement fails when the contract
does not contain at least one message type that is SENT BY ANY or SENT BY INITIATOR.
Regardless of the contract, a service can always receive the message types
http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer ,
http://schemas.microsoft.com/SQL/ServiceBroker/Error , and
http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog . Service Broker uses these message types for system
messages to the application.
A contract cannot be a temporary object. Contract names starting with # are permitted, but are permanent objects.

Permissions
By default, members of the db_ddladmin or db_owner fixed database roles and the sysadmin fixed server role
can create contracts.
By default, the owner of the contract, members of the db_ddladmin or db_owner fixed database roles, and
members of the sysadmin fixed server role have REFERENCES permission on a contract.
The user executing the CREATE CONTRACT statement must have REFERENCES permission on all message
types specified.

Examples
A. Creating a contract
The following example creates an expense reimbursement contract based on three message types.
CREATE MESSAGE TYPE
[//Adventure-Works.com/Expenses/SubmitExpense]
VALIDATION = WELL_FORMED_XML ;

CREATE MESSAGE TYPE


[//Adventure-Works.com/Expenses/ExpenseApprovedOrDenied]
VALIDATION = WELL_FORMED_XML ;

CREATE MESSAGE TYPE


[//Adventure-Works.com/Expenses/ExpenseReimbursed]
VALIDATION = WELL_FORMED_XML ;

CREATE CONTRACT
[//Adventure-Works.com/Expenses/ExpenseSubmission]
( [//Adventure-Works.com/Expenses/SubmitExpense]
SENT BY INITIATOR,
[//Adventure-Works.com/Expenses/ExpenseApprovedOrDenied]
SENT BY TARGET,
[//Adventure-Works.com/Expenses/ExpenseReimbursed]
SENT BY TARGET
) ;

See Also
DROP CONTRACT (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE CREDENTIAL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Creates a server-level credential. A credential is a record that contains the authentication information that is
required to connect to a resource outside SQL Server. Most credentials include a Windows user and password.
For example, saving a database backup to some location might require SQL Server to provide special credentials
to access that location. For more information, see Credentials (Database Engine).

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

NOTE
To create the credential at the database level, use CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL). Use a server-level
credential when you need to use the same credential for multiple databases on the server. Use a database-scoped credential
to make the database more portable. When a database is moved to a new server, the database-scoped credential will move
with it. Use database-scoped credentials on SQL Database.

Transact-SQL Syntax Conventions

Syntax
CREATE CREDENTIAL credential_name
WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]
[ FOR CRYPTOGRAPHIC PROVIDER cryptographic_provider_name ]

Arguments
credential_name
Specifies the name of the credential being created. credential_name cannot start with the number sign (#).
System credentials start with ##. When using a shared access signature (SAS), this name must match the
container path, start with https, and must not contain a trailing forward slash. See example D below.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server. When the credential is used to
access the Azure Key Vault, the IDENTITY is the name of the key vault. See example C below. When the
credential is using a shared access signature (SAS ), the IDENTITY is SHARED ACCESS SIGNATURE. See
example D below.
SECRET ='secret'
Specifies the secret required for outgoing authentication.
When the credential is used to access the Azure Key Vault the SECRET argument of CREATE CREDENTIAL
requires the <Client ID> (without hyphens) and <Secret> of a Service Principal in the Azure Active Directory to
be passed together without a space between them. See example C below. When the credential is using a shared
access signature, the SECRET is the shared access signature token. See example D below. For information about
creating a stored access policy and a shared access signature on an Azure container, see Lesson 1: Create a stored
access policy and a shared access signature on an Azure container.
FOR CRYPTOGRAPHIC PROVIDER cryptographic_provider_name
Specifies the name of an Enterprise Key Management Provider (EKM). For more information about key
management, see Extensible Key Management (EKM).

Remarks
When IDENTITY is a Windows user, the secret can be the password. The secret is encrypted using the service
master key. If the service master key is regenerated, the secret is re-encrypted using the new service master key.
After creating a credential, you can map it to a SQL Server login by using CREATE LOGIN or ALTER LOGIN. A
SQL Server login can be mapped to only one credential, but a single credential can be mapped to multiple SQL
Server logins. For more information, see Credentials (Database Engine). A server-level credential can only be
mapped to a login, not to a database user.
Information about credentials is visible in the sys.credentials catalog view.
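For example (a sketch; the view exposes metadata only, never the secret):

SELECT credential_id, name, credential_identity, create_date
FROM sys.credentials;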
If there is no login mapped credential for the provider, the credential mapped to SQL Server service account is
used.
A login can have multiple credentials mapped to it as long as they are used with distinctive providers. There must
be only one mapped credential per provider per login. The same credential can be mapped to other logins.

Permissions
Requires ALTER ANY CREDENTIAL permission.

Examples
A. Basic Example
The following example creates the credential called AlterEgo. The credential contains the Windows user Mary5
and a password.

CREATE CREDENTIAL AlterEgo WITH IDENTITY = 'Mary5',


SECRET = '<EnterStrongPasswordHere>';
GO

B. Creating a Credential for EKM


The following example uses a previously created account called User1OnEKM on an EKM module through the
EKM's Management tools, with a basic account type and password. The sysadmin account on the server creates a
credential that is used to connect to the EKM account, and assigns it to the User1 SQL Server account:
CREATE CREDENTIAL CredentialForEKM
WITH IDENTITY='User1OnEKM', SECRET='<EnterStrongPasswordHere>'
FOR CRYPTOGRAPHIC PROVIDER MyEKMProvider;
GO

/* Modify the login to assign the cryptographic provider credential */


ALTER LOGIN Login1
ADD CREDENTIAL CredentialForEKM;

/* Modify the login to assign a non cryptographic provider credential */


ALTER LOGIN Login1
WITH CREDENTIAL = AlterEgo;
GO

C. Creating a Credential for EKM Using the Azure Key Vault


The following example creates a SQL Server credential for the Database Engine to use when accessing the Azure
Key Vault using the SQL Server Connector for Microsoft Azure Key Vault. For a complete example of using
the SQL Server Connector, see Extensible Key Management Using Azure Key Vault (SQL Server).

IMPORTANT
The IDENTITY argument of CREATE CREDENTIAL requires the key vault name. The SECRET argument of CREATE
CREDENTIAL requires the <Client ID> (without hyphens) and <Secret> to be passed together without a space between
them.

In the following example, the Client ID (EF5C8E09-4D2A-4A76-9998-D93440D8115D) is stripped of the hyphens and
entered as the string EF5C8E094D2A4A769998D93440D8115D and the Secret is represented by the string
SECRET_DBEngine.

USE master;
CREATE CREDENTIAL Azure_EKM_TDE_cred
WITH IDENTITY = 'ContosoKeyVault',
SECRET = 'EF5C8E094D2A4A769998D93440D8115DSECRET_DBEngine'
FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov ;

The following example creates the same credential by using variables for the Client ID and Secret strings, which
are then concatenated together to form the SECRET argument. The REPLACE function is used to remove the
hyphens from the Client ID.

DECLARE @AuthClientId uniqueidentifier = 'EF5C8E09-4D2A-4A76-9998-D93440D8115D';


DECLARE @AuthClientSecret varchar(200) = 'SECRET_DBEngine';
DECLARE @pwd varchar(max) = REPLACE(CONVERT(varchar(36), @AuthClientId) , '-', '') + @AuthClientSecret;

EXEC ('CREATE CREDENTIAL Azure_EKM_TDE_cred
WITH IDENTITY = ''ContosoKeyVault'', SECRET = ''' + @pwd + '''
FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov ;');

D. Creating a Credential using a SAS Token


Applies to: SQL Server 2014 (12.x) through current version.
The following example creates a shared access signature credential using a SAS token. For a tutorial on creating a
stored access policy and a shared access signature on an Azure container, and then creating a credential using the
shared access signature, see Tutorial: Using the Microsoft Azure Blob storage service with SQL Server 2016
databases.
IMPORTANT
THE CREDENTIAL NAME argument requires that the name match the container path, start with https and not contain a
trailing forward slash. The IDENTITY argument requires the name, SHARED ACCESS SIGNATURE. The SECRET argument
requires the shared access signature token.

USE master
-- The credential name must match the container path, start with https, and must not contain a trailing forward slash.
CREATE CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE' -- this is a mandatory string; do not change it.
, SECRET = 'sharedaccesssignature' -- this is the shared access signature token
GO

See Also
Credentials (Database Engine)
ALTER CREDENTIAL (Transact-SQL)
DROP CREDENTIAL (Transact-SQL)
CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL)
CREATE LOGIN (Transact-SQL)
ALTER LOGIN (Transact-SQL)
sys.credentials (Transact-SQL)
Lesson 2: Create a SQL Server credential using a shared access signature
Shared Access Signatures
CREATE CRYPTOGRAPHIC PROVIDER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a cryptographic provider within SQL Server from an Extensible Key Management (EKM ) provider.
Transact-SQL Syntax Conventions

Syntax
CREATE CRYPTOGRAPHIC PROVIDER provider_name
FROM FILE = path_of_DLL

Arguments
provider_name
Is the name of the Extensible Key Management provider.
path_of_DLL
Is the path of the .dll file that implements the SQL Server Extensible Key Management interface. When using the
SQL Server Connector for Microsoft Azure Key Vault the default location is 'C:\Program Files\Microsoft
SQL Server Connector for Microsoft Azure Key Vault\Microsoft.AzureKeyVaultService.EKM.dll'.

Remarks
All keys created by a provider will reference the provider by its GUID. The GUID is retained across all versions of
the DLL.
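As a sketch, the registered providers and their GUIDs can be inspected in the sys.cryptographic_providers catalog view:

SELECT provider_id, name, guid, version, dll_path, is_enabled
FROM sys.cryptographic_providers;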
The DLL that implements the SQLEKM interface must be digitally signed by using any certificate. SQL Server will
verify the signature. This includes its certificate chain, which must have its root installed at the Trusted Root Cert
Authorities location on a Windows system. If the signature is not verified correctly, the CREATE
CRYPTOGRAPHIC PROVIDER statement will fail. For more information about certificates and certificate chains,
see SQL Server Certificates and Asymmetric Keys.
When an EKM provider dll does not implement all of the necessary methods, CREATE CRYPTOGRAPHIC
PROVIDER can return error 33085:
One or more methods cannot be found in cryptographic provider library '%.*ls'.

When the header file used to create the EKM provider dll is out of date, CREATE CRYPTOGRAPHIC PROVIDER
can return error 33032:
SQL Crypto API version '%02d.%02d' implemented by provider is not supported. Supported version is '%02d.%02d'.

Permissions
Requires CONTROL SERVER permission or membership in the sysadmin fixed server role.
Examples
The following example creates a cryptographic provider called SecurityProvider in SQL Server from a .dll file. The
.dll file is named c:\SecurityProvider\SecurityProvider_v1.dll and it is installed on the server. The provider's
certificate must first be installed on the server.

-- Install the provider


CREATE CRYPTOGRAPHIC PROVIDER SecurityProvider
FROM FILE = 'C:\SecurityProvider\SecurityProvider_v1.dll';

See Also
Extensible Key Management (EKM)
ALTER CRYPTOGRAPHIC PROVIDER (Transact-SQL)
DROP CRYPTOGRAPHIC PROVIDER (Transact-SQL)
CREATE SYMMETRIC KEY (Transact-SQL)
Extensible Key Management Using Azure Key Vault (SQL Server)
CREATE DATABASE (SQL Server Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new database and the files used to store the database, a database snapshot, or attaches a database
from the detached files of a previously created database.
Transact-SQL Syntax Conventions

Syntax
Create a database
CREATE DATABASE database_name
[ CONTAINMENT = { NONE | PARTIAL } ]
[ ON
[ PRIMARY ] <filespec> [ ,...n ]
[ , <filegroup> [ ,...n ] ]
[ LOG ON <filespec> [ ,...n ] ]
]
[ COLLATE collation_name ]
[ WITH <option> [,...n ] ]
[;]

<option> ::=
{
FILESTREAM ( <filestream_option> [,...n ] )
| DEFAULT_FULLTEXT_LANGUAGE = { lcid | language_name | language_alias }
| DEFAULT_LANGUAGE = { lcid | language_name | language_alias }
| NESTED_TRIGGERS = { OFF | ON }
| TRANSFORM_NOISE_WORDS = { OFF | ON}
| TWO_DIGIT_YEAR_CUTOFF = <two_digit_year_cutoff>
| DB_CHAINING { OFF | ON }
| TRUSTWORTHY { OFF | ON }
}

<filestream_option> ::=
{
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
| DIRECTORY_NAME = 'directory_name'
}

<filespec> ::=
{
(
NAME = logical_file_name ,
FILENAME = { 'os_file_name' | 'filestream_path' }
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | GB | TB | % ] ]
)
}

<filegroup> ::=
{
FILEGROUP filegroup_name [ [ CONTAINS FILESTREAM ] [ DEFAULT ] | CONTAINS MEMORY_OPTIMIZED_DATA ]
<filespec> [ ,...n ]
}

<service_broker_option> ::=
{
ENABLE_BROKER
| NEW_BROKER
| ERROR_BROKER_CONVERSATIONS
}

Attach a database
CREATE DATABASE database_name
ON <filespec> [ ,...n ]
FOR { { ATTACH [ WITH <attach_database_option> [ , ...n ] ] }
| ATTACH_REBUILD_LOG }
[;]

<attach_database_option> ::=
{
<service_broker_option>
| RESTRICTED_USER
| FILESTREAM ( DIRECTORY_NAME = { 'directory_name' | NULL } )
}

Create a database snapshot

CREATE DATABASE database_snapshot_name


ON
(
NAME = logical_file_name,
FILENAME = 'os_file_name'
) [ ,...n ]
AS SNAPSHOT OF source_database_name
[;]

Arguments
database_name
Is the name of the new database. Database names must be unique within an instance of SQL Server and
comply with the rules for identifiers.
database_name can be a maximum of 128 characters, unless a logical name is not specified for the log file. If a
logical log file name is not specified, SQL Server generates the logical_file_name and the os_file_name for the
log by appending a suffix to database_name. This limits database_name to 123 characters so that the
generated logical file name is no more than 128 characters.
If a data file name is not specified, SQL Server uses database_name as both the logical_file_name and as the
os_file_name. The default path is obtained from the registry. The default path can be changed by using the
Server Properties (Database Settings Page) in Management Studio. Changing the default path requires
restarting SQL Server.
CONTAINMENT = { NONE | PARTIAL }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017
Specifies the containment status of the database. NONE = non-contained database. PARTIAL = partially
contained database.
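For example, a minimal sketch; the database name is illustrative, and partial containment assumes the instance-level 'contained database authentication' option is enabled:

--Sketch: create a partially contained database (illustrative name).
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO
CREATE DATABASE MyContainedDB CONTAINMENT = PARTIAL;
GO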
ON
Specifies that the disk files used to store the data sections of the database, data files, are explicitly defined. ON is
required when followed by a comma-separated list of <filespec> items that define the data files for the primary
filegroup. The list of files in the primary filegroup can be followed by an optional, comma-separated list of
<filegroup> items that define user filegroups and their files.
PRIMARY
Specifies that the associated <filespec> list defines the primary file. The first file specified in the <filespec>
entry in the primary filegroup becomes the primary file. A database can have only one primary file. For more
information, see Database Files and Filegroups.
If PRIMARY is not specified, the first file listed in the CREATE DATABASE statement becomes the primary file.
LOG ON
Specifies that the disk files used to store the database log, log files, are explicitly defined. LOG ON is followed
by a comma-separated list of <filespec> items that define the log files. If LOG ON is not specified, one log file is
automatically created, which has a size that is 25 percent of the sum of the sizes of all the data files for the
database, or 512 KB, whichever is larger. This file is placed in the default log-file location. For information about
this location, see View or Change the Default Locations for Data and Log Files (SQL Server Management
Studio).
LOG ON cannot be specified on a database snapshot.
COLLATE collation_name
Specifies the default collation for the database. Collation name can be either a Windows collation name or a
SQL collation name. If not specified, the database is assigned the default collation of the instance of SQL Server.
A collation name cannot be specified on a database snapshot.
A collation name cannot be specified with the FOR ATTACH or FOR ATTACH_REBUILD_LOG clauses. For
information about how to change the collation of an attached database, visit this Microsoft Web site.
For more information about the Windows and SQL collation names, see COLLATE (Transact-SQL).

NOTE
Contained databases are collated differently than non-contained databases. Please see Contained Database Collations for
more information.

WITH <option>
<filestream_options>
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the level of non-transactional FILESTREAM access to the database.

VALUE        DESCRIPTION
OFF          Non-transactional access is disabled.
READ_ONLY    FILESTREAM data in this database can be read by non-transactional processes.
FULL         Full non-transactional access to FILESTREAM FileTables is enabled.

DIRECTORY_NAME = <directory_name>
Applies to: SQL Server 2012 (11.x) through SQL Server 2017
A Windows-compatible directory name. This name should be unique among all the Database_Directory
names in the SQL Server instance. Uniqueness comparison is case-insensitive, regardless of SQL Server
collation settings. This option should be set before creating a FileTable in this database.
The following options are allowable only when CONTAINMENT has been set to PARTIAL. If
CONTAINMENT is set to NONE, errors will occur.
DEFAULT_FULLTEXT_LANGUAGE = <lcid> | <language name> | <language alias>
Applies to: SQL Server 2012 (11.x) through SQL Server 2017

See Configure the default full-text language Server Configuration Option for a full description of this option.

DEFAULT_LANGUAGE = <lcid> | <language name> | <language alias>


Applies to: SQL Server 2012 (11.x) through SQL Server 2017

See Configure the default language Server Configuration Option for a full description of this option.

NESTED_TRIGGERS = { OFF | ON }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017

See Configure the nested triggers Server Configuration Option for a full description of this option.

TRANSFORM_NOISE_WORDS = { OFF | ON }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017

See transform noise words Server Configuration Option for a full description of this option.

TWO_DIGIT_YEAR_CUTOFF = { 2049 | <any year between 1753 and 9999> }


Four digits representing a year. 2049 is the default value. See Configure the two digit year cutoff Server
Configuration Option for a full description of this option.
DB_CHAINING { OFF | ON }
When ON is specified, the database can be the source or target of a cross-database ownership chain.
When OFF, the database cannot participate in cross-database ownership chaining. The default is OFF.

IMPORTANT
The instance of SQL Server will recognize this setting when the cross db ownership chaining server option is 0
(OFF). When cross db ownership chaining is 1 (ON), all user databases can participate in cross-database
ownership chains, regardless of the value of this option. This option is set by using sp_configure.

To set this option, requires membership in the sysadmin fixed server role. The DB_CHAINING option
cannot be set on these system databases: master, model, tempdb.
TRUSTWORTHY { OFF | ON }
When ON is specified, database modules (for example, views, user-defined functions, or stored
procedures) that use an impersonation context can access resources outside the database.
When OFF, database modules in an impersonation context cannot access resources outside the database.
The default is OFF.
TRUSTWORTHY is set to OFF whenever the database is attached.
By default, all system databases except the msdb database have TRUSTWORTHY set to OFF. The value
cannot be changed for the model and tempdb databases. We recommend that you never set the
TRUSTWORTHY option to ON for the master database.
To set this option, requires membership in the sysadmin fixed server role.
FOR ATTACH [ WITH <attach_database_option> ]
Specifies that the database is created by attaching an existing set of operating system files. There must be a
<filespec> entry that specifies the primary file. The only other <filespec> entries required are those for any
files that have a different path from when the database was first created or last attached. A <filespec> entry
must be specified for these files.
FOR ATTACH requires the following:
All data files (MDF and NDF) must be available.
If multiple log files exist, they must all be available.
If a read/write database has a single log file that is currently unavailable, and if the database was shut
down with no users or open transactions before the attach operation, FOR ATTACH automatically
rebuilds the log file and updates the primary file. In contrast, for a read-only database, the log cannot be
rebuilt because the primary file cannot be updated. Therefore, when you attach a read-only database
with a log that is unavailable, you must provide the log files, or the files in the FOR ATTACH clause.
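
A minimal sketch of the basic attach case follows; the file paths are illustrative. NAME can be omitted from each <filespec> entry because a FOR ATTACH clause is specified:

--Sketch: attach a database from existing, previously detached files.
CREATE DATABASE MyAttachedDB
ON ( FILENAME = 'C:\SQLData\MyDB.mdf' ),
( FILENAME = 'C:\SQLData\MyDB_log.ldf' )
FOR ATTACH;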

NOTE
A database created by a more recent version of SQL Server cannot be attached in earlier versions.

In SQL Server, any full-text files that are part of the database that is being attached will be attached with the
database. To specify a new path of the full-text catalog, specify the new location without the full-text operating
system file name. For more information, see the Examples section.
Attaching a database that contains a FILESTREAM option of "Directory name", into a SQL Server instance will
prompt SQL Server to verify that the Database_Directory name is unique. If it is not, the attach operation fails
with the error, "FILESTREAM Database_Directory name <name> is not unique in this SQL Server instance". To
avoid this error, the optional parameter, directory_name, should be passed in to this operation.
FOR ATTACH cannot be specified on a database snapshot.
FOR ATTACH can specify the RESTRICTED_USER option. RESTRICTED_USER allows for only members of the
db_owner fixed database role and dbcreator and sysadmin fixed server roles to connect to the database, but
does not limit their number. Attempts by unqualified users are refused.
If the database uses Service Broker, use the WITH <service_broker_option> in your FOR ATTACH clause:
<service_broker_option> Controls Service Broker message delivery and the Service Broker identifier for the
database. Service Broker options can only be specified when the FOR ATTACH clause is used.
ENABLE_BROKER
Specifies that Service Broker is enabled for the specified database. That is, message delivery is started, and
is_broker_enabled is set to true in the sys.databases catalog view. The database retains the existing Service
Broker identifier.
NEW_BROKER
Creates a new service_broker_guid value in both sys.databases and the restored database and ends all
conversation endpoints with clean up. The broker is enabled, but no message is sent to the remote conversation
endpoints. Any route that references the old Service Broker identifier must be re-created with the new identifier.
ERROR_BROKER_CONVERSATIONS
Ends all conversations with an error stating that the database is attached or restored. The broker is disabled
until this operation is completed and then enabled. The database retains the existing Service Broker identifier.
When you attach a replicated database that was copied instead of being detached, consider the following:
If you attach the database to the same server instance and version as the original database, no additional
steps are required.
If you attach the database to the same server instance but with an upgraded version, you must execute
sp_vupgrade_replication to upgrade replication after the attach operation is complete.
If you attach the database to a different server instance, regardless of version, you must execute
sp_removedbreplication to remove replication after the attach operation is complete.

NOTE
Attach works with the vardecimal storage format, but the SQL Server Database Engine must be upgraded to at least
SQL Server 2005 SP2. You cannot attach a database using vardecimal storage format to an earlier version of SQL Server.
For more information about the vardecimal storage format, see Data Compression.

When a database is first attached or restored to a new instance of SQL Server, a copy of the database master
key (encrypted by the service master key) is not yet stored in the server. You must use the OPEN MASTER
KEY statement to decrypt the database master key (DMK). Once the DMK has been decrypted, you have the
option of enabling automatic decryption in the future by using the ALTER MASTER KEY REGENERATE
statement to provision the server with a copy of the DMK, encrypted with the service master key (SMK). When
a database has been upgraded from an earlier version, the DMK should be regenerated to use the newer AES
algorithm. For more information about regenerating the DMK, see ALTER MASTER KEY (Transact-SQL). The
time required to regenerate the DMK key to upgrade to AES depends upon the number of objects protected by
the DMK. Regenerating the DMK key to upgrade to AES is only necessary once, and has no impact on future
regenerations as part of a key rotation strategy. For information about how to upgrade a database by using
attach, see Upgrade a Database Using Detach and Attach (Transact-SQL).

IMPORTANT
We recommend that you do not attach databases from unknown or untrusted sources. Such databases could contain
malicious code that might execute unintended Transact-SQL code or cause errors by modifying the schema or the physical
database structure. Before you use a database from an unknown or untrusted source, run DBCC CHECKDB on the
database on a nonproduction server, and also examine the code, such as stored procedures or other user-defined code, in
the database.

NOTE
The TRUSTWORTHY and DB_CHAINING options have no effect when attaching a database.

FOR ATTACH_REBUILD_LOG
Specifies that the database is created by attaching an existing set of operating system files. This option is limited
to read/write databases. There must be a <filespec> entry specifying the primary file. If one or more transaction
log files are missing, the log file is rebuilt. The ATTACH_REBUILD_LOG automatically creates a new, 1 MB log
file. This file is placed in the default log-file location. For information about this location, see View or Change the
Default Locations for Data and Log Files (SQL Server Management Studio).
NOTE
If the log files are available, the Database Engine uses those files instead of rebuilding the log files.

FOR ATTACH_REBUILD_LOG requires the following:


A clean shutdown of the database.
All data files (MDF and NDF) must be available.

IMPORTANT
This operation breaks the log backup chain. We recommend that a full database backup be performed after the operation
is completed. For more information, see BACKUP (Transact-SQL).

Typically, FOR ATTACH_REBUILD_LOG is used when you copy a read/write database with a large log to
another server where the copy will be used mostly, or only, for read operations, and therefore requires less log
space than the original database.
FOR ATTACH_REBUILD_LOG cannot be specified on a database snapshot.
For more information about attaching and detaching databases, see Database Detach and Attach (SQL Server).
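A minimal sketch of the typical scenario, reusing the data-file path from example C later in this topic and assuming the log file was not copied to the destination server:

-- The database must have been shut down cleanly; a new 1 MB log file is
-- rebuilt in the default log-file location.
CREATE DATABASE Archive
ON (FILENAME = 'D:\SalesData\archdat1.mdf')
FOR ATTACH_REBUILD_LOG;
GO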
<filespec>
Controls the file properties.
NAME logical_file_name
Specifies the logical name for the file. NAME is required when FILENAME is specified, except when specifying
one of the FOR ATTACH clauses. A FILESTREAM filegroup cannot be named PRIMARY.
logical_file_name
Is the logical name used in SQL Server when referencing the file. Logical_file_name must be unique in the
database and comply with the rules for identifiers. The name can be a character or Unicode constant, or a
regular or delimited identifier.
FILENAME { 'os_file_name' | 'filestream_path' }
Specifies the operating system (physical) file name.
' os_file_name '
Is the path and file name used by the operating system when you create the file. The file must reside on one of
the following devices: the local server on which SQL Server is installed, a Storage Area Network (SAN), or an
iSCSI-based network. The specified path must exist before executing the CREATE DATABASE statement. For
more information, see "Database Files and Filegroups" in the Remarks section.
The SIZE, MAXSIZE, and FILEGROWTH parameters cannot be set when a UNC path is specified for the file.
If the file is on a raw partition, os_file_name must specify only the drive letter of an existing raw partition. Only
one data file can be created on each raw partition.
Data files should not be put on compressed file systems unless the files are read-only secondary files, or the
database is read-only. Log files should never be put on compressed file systems.
' filestream_path '
For a FILESTREAM filegroup, FILENAME refers to a path where FILESTREAM data will be stored. The path up
to the last folder must exist, and the last folder must not exist. For example, if you specify the path
C:\MyFiles\MyFilestreamData, C:\MyFiles must exist before you run CREATE DATABASE, but the
MyFilestreamData folder must not exist.
The filegroup and file ( <filespec> ) must be created in the same statement.
The SIZE and FILEGROWTH properties do not apply to a FILESTREAM filegroup.
SIZE size
Specifies the size of the file.
SIZE cannot be specified when the os_file_name is specified as a UNC path. SIZE does not apply to a
FILESTREAM filegroup.
size
Is the initial size of the file.
When size is not supplied for the primary file, the Database Engine uses the size of the primary file in the model
database. The default size of model is 8 MB (beginning with SQL Server 2016 (13.x)) or 1 MB (for earlier
versions). When a secondary data file or log file is specified, but size is not specified for the file, the Database
Engine makes the file 8 MB (beginning with SQL Server 2016 (13.x)) or 1 MB (for earlier versions). The size
specified for the primary file must be at least as large as the primary file of the model database.
The kilobyte (KB), megabyte (MB), gigabyte (GB), or terabyte (TB) suffixes can be used. The default is MB.
Specify a whole number; do not include a decimal. Size is an integer value. For values greater than
2147483647, use larger units.
MAXSIZE max_size
Specifies the maximum size to which the file can grow. MAXSIZE cannot be specified when the os_file_name is
specified as a UNC path.
max_size
Is the maximum file size. The KB, MB, GB, and TB suffixes can be used. The default is MB. Specify a whole
number; do not include a decimal. If max_size is not specified, the file grows until the disk is full. Max_size is an
integer value. For values greater than 2147483647, use larger units.
UNLIMITED
Specifies that the file grows until the disk is full. In SQL Server, a log file specified with unlimited growth has a
maximum size of 2 TB, and a data file has a maximum size of 16 TB.

NOTE
There is no maximum size when this option is specified for a FILESTREAM container. It continues to grow until the disk is
full.

FILEGROWTH growth_increment
Specifies the automatic growth increment of the file. The FILEGROWTH setting for a file cannot exceed the
MAXSIZE setting. FILEGROWTH cannot be specified when the os_file_name is specified as a UNC path.
FILEGROWTH does not apply to a FILESTREAM filegroup.
growth_increment
Is the amount of space added to the file every time new space is required.
The value can be specified in MB, KB, GB, TB, or percent (%). If a number is specified without an MB, KB, or %
suffix, the default is MB. When % is specified, the growth increment size is the specified percentage of the size of
the file at the time the increment occurs. The size specified is rounded to the nearest 64 KB, and the minimum
value is 64 KB.
A value of 0 indicates that automatic growth is off and no additional space is allowed.
If FILEGROWTH is not specified, the default values are:
VERSION                               DEFAULT VALUES
Beginning SQL Server 2016 (13.x)      Data 64 MB. Log files 64 MB.
Beginning SQL Server 2005             Data 1 MB. Log files 10%.
Prior to SQL Server 2005              Data 10%. Log files 10%.
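To verify which growth setting a file actually received, you can query the sys.database_files catalog view from within the database; a minimal sketch:

-- growth is a percentage when is_percent_growth = 1; otherwise it is a
-- number of 8-KB pages (growth * 8 / 1024 = megabytes).
SELECT name, growth, is_percent_growth
FROM sys.database_files;
GO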

<filegroup>
Controls the filegroup properties. Filegroup cannot be specified on a database snapshot.
FILEGROUP filegroup_name
Is the logical name of the filegroup.
filegroup_name
filegroup_name must be unique in the database and cannot be the system-provided names PRIMARY and
PRIMARY_LOG. The name can be a character or Unicode constant, or a regular or delimited identifier. The
name must comply with the rules for identifiers.
CONTAINS FILESTREAM
Specifies that the filegroup stores FILESTREAM binary large objects (BLOBs) in the file system.
CONTAINS MEMORY_OPTIMIZED_DATA
Applies to: SQL Server 2014 (12.x) through SQL Server 2017
Specifies that the filegroup stores memory-optimized data in the file system. For more information, see In-Memory
OLTP (In-Memory Optimization). Only one MEMORY_OPTIMIZED_DATA filegroup is allowed per
database. For code samples that create a filegroup to store memory-optimized data, see Creating a Memory-
Optimized Table and a Natively Compiled Stored Procedure.
DEFAULT
Specifies the named filegroup is the default filegroup in the database.
database_snapshot_name
Is the name of the new database snapshot. Database snapshot names must be unique within an instance of
SQL Server and comply with the rules for identifiers. database_snapshot_name can be a maximum of 128
characters.
ON ( NAME = logical_file_name, FILENAME = 'os_file_name' ) [ ,...n ]
For creating a database snapshot, specifies a list of files in the source database. For the snapshot to work, all the
data files must be specified individually. However, log files are not allowed for database snapshots.
FILESTREAM filegroups are not supported by database snapshots. If a FILESTREAM data file is included in a
CREATE DATABASE ON clause, the statement will fail and an error will be raised.
For descriptions of NAME and FILENAME and their values see the descriptions of the equivalent <filespec>
values.

NOTE
When you create a database snapshot, the other <filespec> options and the keyword PRIMARY are disallowed.

AS SNAPSHOT OF source_database_name
Specifies that the database being created is a database snapshot of the source database specified by
source_database_name. The snapshot and source database must be on the same instance.
For more information, see "Database Snapshots" in the Remarks section.

Remarks
The master database should be backed up whenever a user database is created, modified, or dropped.
The CREATE DATABASE statement must run in autocommit mode (the default transaction management mode)
and is not allowed in an explicit or implicit transaction.
You can use one CREATE DATABASE statement to create a database and the files that store the database. SQL
Server implements the CREATE DATABASE statement by using the following steps:
1. SQL Server uses a copy of the model database to initialize the database and its metadata.
2. A service broker GUID is assigned to the database.
3. The Database Engine then fills the rest of the database with empty pages, except for pages that have
internal data that records how the space is used in the database.
A maximum of 32,767 databases can be specified on an instance of SQL Server.
Each database has an owner that can perform special activities in the database. The owner is the user that
creates the database. The database owner can be changed by using sp_changedbowner.
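For example, a minimal sketch that changes the owner of the Sales database by using ALTER AUTHORIZATION, the current alternative to sp_changedbowner; the login name is hypothetical:

-- [SalesAdminLogin] is a hypothetical login.
ALTER AUTHORIZATION ON DATABASE::Sales TO [SalesAdminLogin];
GO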
Some database features depend on features or capabilities present in the file system for full functionality of a
database. Some examples of features that depend on file system feature set include:
DBCC CHECKDB
FileStream
Online backups using VSS and file snapshots
Database snapshot creation
Memory Optimized Data filegroup

Database Files and Filegroups


Every database has at least two files, a primary file and a transaction log file, and at least one filegroup. A
maximum of 32,767 files and 32,767 filegroups can be specified for each database.
When you create a database, make the data files as large as possible based on the maximum amount of data
you expect in the database.
We recommend that you use a Storage Area Network (SAN), iSCSI-based network, or locally attached disk for
the storage of your SQL Server database files, because this configuration optimizes SQL Server performance
and reliability.

Database Snapshots
You can use the CREATE DATABASE statement to create a read-only, static view (a database snapshot) of the
source database. A database snapshot is transactionally consistent with the source database as it existed at the
time when the snapshot was created. A source database can have multiple snapshots.

NOTE
When you create a database snapshot, the CREATE DATABASE statement cannot reference log files, offline files, restoring
files, and defunct files.

If creating a database snapshot fails, the snapshot becomes suspect and must be deleted. For more information,
see DROP DATABASE (Transact-SQL ).
Each snapshot persists until it is deleted by using DROP DATABASE.
For more information, see Database Snapshots (SQL Server).

Database Options
Several database options are automatically set whenever you create a database. For a list of these options, see
ALTER DATABASE SET Options (Transact-SQL ).

The model Database and Creating New Databases


All user-defined objects in the model database are copied to all newly created databases. You can add any
objects, such as tables, views, stored procedures, data types, and so on, to the model database to be included in
all newly created databases.
When a CREATE DATABASE database_name statement is specified without additional size parameters, the
primary data file is made the same size as the primary file in the model database.
Unless FOR ATTACH is specified, each new database inherits the database option settings from the model
database. For example, the database option auto shrink is set to true in model and in any new databases you
create. If you change the options in the model database, these new option settings are used in any new
databases you create. Changing operations in the model database does not affect existing databases. If FOR
ATTACH is specified on the CREATE DATABASE statement, the new database inherits the database option
settings of the original database.
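A minimal sketch of this inheritance, using a hypothetical table added to model:

-- dbo.StandardAudit is a hypothetical object.
USE model;
GO
CREATE TABLE dbo.StandardAudit (EventTime datetime2 NOT NULL);
GO
-- NewDb now contains dbo.StandardAudit because it was copied from model.
CREATE DATABASE NewDb;
GO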

Viewing Database Information


You can use catalog views, system functions, and system stored procedures to return information about
databases, files, and filegroups. For more information, see System Views (Transact-SQL ).
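For example, the following sketch joins two of those catalog views to list every database on the instance together with its files:

SELECT d.name AS database_name,
    mf.name AS logical_name,
    mf.physical_name,
    mf.type_desc
FROM sys.databases AS d
JOIN sys.master_files AS mf
    ON d.database_id = mf.database_id
ORDER BY d.name;
GO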

Permissions
Requires CREATE DATABASE, CREATE ANY DATABASE, or ALTER ANY DATABASE permission.
To maintain control over disk use on an instance of SQL Server, permission to create databases is typically
limited to a few login accounts.
The following example provides the permission to create a database to the database user Fay.

USE master;
GO
GRANT CREATE DATABASE TO [Fay];
GO

Permissions on Data and Log Files


In SQL Server, certain permissions are set on the data and log files of each database. The following permissions
are set whenever the following operations are applied to a database:

Created
Attached
Detached
Modified to add a new file
Backed up
Restored

The permissions prevent the files from being accidentally tampered with if they reside in a directory that has
open permissions.

NOTE
Microsoft SQL Server 2005 Express Edition does not set data and log file permissions.

Examples
A. Creating a database without specifying files
The following example creates the database mytest and creates a corresponding primary and transaction log
file. Because the statement has no <filespec> items, the primary database file is the size of the model database
primary file. The transaction log is set to the larger of these values: 512 KB or 25% of the size of the primary data
file. Because MAXSIZE is not specified, the files can grow to fill all available disk space. This example also
demonstrates how to drop the database named mytest if it exists, before creating the mytest database.

USE master;
GO
IF DB_ID (N'mytest') IS NOT NULL
DROP DATABASE mytest;
GO
CREATE DATABASE mytest;
GO
-- Verify the database files and sizes
SELECT name, size, size*1.0/128 AS [Size in MBs]
FROM sys.master_files
WHERE name = N'mytest';
GO

B. Creating a database that specifies the data and transaction log files
The following example creates the database Sales. Because the keyword PRIMARY is not used, the first file
(Sales_dat) becomes the primary file. Because neither MB nor KB is specified in the SIZE parameter for the
Sales_dat file, it uses MB and is allocated in megabytes. The Sales_log file is allocated in megabytes because
the MB suffix is explicitly stated in the SIZE parameter.

USE master;
GO
CREATE DATABASE Sales
ON
( NAME = Sales_dat,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\saledat.mdf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 5 )
LOG ON
( NAME = Sales_log,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\salelog.ldf',
SIZE = 5MB,
MAXSIZE = 25MB,
FILEGROWTH = 5MB ) ;
GO

C. Creating a database by specifying multiple data and transaction log files


The following example creates the database Archive that has three 100-MB data files and two 100-MB
transaction log files. The primary file is the first file in the list and is explicitly specified with the PRIMARY
keyword. The transaction log files are specified following the LOG ON keywords. Note the extensions used for
the files in the FILENAME option: .mdf is used for primary data files, .ndf is used for the secondary data files,
and .ldf is used for transaction log files. This example places the database on the D: drive instead of with the
master database.

USE master;
GO
CREATE DATABASE Archive
ON
PRIMARY
(NAME = Arch1,
FILENAME = 'D:\SalesData\archdat1.mdf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20),
( NAME = Arch2,
FILENAME = 'D:\SalesData\archdat2.ndf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20),
( NAME = Arch3,
FILENAME = 'D:\SalesData\archdat3.ndf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20)
LOG ON
(NAME = Archlog1,
FILENAME = 'D:\SalesData\archlog1.ldf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20),
(NAME = Archlog2,
FILENAME = 'D:\SalesData\archlog2.ldf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20) ;
GO

D. Creating a database that has filegroups


The following example creates the database Sales that has the following filegroups:
The primary filegroup with the files Spri1_dat and Spri2_dat . The FILEGROWTH increments for these
files are specified as 15% .
A filegroup named SalesGroup1 with the files SGrp1Fi1 and SGrp1Fi2 .
A filegroup named SalesGroup2 with the files SGrp2Fi1 and SGrp2Fi2 .
This example places the data and log files on different disks to improve performance.
USE master;
GO
CREATE DATABASE Sales
ON PRIMARY
( NAME = SPri1_dat,
FILENAME = 'D:\SalesData\SPri1dat.mdf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 15% ),
( NAME = SPri2_dat,
FILENAME = 'D:\SalesData\SPri2dt.ndf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 15% ),
FILEGROUP SalesGroup1
( NAME = SGrp1Fi1_dat,
FILENAME = 'D:\SalesData\SG1Fi1dt.ndf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 5 ),
( NAME = SGrp1Fi2_dat,
FILENAME = 'D:\SalesData\SG1Fi2dt.ndf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 5 ),
FILEGROUP SalesGroup2
( NAME = SGrp2Fi1_dat,
FILENAME = 'D:\SalesData\SG2Fi1dt.ndf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 5 ),
( NAME = SGrp2Fi2_dat,
FILENAME = 'D:\SalesData\SG2Fi2dt.ndf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 5 )
LOG ON
( NAME = Sales_log,
FILENAME = 'E:\SalesLog\salelog.ldf',
SIZE = 5MB,
MAXSIZE = 25MB,
FILEGROWTH = 5MB ) ;
GO

E. Attaching a database
The following example detaches the database Archive created in example D, and then attaches it by using the
FOR ATTACH clause. Archive was defined to have multiple data and log files. However, because the location of
the files has not changed since they were created, only the primary file has to be specified in the FOR ATTACH
clause. Beginning with SQL Server 2005, any full-text files that are part of the database that is being attached
will be attached with the database.

USE master;
GO
sp_detach_db Archive;
GO
CREATE DATABASE Archive
ON (FILENAME = 'D:\SalesData\archdat1.mdf')
FOR ATTACH ;
GO

F. Creating a database snapshot


The following example creates the database snapshot sales_snapshot0600 . Because a database snapshot is
read-only, a log file cannot be specified. In conformance with the syntax, every file in the source database is
specified, and filegroups are not specified.
The source database for this example is the Sales database created in example D.

USE master;
GO
CREATE DATABASE sales_snapshot0600 ON
( NAME = SPri1_dat, FILENAME = 'D:\SalesData\SPri1dat_0600.ss'),
( NAME = SPri2_dat, FILENAME = 'D:\SalesData\SPri2dt_0600.ss'),
( NAME = SGrp1Fi1_dat, FILENAME = 'D:\SalesData\SG1Fi1dt_0600.ss'),
( NAME = SGrp1Fi2_dat, FILENAME = 'D:\SalesData\SG1Fi2dt_0600.ss'),
( NAME = SGrp2Fi1_dat, FILENAME = 'D:\SalesData\SG2Fi1dt_0600.ss'),
( NAME = SGrp2Fi2_dat, FILENAME = 'D:\SalesData\SG2Fi2dt_0600.ss')
AS SNAPSHOT OF Sales ;
GO

G. Creating a database and specifying a collation name and options


The following example creates the database MyOptionsTest. A collation name is specified and the TRUSTWORTHY
and DB_CHAINING options are set to ON.

USE master;
GO
IF DB_ID (N'MyOptionsTest') IS NOT NULL
DROP DATABASE MyOptionsTest;
GO
CREATE DATABASE MyOptionsTest
COLLATE French_CI_AI
WITH TRUSTWORTHY ON, DB_CHAINING ON;
GO
--Verifying collation and option settings.
SELECT name, collation_name, is_trustworthy_on, is_db_chaining_on
FROM sys.databases
WHERE name = N'MyOptionsTest';
GO

H. Attaching a full-text catalog that has been moved


The following example attaches the full-text catalog AdvWksFtCat along with the AdventureWorks2012 data and
log files. In this example, the full-text catalog is moved from its default location to a new location
c:\myFTCatalogs . The data and log files remain in their default locations.

USE master;
GO
--Detach the AdventureWorks2012 database
sp_detach_db AdventureWorks2012;
GO
-- Physically move the full text catalog to the new location.
--Attach the AdventureWorks2012 database and specify the new location of the full-text catalog.
CREATE DATABASE AdventureWorks2012 ON
(FILENAME = 'c:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data\AdventureWorks2012_data.mdf'),
(FILENAME = 'c:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data\AdventureWorks2012_log.ldf'),
(FILENAME = 'c:\myFTCatalogs\AdvWksFtCat')
FOR ATTACH;
GO

I. Creating a database that specifies a row filegroup and two FILESTREAM filegroups
The following example creates the FileStreamDB database. The database is created with one row filegroup and
two FILESTREAM filegroups. Each filegroup contains one file:
FileStreamDB_data contains row data. It contains one file, FileStreamDB_data.mdf with the default path.
FileStreamPhotos contains FILESTREAM data. It contains two FILESTREAM data containers, FSPhotos ,
located at C:\MyFSfolder\Photos and FSPhotos2 , located at D:\MyFSfolder\Photos . It is marked as the
default FILESTREAM filegroup.
FileStreamResumes contains FILESTREAM data. It contains one FILESTREAM data container, FSResumes ,
located at C:\MyFSfolder\Resumes .

USE master;
GO
-- Get the SQL Server data path.
DECLARE @data_path nvarchar(256);
SET @data_path = (SELECT SUBSTRING(physical_name, 1, CHARINDEX(N'master.mdf', LOWER(physical_name)) - 1)
FROM master.sys.master_files
WHERE database_id = 1 AND file_id = 1);

-- Execute the CREATE DATABASE statement.


EXECUTE ('CREATE DATABASE FileStreamDB
ON PRIMARY
(
NAME = FileStreamDB_data
,FILENAME = ''' + @data_path + 'FileStreamDB_data.mdf''
,SIZE = 10MB
,MAXSIZE = 50MB
,FILEGROWTH = 15%
),
FILEGROUP FileStreamPhotos CONTAINS FILESTREAM DEFAULT
(
NAME = FSPhotos
,FILENAME = ''C:\MyFSfolder\Photos''
-- SIZE and FILEGROWTH should not be specified here.
-- If they are specified an error will be raised.
, MAXSIZE = 5000 MB
),
(
NAME = FSPhotos2
, FILENAME = ''D:\MyFSfolder\Photos''
, MAXSIZE = 10000 MB
),
FILEGROUP FileStreamResumes CONTAINS FILESTREAM
(
NAME = FileStreamResumes
,FILENAME = ''C:\MyFSfolder\Resumes''
)
LOG ON
(
NAME = FileStream_log
,FILENAME = ''' + @data_path + 'FileStreamDB_log.ldf''
,SIZE = 5MB
,MAXSIZE = 25MB
,FILEGROWTH = 5MB
)'
);
GO

J. Creating a database that has a FILESTREAM filegroup with multiple files


The following example creates the BlobStore1 database. The database is created with one row filegroup and
one FILESTREAM filegroup, FS . The FILESTREAM filegroup contains two files, FS1 and FS2 . Then the
database is altered by adding a third file, FS3 , to the FILESTREAM filegroup.
USE master;
GO

CREATE DATABASE [BlobStore1]
CONTAINMENT = NONE
ON PRIMARY
(
NAME = N'BlobStore1',
FILENAME = N'C:\BlobStore\BlobStore1.mdf',
SIZE = 100MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 1MB
),
FILEGROUP [FS] CONTAINS FILESTREAM DEFAULT
(
NAME = N'FS1',
FILENAME = N'C:\BlobStore\FS1',
MAXSIZE = UNLIMITED
),
(
NAME = N'FS2',
FILENAME = N'C:\BlobStore\FS2',
MAXSIZE = 100MB
)
LOG ON
(
NAME = N'BlobStore1_log',
FILENAME = N'C:\BlobStore\BlobStore1_log.ldf',
SIZE = 100MB,
MAXSIZE = 1GB,
FILEGROWTH = 1MB
);
GO

ALTER DATABASE [BlobStore1]
ADD FILE
(
NAME = N'FS3',
FILENAME = N'C:\BlobStore\FS3',
MAXSIZE = 100MB
)
TO FILEGROUP [FS];
GO

See Also
ALTER DATABASE (Transact-SQL )
Database Detach and Attach (SQL Server)
DROP DATABASE (Transact-SQL )
EVENTDATA (Transact-SQL )
sp_changedbowner (Transact-SQL )
sp_detach_db (Transact-SQL )
sp_removedbreplication (Transact-SQL )
Database Snapshots (SQL Server)
Move Database Files
Databases
Binary Large Object (Blob) Data (SQL Server)
CREATE DATABASE (Azure SQL Database)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates a new database.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details on all T-SQL behavior changes.

Syntax
CREATE DATABASE database_name [ COLLATE collation_name ]
{
(<edition_options> [, ...n])
}

[ WITH CATALOG_COLLATION = { DATABASE_DEFAULT | SQL_Latin1_General_CP1_CI_AS } ]

<edition_options> ::=
{

MAXSIZE = { 100 MB | 250 MB | 500 MB | 1 … 1024 … 4096 GB }


| ( EDITION = { 'basic' | 'standard' | 'premium' | 'GeneralPurpose' | 'BusinessCritical' }
| SERVICE_OBJECTIVE =
{ 'basic' | 'S0' | 'S1' | 'S2' | 'S3' | 'S4'| 'S6'| 'S7'| 'S9'| 'S12' |
| 'P1' | 'P2' | 'P4'| 'P6' | 'P11' | 'P15'
| 'GP_GEN4_1' | 'GP_GEN4_2' | 'GP_GEN4_4' | 'GP_GEN4_8' | 'GP_GEN4_16' | 'GP_GEN4_24' |
| 'BC_GEN4_1' | 'BC_GEN4_2' | 'BC_GEN4_4' | 'BC_GEN4_8' | 'BC_GEN4_16' | 'BC_GEN4_24' |
| 'GP_GEN5_2' | 'GP_GEN5_4' | 'GP_GEN5_8' | 'GP_GEN5_16' | 'GP_GEN5_24' | 'GP_GEN5_32' | 'GP_GEN5_48' |
'GP_GEN5_80' |
| 'BC_GEN5_2' | 'BC_GEN5_4' | 'BC_GEN5_8' | 'BC_GEN5_16' | 'BC_GEN5_24' | 'BC_GEN5_32' | 'BC_GEN5_48' |
'BC_GEN5_80' |
| { ELASTIC_POOL(name = <elastic_pool_name>) } } )
}

[;]
To copy a database:
CREATE DATABASE database_name
AS COPY OF [source_server_name.] source_database_name
[ ( SERVICE_OBJECTIVE =
{ 'basic' | 'S0' | 'S1' | 'S2' | 'S3' | 'S4'| 'S6'| 'S7'| 'S9'| 'S12' |
| 'GP_GEN4_1' | 'GP_GEN4_2' | 'GP_GEN4_4' | 'GP_GEN4_8' | 'GP_GEN4_16' | 'GP_GEN4_24' |
| 'BC_GEN4_1' | 'BC_GEN4_2' | 'BC_GEN4_4' | 'BC_GEN4_8' | 'BC_GEN4_16' | 'BC_GEN4_24' |
| 'GP_GEN5_2' | 'GP_GEN5_4' | 'GP_GEN5_8' | 'GP_GEN5_16' | 'GP_GEN5_24' | 'GP_GEN5_32' | 'GP_GEN5_48' |
'GP_GEN5_80' |
| 'BC_GEN5_2' | 'BC_GEN5_4' | 'BC_GEN5_8' | 'BC_GEN5_16' | 'BC_GEN5_24' | 'BC_GEN5_32' | 'BC_GEN5_48' |
'BC_GEN5_80' |
| { ELASTIC_POOL(name = <elastic_pool_name>) } } )
]
[;]

Arguments
This syntax diagram demonstrates the supported arguments in Azure SQL Database.
database_name
The name of the new database. This name must be unique on the SQL server, which can host both Azure SQL
Database databases and SQL Data Warehouse databases, and comply with the SQL Server rules for identifiers.
For more information, see Identifiers.
Collation_name
Specifies the default collation for the database. Collation name can be either a Windows collation name or a SQL
collation name. If not specified, the database is assigned the default collation, which is
SQL_Latin1_General_CP1_CI_AS.
For more information about the Windows and SQL collation names, see COLLATE (Transact-SQL).
CATALOG_COLLATION
Specifies the default collation for the metadata catalog. DATABASE_DEFAULT specifies that the metadata catalog
used for system views and system tables be collated to match the default collation for the database. This is the
behavior found in SQL Server.
SQL_Latin1_General_CP1_CI_AS specifies that the metadata catalog used for system views and tables be collated
to a fixed SQL_Latin1_General_CP1_CI_AS collation. This is the default setting on Azure SQL Database if
unspecified.
EDITION
Specifies the service tier of the database. The available values are: 'basic', 'standard', 'premium', 'GeneralPurpose',
and 'BusinessCritical'. Support for 'premiumrs' has been removed. For questions, use this e-mail alias:
premium-rs@microsoft.com.
When EDITION is specified but MAXSIZE is not specified, MAXSIZE is set to the most restrictive size that the
edition supports.
MAXSIZE
Specifies the maximum size of the database. MAXSIZE must be valid for the specified EDITION (service tier).
Following are the supported MAXSIZE values and defaults (D) for the service tiers.
DTU-based model
MAXSIZE BASIC S0-S2 S3-S12 P1-P6 P11-P15

100 MB √ √ √ √ √

250 MB √ √ √ √ √

500 MB √ √ √ √ √

1 GB √ √ √ √ √

2 GB √ (D) √ √ √ √

5 GB N/A √ √ √ √

10 GB N/A √ √ √ √

20 GB N/A √ √ √ √

30 GB N/A √ √ √ √

40 GB N/A √ √ √ √

50 GB N/A √ √ √ √

100 GB N/A √ √ √ √

150 GB N/A √ √ √ √

200 GB N/A √ √ √ √

250 GB N/A √ (D) √ (D) √ √

300 GB N/A N/A √ √ √

400 GB N/A N/A √ √ √

500 GB N/A N/A √ √ (D) √

750 GB N/A N/A √ √ √

1024 GB N/A N/A √ √ √ (D)

From 1024 GB up to 4096 GB in increments of 256 GB*   N/A   N/A   N/A   N/A   √

* P11 and P15 allow MAXSIZE up to 4 TB with 1024 GB being the default size. P11 and P15 can use up to 4 TB of
included storage at no additional charge. In the Premium tier, MAXSIZE greater than 1 TB is currently available in
the following regions: US East2, West US, US Gov Virginia, West Europe, Germany Central, South East Asia,
Japan East, Australia East, Canada Central, and Canada East. For additional details regarding resource limitations
for the DTU-based model, see DTU-based resource limits.
The MAXSIZE value for the DTU-based model, if specified, has to be a valid value shown in the table above for the
service tier specified.
vCore-based model
General Purpose service tier - Generation 4 compute platform

MAXSIZE              GP_GEN4_1   GP_GEN4_2   GP_GEN4_4   GP_GEN4_8   GP_GEN4_16   GP_GEN4_24
Max data size (GB)   1024        1024        1536        3072        4096         4096

General Purpose service tier - Generation 5 compute platform

MAXSIZE              GP_GEN5_2   GP_GEN5_4   GP_GEN5_8   GP_GEN5_16   GP_GEN5_24   GP_GEN5_32   GP_GEN5_48   GP_GEN5_80
Max data size (GB)   1024        1024        1536        3072         4096         4096         4096         4096

Business Critical service tier - Generation 4 compute platform

PERFORMANCE LEVEL    BC_GEN4_1   BC_GEN4_2   BC_GEN4_4   BC_GEN4_8   BC_GEN4_16
Max data size (GB)   1024        1024        1024        1024        1024

Business Critical service tier - Generation 5 compute platform

MAXSIZE              BC_GEN5_2   BC_GEN5_4   BC_GEN5_8   BC_GEN5_16   BC_GEN5_24   BC_GEN5_32   BC_GEN5_48   BC_GEN5_80
Max data size (GB)   1024        1024        1024        1024         2048         4096         4096         4096

If no MAXSIZE value is set when using the vCore model, the default is 32 GB. For additional details regarding
resource limitations for the vCore-based model, see vCore-based resource limits.
The following rules apply to MAXSIZE and EDITION arguments:
If EDITION is specified but MAXSIZE is not specified, the default value for the edition is used. For example, if
the EDITION is set to Standard, and the MAXSIZE is not specified, then the MAXSIZE is automatically set to
250 MB.
If neither MAXSIZE nor EDITION is specified, the EDITION is set to Standard (S0), and MAXSIZE is set to 250
GB.
SERVICE_OBJECTIVE
Specifies the performance level. Available values for service objective are: S0, S1, S2, S3, S4, S6, S7, S9,
S12, P1, P2, P4, P6, P11, P15, GP_GEN4_1, GP_GEN4_2, GP_GEN4_4, GP_GEN4_8, GP_GEN4_16, GP_GEN4_24,
BC_GEN4_1, BC_GEN4_2, BC_GEN4_4, BC_GEN4_8, BC_GEN4_16, BC_GEN4_24, GP_Gen5_2, GP_Gen5_4, GP_Gen5_8,
GP_Gen5_16, GP_Gen5_24, GP_Gen5_32, GP_Gen5_48, GP_Gen5_80, BC_Gen5_2, BC_Gen5_4, BC_Gen5_8, BC_Gen5_16,
BC_Gen5_24, BC_Gen5_32, BC_Gen5_48, BC_Gen5_80.

For service objective descriptions and more information about the size, edition, and service objective
combinations, see Azure SQL Database Service Tiers and Performance Levels, DTU-based resource limits, and
vCore-based resource limits. If the specified SERVICE_OBJECTIVE is not supported by the EDITION, you receive
an error. To change the SERVICE_OBJECTIVE value from one tier to another (for example from S1 to P1), you
must also change the EDITION value. Support for PRS service objectives has been removed. For questions, use
this e-mail alias: premium-rs@microsoft.com.
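A minimal sketch of such a tier change, assuming a hypothetical database named TestDB2:

-- Moving from S1 (standard) to P1 requires changing EDITION as well.
ALTER DATABASE TestDB2
MODIFY ( EDITION = 'premium', SERVICE_OBJECTIVE = 'P1' );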
ELASTIC_POOL (name = <elastic_pool_name>)
To create a new database in an elastic database pool, set the SERVICE_OBJECTIVE of the database to
ELASTIC_POOL and provide the name of the pool. For more information, see Create and manage a SQL
Database elastic database pool (preview).
AS COPY OF [source_server_name.]source_database_name
For copying a database to the same or a different SQL Database server.
source_server_name
The name of the SQL Database server where the source database is located. This parameter is optional when the
source database and the destination database are to be located on the same SQL Database server.

NOTE
The AS COPY OF argument does not support the fully qualified unique domain names. In other words, if your server's fully
qualified domain name is serverName.database.windows.net , use only serverName during database copy.

source_database_name
The name of the database that is to be copied.
Azure SQL Database does not support the following arguments and options when using the CREATE DATABASE
statement:
Parameters related to the physical placement of file, such as <filespec> and <filegroup>
External access options, such as DB_CHAINING and TRUSTWORTHY
Attaching a database
Service broker options, such as ENABLE_BROKER, NEW_BROKER, and
ERROR_BROKER_CONVERSATIONS
Database snapshot
For more information about the arguments and the CREATE DATABASE statement, see CREATE DATABASE.

Remarks
Databases in Azure SQL Database have several default settings that are set when the database is created. For
more information about these default settings, see the list of values in DATABASEPROPERTYEX.
MAXSIZE provides the ability to limit the size of the database. If the size of the database reaches its MAXSIZE, you
receive error code 40544. When this occurs, you cannot insert or update data, or create new objects (such as
tables, stored procedures, views, and functions). However, you can still read and delete data, truncate tables, drop
tables and indexes, and rebuild indexes. You can then update MAXSIZE to a value larger than your current
database size or delete some data to free storage space. There may be as much as a fifteen-minute delay before
you can insert new data.
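For example, a hedged sketch of raising the cap after error 40544; the database name and new size are hypothetical and must be valid for the database's edition:

ALTER DATABASE TestDB2
MODIFY ( MAXSIZE = 500 GB );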
IMPORTANT
The CREATE DATABASE statement must be the only statement in a Transact-SQL batch.

To change the size, edition, or service objective values later, use ALTER DATABASE (Azure SQL Database).
The CATALOG_COLLATION argument is only available during database creation.

Database Copies
Copying a database using the CREATE DATABASE statement is an asynchronous operation. Therefore, a connection to
the SQL Database server is not needed for the full duration of the copy process. The CREATE DATABASE statement
returns control to the user after the entry in sys.databases is created but before the database copy operation is
complete. In other words, the CREATE DATABASE statement returns successfully when the database copy is still in
progress.
Monitoring the copy process on a SQL Database server: Query the percentage_complete or
replication_state_desc columns in the sys.dm_database_copies view, or the state column in the sys.databases view.
The sys.dm_operation_status view can be used as well as it returns the status of database operations including
database copy.
At the time the copy process completes successfully, the destination database is transactionally consistent with the
source database.
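A minimal monitoring sketch, run in the master database of the destination server:

-- Rows appear only while a copy is in flight.
SELECT d.name,
    dc.percentage_complete,
    dc.replication_state_desc
FROM sys.dm_database_copies AS dc
JOIN sys.databases AS d
    ON dc.database_id = d.database_id;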
The following syntax and semantic rules apply to your use of the AS COPY OF argument:
The source server name and the server name for the copy target may be the same or different. When they
are the same, this parameter is optional and the server context of the current session is used by default.
The source and destination database names must be specified, unique, and comply with the SQL Server
rules for identifiers. For more information, see Identifiers.
The CREATE DATABASE statement must be executed within the context of the master database of the SQL
Database server where the new database will be created.
After the copying completes, the destination database must be managed as an independent database. You
can execute the ALTER DATABASE and DROP DATABASE statements against the new database independently of
the source database. You can also copy the new database to another new database.
The source database may continue to be accessed while the database copy is in progress.
For more information, see Create a copy of an Azure SQL database using Transact-SQL.

Permissions
To create a database, a login must be one of the following:
The server-level principal login
The Azure AD administrator for the local Azure SQL Server
A login that is a member of the dbmanager database role
Additional requirements for using CREATE DATABASE ... AS COPY OF syntax: The login executing the
statement on the local server must also be at least the db_owner on the source server. If the login is based
on SQL Server authentication, the login executing the statement on the local server must have a matching
login on the source SQL Database server, with an identical name and password.
Examples
For a quick start tutorial showing you how to connect to an Azure SQL database using SQL Server Management
Studio, see Azure SQL Database: Use SQL Server Management Studio to connect and query data.
Simple Example
A simple example for creating a database.

CREATE DATABASE TestDB1;

Simple Example with Edition


A simple example for creating a standard database.

CREATE DATABASE TestDB2


( EDITION = 'standard' );

Example with Additional Options


An example using multiple options.

CREATE DATABASE hito


COLLATE Japanese_Bushu_Kakusu_100_CS_AS_KS_WS
( MAXSIZE = 500 MB, EDITION = 'standard', SERVICE_OBJECTIVE = 'S1' ) ;

Creating a Copy
An example creating a copy of a database.

CREATE DATABASE escuela


AS COPY OF school;

Creating a Database in an Elastic Pool


Creates a new database in an elastic pool named S3M100:

CREATE DATABASE db1 ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = S3M100 ) ) ;

Creating a Copy of a Database on Another Server


The following example creates a copy of the db_original database, named db_copy in the P2 performance level for
a single database. This is true regardless of whether db_original is in an elastic pool or a performance level for a
single database.

CREATE DATABASE db_copy


AS COPY OF ozabzw7545.db_original ( SERVICE_OBJECTIVE = 'P2' ) ;

The following example creates a copy of the db_original database, named db_copy in an elastic pool named ep1.
This is true regardless of whether db_original is in an elastic pool or a performance level for a single database. If
db_original is in an elastic pool with a different name, then db_copy is still created in ep1.

CREATE DATABASE db_copy


AS COPY OF ozabzw7545.db_original
(SERVICE_OBJECTIVE = ELASTIC_POOL( name = ep1 ) ) ;
Create database with specified catalog collation value
The following example sets the catalog collation to DATABASE_DEFAULT during database creation, which sets the
catalog collation to be the same as the database collation.

CREATE DATABASE TestDB3 COLLATE Japanese_XJIS_140 (MAXSIZE = 100 MB, EDITION = 'basic')
WITH CATALOG_COLLATION = DATABASE_DEFAULT

See also
sys.dm_database_copies (Azure SQL Database)
ALTER DATABASE (Azure SQL Database)
CREATE DATABASE (Azure SQL Data Warehouse)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new database.

Syntax
CREATE DATABASE database_name [ COLLATE collation_name ]
(
[ MAXSIZE = {
250 | 500 | 750 | 1024 | 5120 | 10240 | 20480 | 30720
| 40960 | 51200 | 61440 | 71680 | 81920 | 92160 | 102400
| 153600 | 204800 | 245760
} GB ,
]
EDITION = 'datawarehouse',
SERVICE_OBJECTIVE = {
'DW100' | 'DW200' | 'DW300' | 'DW400' | 'DW500' | 'DW600'
| 'DW1000' | 'DW1200' | 'DW1500' | 'DW2000' | 'DW3000' | 'DW6000'
| 'DW1000c' | 'DW1500c' | 'DW2000c' | 'DW2500c' | 'DW3000c' | 'DW5000c'
| 'DW6000c' | 'DW7500c' | 'DW10000c' | 'DW15000c' | 'DW30000c'
}
)
[;]

Arguments
database_name
The name of the new database. This name must be unique on the SQL server, which can host both Azure SQL
Database databases and SQL Data Warehouse databases, and comply with the SQL Server rules for identifiers.
For more information, see Identifiers.
collation_name
Specifies the default collation for the database. Collation name can be either a Windows collation name or a SQL
collation name. If not specified, the database is assigned the default collation, which is
SQL_Latin1_General_CP1_CI_AS.
For more information about the Windows and SQL collation names, see COLLATE (Transact-SQL).
EDITION
Specifies the service tier of the database. For SQL Data Warehouse use 'datawarehouse' .
MAXSIZE
The default is 245,760 GB (240 TB).
Applies to: Optimized for Elasticity performance tier
The maximum allowable size for the database. The database cannot grow beyond MAXSIZE.
Applies to: Optimized for Compute performance tier
The maximum allowable size for rowstore data in the database. Data stored in rowstore tables, a columnstore
index's deltastore, or a nonclustered index on a clustered columnstore index cannot grow beyond MAXSIZE. Data
compressed into columnstore format does not have a size limit and is not constrained by MAXSIZE.
SERVICE_OBJECTIVE
Specifies the performance level. For more information about service objectives for SQL Data Warehouse, see
Performance Tiers.

General Remarks
Use DATABASEPROPERTYEX (Transact-SQL ) to see the database properties.
Use ALTER DATABASE (Azure SQL Data Warehouse) to change the max size, or service objective values later.
SQL Data Warehouse is set to COMPATIBILITY_LEVEL 130 and cannot be changed. For more details, see
Improved Query Performance with Compatibility Level 130 in Azure SQL Database.

Permissions
Required permissions:
Server level principal login, created by the provisioning process, or
Member of the dbmanager database role.

Error Handling
If the size of the database reaches MAXSIZE you will receive error code 40544. When this occurs, you cannot
insert and update data, or create new objects (such as tables, stored procedures, views, and functions). You can still
read and delete data, truncate tables, drop tables and indexes, and rebuild indexes. You can then update MAXSIZE
to a value larger than your current database size or delete some data to free storage space. There may be as much
as a fifteen-minute delay before you can insert new data.

Limitations and Restrictions


You must be connected to the master database to create a new database.
The CREATE DATABASE statement must be the only statement in a Transact-SQL batch.
You cannot change the database collation after the database is created.

Examples: Azure SQL Data Warehouse


A. Simple example
A simple example for creating a data warehouse database. This creates the database with the smallest max size,
which is 10240 GB; the default collation, which is SQL_Latin1_General_CP1_CI_AS; and the smallest compute
power, which is DW100.

CREATE DATABASE TestDW


(EDITION = 'datawarehouse', SERVICE_OBJECTIVE='DW100');

B. Create a data warehouse database with all the options


An example of creating a 10 terabyte data warehouse using all the options.

CREATE DATABASE TestDW COLLATE Latin1_General_100_CI_AS_KS_WS


(MAXSIZE = 10240 GB, EDITION = 'datawarehouse', SERVICE_OBJECTIVE = 'DW1000');
See Also
ALTER DATABASE (Azure SQL Data Warehouse)
CREATE TABLE (Azure SQL Data Warehouse)
DROP DATABASE (Transact-SQL)
CREATE DATABASE (Parallel Data Warehouse)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates a new database on a Parallel Data Warehouse appliance. Use this statement to create all files associated
with an appliance database and to set maximum size and auto-growth options for the database tables and
transaction log.
Transact-SQL Syntax Conventions (Transact-SQL )

Syntax
CREATE DATABASE database_name
WITH (
[ AUTOGROW = ON | OFF , ]
REPLICATED_SIZE = replicated_size [ GB ] ,
DISTRIBUTED_SIZE = distributed_size [ GB ] ,
LOG_SIZE = log_size [ GB ] )
[;]

Arguments
database_name
The name of the new database. For more information on permitted database names, see "Object Naming Rules"
and "Reserved Database Names" in the Parallel Data Warehouse product documentation.
AUTOGROW = ON | OFF
Specifies whether the replicated_size, distributed_size, and log_size parameters for this database will automatically
grow as needed beyond their specified sizes. Default value is OFF.
If AUTOGROW is ON, replicated_size, distributed_size, and log_size will grow as required (not in blocks of the
initial specified size) with each data insert, update, or other action that requires more storage than has already been
allocated.
If AUTOGROW is OFF, the sizes will not grow automatically. Parallel Data Warehouse will return an error when
attempting an action that requires replicated_size, distributed_size, or log_size to grow beyond their specified value.
AUTOGROW is either ON for all sizes or OFF for all sizes. For example, it is not possible to set AUTOGROW ON
for log_size, but not set it for replicated_size.
replicated_size [ GB ]
A positive number. Sets the size (in integer or decimal gigabytes) for the total space allocated to replicated tables
and corresponding data on each Compute node. For minimum and maximum replicated_size requirements, see
"Minimum and Maximum Values" in the Parallel Data Warehouse product documentation.
If AUTOGROW is ON, replicated tables will be permitted to grow beyond this limit.
If AUTOGROW is OFF, an error will be returned if a user attempts to create a new replicated table, insert data into
an existing replicated table, or update an existing replicated table in a manner that would increase the size beyond
replicated_size.
distributed_size [ GB ]
A positive number. The size, in integer or decimal gigabytes, for the total space allocated to distributed tables (and
corresponding data) across the appliance. For minimum and maximum distributed_size requirements, see
"Minimum and Maximum Values" in the Parallel Data Warehouse product documentation.
If AUTOGROW is ON, distributed tables will be permitted to grow beyond this limit.
If AUTOGROW is OFF, an error will be returned if a user attempts to create a new distributed table, insert data
into an existing distributed table, or update an existing distributed table in a manner that would increase the size
beyond distributed_size.
log_size [ GB ]
A positive number. The size (in integer or decimal gigabytes) for the transaction log across the appliance.
For minimum and maximum log_size requirements, see "Minimum and Maximum Values" in the Parallel Data
Warehouse product documentation.
If AUTOGROW is ON, the log file is permitted to grow beyond this limit. Use the DBCC SHRINKLOG (Azure SQL
Data Warehouse) statement to reduce the size of the log files to their original size.
If AUTOGROW is OFF, an error will be returned to the user for any action that would increase the log size on an
individual Compute node beyond log_size.

Permissions
Requires the CREATE ANY DATABASE permission in the master database, or membership in the sysadmin fixed
server role.
The following example provides the permission to create a database to the database user Fay.

USE master;
GO
GRANT CREATE ANY DATABASE TO [Fay];
GO

General Remarks
Databases are created with database compatibility level 120, which is the compatibility level for SQL Server 2014
(12.x). This ensures the database will be able to use all of the SQL Server 2014 (12.x) functionality that PDW uses.

Limitations and Restrictions


The CREATE DATABASE statement is not allowed in an explicit transaction. For more information, see Statements.
For information on minimum and maximum constraints on databases, see "Minimum and Maximum Values" in
the Parallel Data Warehouse product documentation.
At the time a database is created, there must be enough available free space on each Compute node to allocate the
combined total of the following sizes:
SQL Server database with tables the size of replicated_table_size.
SQL Server database with tables the size of (distributed_table_size / number of Compute nodes).
SQL Server logs the size of (log_size / number of Compute nodes).
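As a hedged worked example, the per-node allocation for the 100 GB / 500 GB / 100 GB database in example A below, assuming a hypothetical 10-node appliance:

-- REPLICATED_SIZE is already per Compute node; the other two are per appliance.
DECLARE @nodes int = 10;  -- hypothetical node count
SELECT 100.0 AS replicated_GB_per_node,        -- REPLICATED_SIZE
    500.0 / @nodes AS distributed_GB_per_node, -- DISTRIBUTED_SIZE / nodes
    100.0 / @nodes AS log_GB_per_node;         -- LOG_SIZE / nodes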

Locking
Takes a shared lock on the DATABASE object.
Metadata

After this operation succeeds, an entry for this database will appear in the sys.databases (Transact-SQL) and
sys.objects (Transact-SQL) metadata views.

Examples: Parallel Data Warehouse


A. Basic database creation examples
The following example creates the database mytest with a storage allocation of 100 GB per Compute node for
replicated tables, 500 GB per appliance for distributed tables, and 100 GB per appliance for the transaction log. In
this example, AUTOGROW is off by default.

CREATE DATABASE mytest


WITH
(REPLICATED_SIZE = 100 GB,
DISTRIBUTED_SIZE = 500 GB,
LOG_SIZE = 100 GB );

The following example creates the database mytest with the same parameters as above, except that AUTOGROW
is turned on. This allows the database to grow outside the specified size parameters.

CREATE DATABASE mytest


WITH
(AUTOGROW = ON,
REPLICATED_SIZE = 100 GB,
DISTRIBUTED_SIZE = 500 GB,
LOG_SIZE = 100 GB);

B. Creating a database with partial gigabyte sizes


The following example creates the database mytest , with AUTOGROW off, a storage allocation of 1.5 GB per
Compute node for replicated tables, 5.25 GB per appliance for distributed tables, and 10 GB per appliance for the
transaction log.

CREATE DATABASE mytest


WITH
(REPLICATED_SIZE = 1.5 GB,
DISTRIBUTED_SIZE = 5.25 GB,
LOG_SIZE = 10 GB);

See Also
ALTER DATABASE (Parallel Data Warehouse)
DROP DATABASE (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION
(Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a database audit specification object using the SQL Server audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions

Syntax
CREATE DATABASE AUDIT SPECIFICATION audit_specification_name
{
FOR SERVER AUDIT audit_name
[ { ADD ( { <audit_action_specification> | audit_action_group_name } )
} [, ...n] ]
[ WITH ( STATE = { ON | OFF } ) ]
}
[ ; ]
<audit_action_specification>::=
{
action [ ,...n ] ON [ class :: ] securable BY principal [ ,...n ]
}

Arguments
audit_specification_name
Is the name of the audit specification.
audit_name
Is the name of the audit to which this specification is applied.
audit_action_specification
Is the specification of actions on securables by principals that should be recorded in the audit.
action
Is the name of one or more database-level auditable actions. For a list of audit actions, see SQL Server Audit
Action Groups and Actions.
audit_action_group_name
Is the name of one or more groups of database-level auditable actions. For a list of audit action groups, see SQL
Server Audit Action Groups and Actions.
class
Is the class name (if applicable) on the securable.
securable
Is the table, view, or other securable object in the database on which to apply the audit action or audit action
group. For more information, see Securables.
principal
Is the name of database principal on which to apply the audit action or audit action group. For more information,
see Principals (Database Engine).
WITH ( STATE = { ON | OFF } )
Enables or disables the audit from collecting records for this audit specification.

Remarks
Database audit specifications are non-securable objects that reside in a given database. When a database audit
specification is created, it is in a disabled state.

Permissions
Users with the ALTER ANY DATABASE AUDIT permission can create database audit specifications and bind them to
any audit.
After a database audit specification is created, it can be viewed by principals with the CONTROL SERVER ,
ALTER ANY DATABASE AUDIT permissions, or the sysadmin account.

Examples
The following example creates a server audit called Payrole_Security_Audit and then a database audit
specification called Audit_Pay_Tables that audits SELECT and INSERT statements by the dbo user, for the
HumanResources.EmployeePayHistory table in the AdventureWorks2012 database.

USE master ;
GO
-- Create the server audit.
CREATE SERVER AUDIT Payrole_Security_Audit
TO FILE ( FILEPATH =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA' ) ;
GO
-- Enable the server audit.
ALTER SERVER AUDIT Payrole_Security_Audit
WITH (STATE = ON) ;
GO
-- Move to the target database.
USE AdventureWorks2012 ;
GO
-- Create the database audit specification.
CREATE DATABASE AUDIT SPECIFICATION Audit_Pay_Tables
FOR SERVER AUDIT Payrole_Security_Audit
ADD (SELECT , INSERT
ON HumanResources.EmployeePayHistory BY dbo )
WITH (STATE = ON) ;
GO

See Also
CREATE SERVER AUDIT (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL )
DROP SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL )
DROP SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
Create a Server Audit and Server Audit Specification
CREATE DATABASE ENCRYPTION KEY (Transact-
SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an encryption key that is used for transparently encrypting a database. For more information about
transparent database encryption, see Transparent Data Encryption (TDE ).
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

CREATE DATABASE ENCRYPTION KEY


WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
ENCRYPTION BY SERVER
{
CERTIFICATE Encryptor_Name |
ASYMMETRIC KEY Encryptor_Name
}
[ ; ]

-- Syntax for Parallel Data Warehouse

CREATE DATABASE ENCRYPTION KEY


WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
ENCRYPTION BY SERVER CERTIFICATE Encryptor_Name
[ ; ]

Arguments
WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
Specifies the encryption algorithm that is used for the encryption key.

NOTE
Beginning with SQL Server 2016, all algorithms other than AES_128, AES_192, and AES_256 are deprecated. To use older
algorithms (not recommended) you must set the database to database compatibility level 120 or lower.

ENCRYPTION BY SERVER CERTIFICATE Encryptor_Name


Specifies the name of the encryptor used to encrypt the database encryption key.
ENCRYPTION BY SERVER ASYMMETRIC KEY Encryptor_Name
Specifies the name of the asymmetric key used to encrypt the database encryption key. In order to encrypt the
database encryption key with an asymmetric key, the asymmetric key must reside on an extensible key
management provider.
Remarks
A database encryption key is required before a database can be encrypted by using Transparent Database
Encryption (TDE ). When a database is transparently encrypted, the whole database is encrypted at the file level,
without any special code modifications. The certificate or asymmetric key that is used to encrypt the database
encryption key must be located in the master system database.
Database encryption statements are allowed only on user databases.
The database encryption key cannot be exported from the database. It is available only to the system, to users who
have debugging permissions on the server, and to users who have access to the certificates that encrypt and
decrypt the database encryption key.
The database encryption key does not have to be regenerated when a database owner (dbo) is changed.
A database encryption key is automatically created for a SQL Database database. You do not need to create a key
using the CREATE DATABASE ENCRYPTION KEY statement.

Permissions
Requires CONTROL permission on the database and VIEW DEFINITION permission on the certificate or
asymmetric key that is used to encrypt the database encryption key.

Examples
For additional examples using TDE, see Transparent Data Encryption (TDE ), Enable TDE on SQL Server Using
EKM, and Extensible Key Management Using Azure Key Vault (SQL Server).
The following example creates a database encryption key by using the AES_256 algorithm, and protects the private
key with a certificate named MyServerCert .

USE AdventureWorks2012;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
GO

See Also
Transparent Data Encryption (TDE )
SQL Server Encryption
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
ALTER DATABASE SET Options (Transact-SQL )
ALTER DATABASE ENCRYPTION KEY (Transact-SQL )
DROP DATABASE ENCRYPTION KEY (Transact-SQL )
sys.dm_database_encryption_keys (Transact-SQL )
CREATE DATABASE SCOPED CREDENTIAL
(Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a database credential. A database credential is not mapped to a server login or database user. The
credential is used by the database to access the external location any time the database is performing an
operation that requires access.
Transact-SQL Syntax Conventions

Syntax
CREATE DATABASE SCOPED CREDENTIAL credential_name
WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]

Arguments
credential_name
Specifies the name of the database scoped credential being created. credential_name cannot start with the
number sign (#). System credentials start with ##.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server. To import a file from Azure
Blob storage using a shared access signature, the identity name must be SHARED ACCESS SIGNATURE. To load data into
SQL DW, any valid value can be used for the identity. For more information about shared access signatures, see
Using Shared Access Signatures (SAS).
SECRET ='secret'
Specifies the secret required for outgoing authentication. SECRET is required to import a file from Azure Blob
storage. To load from Azure Blob storage into SQL DW, the Secret must be the Azure Storage Key.

WARNING
The SAS key value might begin with a '?' (question mark). When you use the SAS key, you must remove the
leading '?'. Otherwise, the credential might fail to authenticate.

Remarks
A database scoped credential is a record that contains the authentication information that is required to connect
to a resource outside SQL Server. Most credentials include a Windows user and password.
Before creating a database scoped credential, the database must have a master key to protect the credential. For
more information, see CREATE MASTER KEY (Transact-SQL).
When IDENTITY is a Windows user, the secret can be the password. The secret is encrypted using the service
master key. If the service master key is regenerated, the secret is re-encrypted using the new service master key.
Information about database scoped credentials is visible in the sys.database_scoped_credentials catalog view.
Here are some applications of database scoped credentials:
SQL Server uses a database scoped credential to access non-public Azure blob storage or Kerberos-
secured Hadoop clusters with PolyBase. To learn more, see CREATE EXTERNAL DATA SOURCE (Transact-
SQL ).
SQL Data Warehouse uses a database scoped credential to access non-public Azure blob storage with
PolyBase. To learn more, see CREATE EXTERNAL DATA SOURCE (Transact-SQL ).
SQL Database uses database scoped credentials for its global query feature. This is the ability to query
across multiple database shards.
SQL Database uses database scoped credentials to write extended event files to Azure blob storage.
SQL Database uses database scoped credentials for elastic pools. For more information, see Tame
explosive growth with elastic databases
BULK INSERT and OPENROWSET use database scoped credentials to access data from Azure blob
storage (see the sketch after this list). For more information, see Examples of Bulk Access to Data in Azure Blob Storage.
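
For example, the following minimal sketch shows a database scoped credential used through a BLOB_STORAGE external data source for OPENROWSET bulk access (SQL Server 2017 or Azure SQL Database); the account, container, SAS token, and file name are placeholders.

-- Sketch only; the URL, SAS secret, and file name are placeholders.
CREATE DATABASE SCOPED CREDENTIAL BlobCred
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<sas_token_without_leading_question_mark>';
GO
CREATE EXTERNAL DATA SOURCE BlobData
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://myaccount.blob.core.windows.net/mycontainer',
CREDENTIAL = BlobCred );
GO
SELECT BulkColumn FROM OPENROWSET(
BULK 'invoices.csv',
DATA_SOURCE = 'BlobData',
SINGLE_CLOB) AS DataFile;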

Permissions
Requires CONTROL permission on the database.

Examples
A. Creating a database scoped credential for your application.
The following example creates the database scoped credential called AppCred . The database scoped credential
contains the Windows user Mary5 and a password.

-- Create a db master key if one does not already exist, using your own password.
CREATE MASTER KEY ENCRYPTION BY PASSWORD='<EnterStrongPasswordHere>';

-- Create a database scoped credential.
CREATE DATABASE SCOPED CREDENTIAL AppCred WITH IDENTITY = 'Mary5',
SECRET = '<EnterStrongPasswordHere>';
GO

B. Creating a database scoped credential for a shared access signature.
The following example creates a database scoped credential that can be used to create an external data source,
which can do bulk operations, such as BULK INSERT and OPENROWSET. Shared Access Signatures cannot be
used with PolyBase in SQL Server, APS or SQL DW.

CREATE DATABASE SCOPED CREDENTIAL MyCredentials
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'QLYMgmSXMklt%2FI1U6DcVrQixnlU5Sgbtk1qDRakUBGs%3D';

C. Creating a database scoped credential for PolyBase Connectivity to Azure Data Lake Store.
The following example creates a database scoped credential that can be used to create an external data source,
which can be used by PolyBase in Azure SQL Data Warehouse.
Azure Data Lake Store uses an Azure Active Directory Application for Service to Service Authentication. Please
create an AAD application and document your client_id, OAuth_2.0_Token_EndPoint, and Key before you try to
create a database scoped credential.

CREATE DATABASE SCOPED CREDENTIAL ADL_User
WITH
IDENTITY = '<client_id>@<OAuth_2.0_Token_EndPoint>',
SECRET = '<key>'
;

More information
Credentials (Database Engine)
ALTER DATABASE SCOPED CREDENTIAL (Transact-SQL)
DROP DATABASE SCOPED CREDENTIAL (Transact-SQL)
sys.database_scoped_credentials
CREATE CREDENTIAL (Transact-SQL)
sys.credentials (Transact-SQL)
CREATE DEFAULT (Transact-SQL)
5/3/2018

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an object called a default. When bound to a column or an alias data type, a default specifies a value to be
inserted into the column to which the object is bound (or into all columns, in the case of an alias data type), when
no value is explicitly supplied during an insert.

IMPORTANT
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature. Instead, use default definitions created using the DEFAULT
keyword of ALTER TABLE or CREATE TABLE.

Transact-SQL Syntax Conventions

Syntax
CREATE DEFAULT [ schema_name . ] default_name
AS constant_expression [ ; ]

Arguments
schema_name
Is the name of the schema to which the default belongs.
default_name
Is the name of the default. Default names must conform to the rules for identifiers. Specifying the default owner
name is optional.
constant_expression
Is an expression that contains only constant values (it cannot include the names of any columns or other database
objects). Any constant, built-in function, or mathematical expression can be used, except those that contain alias
data types. User-defined functions cannot be used.. Enclose character and date constants in single quotation marks
('); monetary, integer, and floating-point constants do not require quotation marks. Binary data must be preceded
by 0x, and monetary data must be preceded by a dollar sign ($). The default value must be compatible with the
column data type.

Remarks
A default name can be created only in the current database. Within a database, default names must be unique by
schema. When a default is created, use sp_bindefault to bind it to a column or to an alias data type.
If the default is not compatible with the column to which it is bound, SQL Server generates an error message
when trying to insert the default value. For example, N/A cannot be used as a default for a numeric column.
If the default value is too long for the column to which it is bound, the value is truncated.
CREATE DEFAULT statements cannot be combined with other Transact-SQL statements in a single batch.
A default must be dropped before creating a new one of the same name, and the default must be unbound by
executing sp_unbindefault before it is dropped.
If a column has both a default and a rule associated with it, the default value must not violate the rule. A default
that conflicts with a rule is never inserted, and SQL Server generates an error message each time it attempts to
insert the default.
When bound to a column, a default value is inserted when:
A value is not explicitly inserted.
Either the DEFAULT VALUES or DEFAULT keywords are used with INSERT to insert default values.
If NOT NULL is specified when creating a column and a default is not created for it, an error message is
generated when a user fails to make an entry in that column. The following table illustrates the relationship
between the existence of a default and the definition of a column as NULL or NOT NULL. The entries in the
table show the result.

COLUMN DEFINITION   NO ENTRY, NO DEFAULT   NO ENTRY, DEFAULT   ENTER NULL, NO DEFAULT   ENTER NULL, DEFAULT

NULL                NULL                   default             NULL                     NULL

NOT NULL            Error                  default             error                    error
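
A minimal sketch of the first row of the table, using a hypothetical default and table:

-- Hypothetical objects for illustration only.
CREATE DEFAULT dflt_city AS 'Seattle';
GO
CREATE TABLE dbo.Customers (Name varchar(40) NOT NULL, City varchar(40) NULL);
GO
EXEC sp_bindefault 'dflt_city', 'dbo.Customers.City';
GO
-- No entry for City: the bound default 'Seattle' is inserted.
INSERT INTO dbo.Customers (Name) VALUES ('Alice');
-- Explicit NULL: NULL is stored; the default does not apply.
INSERT INTO dbo.Customers (Name, City) VALUES ('Bob', NULL);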

To rename a default, use sp_rename. For a report on a default, use sp_help.

Permissions
To execute CREATE DEFAULT, at a minimum, a user must have CREATE DEFAULT permission in the current
database and ALTER permission on the schema in which the default is being created.

Examples
A. Creating a simple character default
The following example creates a character default called unknown .

USE AdventureWorks2012;
GO
CREATE DEFAULT phonedflt AS 'unknown';

B. Binding a default
The following example binds the default created in example A. The default takes effect only if no entry is specified
for the Phone column of the Contact table. Note that omitting any entry is different from explicitly stating NULL
in an INSERT statement.
Because a default named phonedflt does not exist, the following Transact-SQL statement fails. This example is for
illustration only.

USE AdventureWorks2012;
GO
sp_bindefault 'phonedflt', 'Person.PersonPhone.PhoneNumber';
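
To reverse the binding, unbind the default before dropping it, as the Remarks section requires. A minimal sketch, assuming phonedflt was successfully bound:

-- Unbind the default from the column, then drop it.
EXEC sp_unbindefault 'Person.PersonPhone.PhoneNumber';
GO
DROP DEFAULT phonedflt;
GO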
See Also
ALTER TABLE (Transact-SQL)
CREATE RULE (Transact-SQL)
CREATE TABLE (Transact-SQL)
DROP DEFAULT (Transact-SQL)
DROP RULE (Transact-SQL)
Expressions (Transact-SQL)
INSERT (Transact-SQL)
sp_bindefault (Transact-SQL)
sp_help (Transact-SQL)
sp_helptext (Transact-SQL)
sp_rename (Transact-SQL)
sp_unbindefault (Transact-SQL)
CREATE ENDPOINT (Transact-SQL)
5/4/2018

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates endpoints and defines their properties, including the methods available to client applications. For related
permissions information, see GRANT Endpoint Permissions (Transact-SQL).
The syntax for CREATE ENDPOINT can logically be broken into two parts:
The first part starts with AS and ends before the FOR clause.
In this part, you provide information specific to the transport protocol (TCP) and set a listening port
number for the endpoint, as well as the method of endpoint authentication and/or a list of IP addresses (if
any) that you want to restrict from accessing the endpoint.
The second part starts with the FOR clause.
In this part, you define the payload that is supported on the endpoint. The payload can be one of several
supported types: Transact-SQL, service broker, database mirroring. In this part, you also include language-
specific information.

NOTE: Native XML Web Services (SOAP/HTTP endpoints) was removed in SQL Server 2012 (11.x).

Transact-SQL Syntax Conventions

Syntax
CREATE ENDPOINT endPointName [ AUTHORIZATION login ]
[ STATE = { STARTED | STOPPED | DISABLED } ]
AS { TCP } (
<protocol_specific_arguments>
)
FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING } (
<language_specific_arguments>
)

<AS TCP_protocol_specific_arguments> ::=

AS TCP (
LISTENER_PORT = listenerPort
[ [ , ] LISTENER_IP = ALL | ( 4-part-ip ) | ( "ip_address_v6" ) ]
)

<FOR SERVICE_BROKER_language_specific_arguments> ::=

FOR SERVICE_BROKER (
[ AUTHENTICATION = {
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
| CERTIFICATE certificate_name
| WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name
| CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
} ]
[ [ , ] ENCRYPTION = { DISABLED | { { SUPPORTED | REQUIRED }
[ ALGORITHM { AES | RC4 | AES RC4 | RC4 AES } ] }
]
[ [ , ] MESSAGE_FORWARDING = { ENABLED | DISABLED } ]
[ [ , ] MESSAGE_FORWARD_SIZE = forward_size ]
)

<FOR DATABASE_MIRRORING_language_specific_arguments> ::=

FOR DATABASE_MIRRORING (
[ AUTHENTICATION = {
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
| CERTIFICATE certificate_name
| WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name
| CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
} ]
[ [ , ] ENCRYPTION = { DISABLED | { { SUPPORTED | REQUIRED }
[ ALGORITHM { AES | RC4 | AES RC4 | RC4 AES } ] }
]
[ , ] ROLE = { WITNESS | PARTNER | ALL }
)

Arguments
endPointName
Is the assigned name for the endpoint you are creating. Use when updating or deleting the endpoint.
AUTHORIZATION login
Specifies a valid SQL Server or Windows login that is assigned ownership of the newly created endpoint object. If
AUTHORIZATION is not specified, by default, the caller becomes owner of the newly created object.
To assign ownership by specifying AUTHORIZATION, the caller must have IMPERSONATE permission on the
specified login.
To reassign ownership, see ALTER ENDPOINT (Transact-SQL).
STATE = { STARTED | STOPPED | DISABLED }
Is the state of the endpoint when it is created. If the state is not specified when the endpoint is created, STOPPED
is the default.
STARTED
Endpoint is started and is actively listening for connections.
DISABLED
Endpoint is disabled. In this state, the server listens to port requests but returns errors to clients.
STOPPED
Endpoint is stopped. In this state, the server does not listen to the endpoint port or respond to any attempted
requests to use the endpoint.
To change the state, use ALTER ENDPOINT (Transact-SQL).
AS { TCP }
Specifies the transport protocol to use.
FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING }
Specifies the payload type.
Currently, there are no Transact-SQL language-specific arguments to pass in the <language_specific_arguments>
parameter.
TCP Protocol Option
The following arguments apply only to the TCP protocol option.
LISTENER_PORT =listenerPort
Specifies the port number listened to for connections by the service broker TCP/IP protocol. By convention, 4022
is used but any number between 1024 and 32767 is valid.
LISTENER_IP = ALL | (4 -part-ip ) | ( "ip_address_v6" )
Specifies the IP address that the endpoint will listen on. The default is ALL. This means that the listener will accept
a connection on any valid IP address.
If you configure database mirroring with an IP address instead of a fully-qualified domain name (
ALTER DATABASE SET PARTNER = partner_IP_address or ALTER DATABASE SET WITNESS = witness_IP_address ), you have
to specify LISTENER_IP =IP_address instead of LISTENER_IP=ALL when you create mirroring endpoints.
SERVICE_BROKER and DATABASE_MIRRORING Options
The following AUTHENTICATION and ENCRYPTION arguments are common to the SERVICE_BROKER and
DATABASE_MIRRORING options.

NOTE
For options that are specific to SERVICE_BROKER, see "SERVICE_BROKER Options," later in this section. For options that are
specific to DATABASE_MIRRORING, see "DATABASE_MIRRORING Options," later in this section.

AUTHENTICATION = <authentication_options>
Specifies the TCP/IP authentication requirements for connections for this endpoint. The default is WINDOWS.
The supported authentication methods include NTLM, Kerberos, or both.

IMPORTANT
All mirroring connections on a server instance use a single database mirroring endpoint. Any attempt to create an additional
database mirroring endpoint will fail.

<authentication_options> ::=
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
Specifies that the endpoint is to connect using Windows Authentication protocol to authenticate the endpoints.
This is the default.
If you specify an authentication method (NTLM or KERBEROS), that method is always used as the authentication
protocol. The default value, NEGOTIATE, causes the endpoint to use the Windows negotiation protocol to choose
either NTLM or Kerberos.
CERTIFICATE certificate_name
Specifies that the endpoint is to authenticate the connection using the certificate specified by certificate_name to
establish identity for authorization. The far endpoint must have a certificate with the public key matching the
private key of the specified certificate.
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name
Specifies that endpoint is to try to connect by using Windows Authentication and, if that attempt fails, to then try
using the specified certificate.
CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
Specifies that endpoint is to try to connect by using the specified certificate and, if that attempt fails, to then try
using Windows Authentication.
ENCRYPTION = { DISABLED | SUPPORTED | REQUIRED } [ALGORITHM { AES | RC4 | AES RC4 | RC4 AES } ]
Specifies whether encryption is used in the process. The default is REQUIRED.
DISABLED
Specifies that data sent over a connection is not encrypted.
SUPPORTED
Specifies that the data is encrypted only if the opposite endpoint specifies either SUPPORTED or REQUIRED.
REQUIRED
Specifies that connections to this endpoint must use encryption. Therefore, to connect to this endpoint, another
endpoint must have ENCRYPTION set to either SUPPORTED or REQUIRED.
Optionally, you can use the ALGORITHM argument to specify the form of encryption used by the endpoint, as
follows:
AES
Specifies that the endpoint must use the AES algorithm. This is the default in SQL Server 2016 (13.x) and later.
RC4
Specifies that the endpoint must use the RC4 algorithm. This is the default through SQL Server 2014 (12.x).

NOTE
The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128
when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of the AES
algorithms instead. In SQL Server 2012 (11.x) and later versions, material encrypted using RC4 or RC4_128 can be
decrypted in any compatibility level.

AES RC4
Specifies that the two endpoints will negotiate for an encryption algorithm with this endpoint giving preference to
the AES algorithm.
RC4 AES
Specifies that the two endpoints will negotiate for an encryption algorithm with this endpoint giving preference to
the RC4 algorithm.
NOTE
The RC4 algorithm is deprecated. This feature will be removed in a future version of Microsoft SQL Server. Do not use this
feature in new development work, and modify applications that currently use this feature as soon as possible. We
recommend that you use AES.

If both endpoints specify both algorithms but in different orders, the endpoint accepting the connection wins.
SERVICE_BROKER Options
The following arguments are specific to the SERVICE_BROKER option.
MESSAGE_FORWARDING = { ENABLED | DISABLED }
Determines whether messages received by this endpoint that are for services located elsewhere will be forwarded.
ENABLED
Forwards messages if a forwarding address is available.
DISABLED
Discards messages for services located elsewhere. This is the default.
MESSAGE_FORWARD_SIZE =forward_size
Specifies the maximum amount of storage in megabytes to allocate for the endpoint to use when storing
messages that are to be forwarded.
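
For example, a Service Broker endpoint that combines certificate authentication, required encryption, and message forwarding might look like the following sketch; the port, certificate name, and forward size are placeholders, and BrokerCert must already exist in the master database.

CREATE ENDPOINT broker_endpoint
STATE = STARTED
AS TCP ( LISTENER_PORT = 4022 )
FOR SERVICE_BROKER (
AUTHENTICATION = CERTIFICATE BrokerCert,
ENCRYPTION = REQUIRED,
MESSAGE_FORWARDING = ENABLED,
MESSAGE_FORWARD_SIZE = 10 );
GO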
DATABASE_MIRRORING Options
The following argument is specific to the DATABASE_MIRRORING option.
ROLE = { WITNESS | PARTNER | ALL }
Specifies the database mirroring role or roles that the endpoint supports.
WITNESS
Enables the endpoint to perform in the role of a witness in the mirroring process.

NOTE
For SQL Server 2005 Express Edition, WITNESS is the only option available.

PARTNER
Enables the endpoint to perform in the role of a partner in the mirroring process.
ALL
Enables the endpoint to perform in the role of both a witness and a partner in the mirroring process.
For more information about these roles, see Database Mirroring (SQL Server).

NOTE
There is no default port for DATABASE_MIRRORING.

Remarks
ENDPOINT DDL statements cannot be executed inside a user transaction. ENDPOINT DDL statements do not
fail even if an active snapshot isolation level transaction is using the endpoint being altered.
Requests can be executed against an ENDPOINT by the following:
Members of sysadmin fixed server role
The owner of the endpoint
Users or groups that have been granted CONNECT permission on the endpoint

Permissions
Requires CREATE ENDPOINT permission, or membership in the sysadmin fixed server role. For more
information, see GRANT Endpoint Permissions (Transact-SQL).

Example
Creating a database mirroring endpoint
The following example creates a database mirroring endpoint. The endpoint uses port number 7022 , although
any available port number would work. The endpoint is configured to use Windows Authentication using only
Kerberos. The ENCRYPTION option is configured to the nondefault value of SUPPORTED to support encrypted or
unencrypted data. The endpoint is being configured to support both the partner and witness roles.

CREATE ENDPOINT endpoint_mirroring
STATE = STARTED
AS TCP ( LISTENER_PORT = 7022 )
FOR DATABASE_MIRRORING (
AUTHENTICATION = WINDOWS KERBEROS,
ENCRYPTION = SUPPORTED,
ROLE=ALL);
GO

See also
ALTER ENDPOINT (Transact-SQL)
Choose an Encryption Algorithm
DROP ENDPOINT (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE EVENT NOTIFICATION (Transact-SQL)
5/3/2018

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an object that sends information about a database or server event to a service broker service. Event
notifications are created only by using Transact-SQL statements.
Transact-SQL Syntax Conventions

Syntax
CREATE EVENT NOTIFICATION event_notification_name
ON { SERVER | DATABASE | QUEUE queue_name }
[ WITH FAN_IN ]
FOR { event_type | event_group } [ ,...n ]
TO SERVICE 'broker_service' , { 'broker_instance_specifier' | 'current database' }
[ ; ]

Arguments
event_notification_name
Is the name of the event notification. An event notification name must comply with the rules for identifiers and
must be unique within the scope in which it is created: SERVER, DATABASE, or object_name.
SERVER
Applies the scope of the event notification to the current instance of SQL Server. If specified, the notification fires
whenever the specified event in the FOR clause occurs anywhere in the instance of SQL Server.

NOTE
This option is not available in a contained database.

DATABASE
Applies the scope of the event notification to the current database. If specified, the notification fires whenever the
specified event in the FOR clause occurs in the current database.
QUEUE
Applies the scope of the notification to a specific queue in the current database. QUEUE can be specified only if
FOR QUEUE_ACTIVATION or FOR BROKER_QUEUE_DISABLED is also specified.
queue_name
Is the name of the queue to which the event notification applies. queue_name can be specified only if QUEUE is
specified.
WITH FAN_IN
Instructs SQL Server to send only one message per event to any specified service for all event notifications that:
Are created on the same event.
Are created by the same principal (as identified by the same SID ).
Specify the same service and broker_instance_specifier.
Specify WITH FAN_IN.
For example, three event notifications are created. All event notifications specify FOR ALTER_TABLE, WITH
FAN_IN, the same TO SERVICE clause, and are created by the same SID. When an ALTER TABLE statement
is run, the messages that are created by these three event notifications are merged into one. Therefore, the
target service receives only one message of the event.
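
A minimal sketch of one such notification; the NotifyService target service is assumed to exist (see the examples later in this topic):

CREATE EVENT NOTIFICATION ddl_alter_table_fanin
ON DATABASE
WITH FAN_IN
FOR ALTER_TABLE
TO SERVICE 'NotifyService', 'current database';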
event_type
Is the name of an event type that causes the event notification to execute. event_type can be a Transact-SQL
DDL event type, a SQL Trace event type, or a Service Broker event type. For a list of qualifying Transact-SQL
DDL event types, see DDL Events. Service Broker event types are QUEUE_ACTIVATION and
BROKER_QUEUE_DISABLED. For more information, see Event Notifications.
event_group
Is the name of a predefined group of Transact-SQL or SQL Trace event types. An event notification can fire
after execution of any event that belongs to an event group. For a list of DDL event groups, the Transact-
SQL events they cover, and the scope at which they can be defined, see DDL Event Groups.
event_group also acts as a macro, when the CREATE EVENT NOTIFICATION statement finishes, by adding
the event types it covers to the sys.events catalog view.
' broker_service '
Specifies the target service that receives the event instance data. SQL Server opens one or more
conversations to the target service for the event notification. This service must honor the same SQL Server
Events message type and contract that is used to send the message.
The conversations remain open until the event notification is dropped. Certain errors could cause the
conversations to close earlier. Ending some or all conversations explicitly might prevent the target service
from receiving more messages.
{ 'broker_instance_specifier' | 'current database' }
Specifies a service broker instance against which broker_service is resolved. The value for a specific service
broker can be acquired by querying the service_broker_guid column of the sys.databases catalog view.
Use 'current database' to specify the service broker instance in the current database. 'current database'
is a case-insensitive string literal.

NOTE
This option is not available in a contained database.
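
For example, this simple query returns the 'broker_instance_specifier' value for the current database:

SELECT service_broker_guid
FROM sys.databases
WHERE database_id = DB_ID();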

Remarks
Service Broker includes a message type and contract specifically for event notifications. Therefore, a Service Broker
initiating service does not have to be created because one already exists that specifies the following contract name:
http://schemas.microsoft.com/SQL/Notifications/PostEventNotification

The target service that receives event notifications must honor this preexisting contract.

IMPORTANT
Service Broker dialog security should be configured for event notifications that send messages to a service broker on a
remote server. Dialog security must be configured manually according to the full security model. For more information, see
Configure Dialog Security for Event Notifications.
If an event transaction that activates a notification is rolled back, the sending of the event notification is also rolled
back. Event notifications do not fire by an action defined in a trigger when the transaction is committed or rolled
back inside the trigger. Because trace events are not bound by transactions, event notifications based on trace
events are sent regardless of whether the transaction that activates them is rolled back.
If the conversation between the server and the target service is broken after an event notification fires, an error is
reported and the event notification is dropped.
The event transaction that originally started the notification is not affected by the success or failure of the sending
of the event notification.
Any failure to send an event notification is logged.

Permissions
To create an event notification that is scoped to the database (ON DATABASE ), requires CREATE DATABASE DDL
EVENT NOTIFICATION permission in the current database.
To create an event notification on a DDL statement that is scoped to the server (ON SERVER ), requires CREATE
DDL EVENT NOTIFICATION permission in the server.
To create an event notification on a trace event, requires CREATE TRACE EVENT NOTIFICATION permission in
the server.
To create an event notification that is scoped to a queue, requires ALTER permission on the queue.

Examples
NOTE
In Examples A and B below, the GUID in the TO SERVICE 'NotifyService' clause ('8140a771-3c4b-4479-8ac0-
81008ab17984') is specific to the computer on which the example was set up. For that instance, that was the GUID for the
AdventureWorks2012 database.
To copy and run these examples, you need to replace this GUID with one from your computer and SQL Server instance. As
explained in the Arguments section above, you can acquire the 'broker_instance_specifier' by querying the
service_broker_guid column of the sys.databases catalog view.

A. Creating an event notification that is server scoped
The following example creates the required objects to set up a target service using Service Broker. The target
service references the message type and contract of the initiating service specifically for event notifications. Then
an event notification is created on that target service that sends a notification whenever an Object_Created trace
event happens on the instance of SQL Server.
--Create a queue to receive messages.
CREATE QUEUE NotifyQueue ;
GO
--Create a service on the queue that references
--the event notifications contract.
CREATE SERVICE NotifyService
ON QUEUE NotifyQueue
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO
--Create a route on the service to define the address
--to which Service Broker sends messages for the service.
CREATE ROUTE NotifyRoute
WITH SERVICE_NAME = 'NotifyService',
ADDRESS = 'LOCAL';
GO
--Create the event notification.
CREATE EVENT NOTIFICATION log_ddl1
ON SERVER
FOR Object_Created
TO SERVICE 'NotifyService',
'8140a771-3c4b-4479-8ac0-81008ab17984' ;

B. Creating an event notification that is database scoped
The following example creates an event notification on the same target service as the previous example. The event
notification fires after an ALTER_TABLE event occurs on the AdventureWorks2012 sample database.

CREATE EVENT NOTIFICATION Notify_ALTER_T1
ON DATABASE
FOR ALTER_TABLE
TO SERVICE 'NotifyService',
'8140a771-3c4b-4479-8ac0-81008ab17984';

C. Getting information about an event notification that is server scoped
The following example queries the sys.server_event_notifications catalog view for metadata about event
notification log_ddl1 that was created with server scope.

SELECT * FROM sys.server_event_notifications
WHERE name = 'log_ddl1';

D. Getting information about an event notification that is database scoped
The following example queries the sys.event_notifications catalog view for metadata about event notification
Notify_ALTER_T1 that was created with database scope.

SELECT * FROM sys.event_notifications
WHERE name = 'Notify_ALTER_T1';

See Also
Event Notifications
DROP EVENT NOTIFICATION (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.event_notifications (Transact-SQL)
sys.server_event_notifications (Transact-SQL)
sys.events (Transact-SQL)
sys.server_events (Transact-SQL)
CREATE EVENT SESSION (Transact-SQL)
5/3/2018

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an Extended Events session that identifies the source of the events, the event session targets, and the event
session options.
Transact-SQL Syntax Conventions

Syntax
CREATE EVENT SESSION event_session_name
ON SERVER
{
<event_definition> [ ,...n]
[ <event_target_definition> [ ,...n] ]
[ WITH ( <event_session_options> [ ,...n] ) ]
}
;

<event_definition>::=
{
ADD EVENT [event_module_guid].event_package_name.event_name
[ ( {
[ SET { event_customizable_attribute = <value> [ ,...n] } ]
[ ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n] } ) ]
[ WHERE <predicate_expression> ]
} ) ]
}

<predicate_expression> ::=
{
[ NOT ] <predicate_factor> | {( <predicate_expression> ) }
[ { AND | OR } [ NOT ] { <predicate_factor> | ( <predicate_expression> ) } ]
[ ,...n ]
}

<predicate_factor>::=
{
<predicate_leaf> | ( <predicate_expression> )
}

<predicate_leaf>::=
{
<predicate_source_declaration> { = | <> | != | > | >= | < | <= } <value>
| [event_module_guid].event_package_name.predicate_compare_name ( <predicate_source_declaration>, <value>
)
}

<predicate_source_declaration>::=
{
event_field_name | ( [event_module_guid].event_package_name.predicate_source_name )
}

<value>::=
{
number | 'string'
}

<event_target_definition>::=
{
ADD TARGET [event_module_guid].event_package_name.target_name
[ ( SET { target_parameter_name = <value> [ ,...n] } ) ]
}

<event_session_options>::=
{
[ MAX_MEMORY = size [ KB | MB ] ]
[ [,] EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS | ALLOW_MULTIPLE_EVENT_LOSS | NO_EVENT_LOSS } ]
[ [,] MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE } ]
[ [,] MAX_EVENT_SIZE = size [ KB | MB ] ]
[ [,] MEMORY_PARTITION_MODE = { NONE | PER_NODE | PER_CPU } ]
[ [,] TRACK_CAUSALITY = { ON | OFF } ]
[ [,] STARTUP_STATE = { ON | OFF } ]
}

Arguments
event_session_name
Is the user-defined name for the event session. event_session_name is alphanumeric, can be up to 128 characters,
must be unique within an instance of SQL Server, and must comply with the rules for Identifiers.
ADD EVENT [ event_module_guid ].event_package_name.event_name
Is the event to associate with the event session, where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the action object.
event_name is the event object.
Events appear in the sys.dm_xe_objects view as object_type 'event'.
SET { event_customizable_attribute= <value> [ ,...n] }
Allows customizable attributes for the event to be set. Customizable attributes appear in the
sys.dm_xe_object_columns view as column_type 'customizable' and object_name = event_name.
ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n] })
Is the action to associate with the event session, where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the action object.
action_name is the action object.
Actions appear in the sys.dm_xe_objects view as object_type 'action'.
WHERE <predicate_expression> Specifies the predicate expression used to determine if an event should be
processed. If <predicate_expression> is true, the event is processed further by the actions and targets for
the session. If <predicate_expression> is false, the event is dropped by the session before being processed
by the actions and targets for the session. Predicate expressions are limited to 3000 characters, which limits
string arguments.
event_field_name
Is the name of the event field that identifies the predicate source.
[event_module_guid].event_package_name.predicate_source_name
Is the name of the global predicate source where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the predicate object.
predicate_source_name is defined in the sys.dm_xe_objects view as object_type 'pred_source'.
[event_module_guid].event_package_name.predicate_compare_name
Is the name of the predicate object to associate with the event, where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the predicate object.
predicate_compare_name is a global source defined in the sys.dm_xe_objects view as object_type
'pred_compare'.
number
Is any numeric type including decimal. Limitations are the lack of available physical memory or a number
that is too large to be represented as a 64-bit integer.
'string'
Either an ANSI or Unicode string as required by the predicate compare. No implicit string type conversion
is performed for the predicate compare functions. Passing the wrong type results in an error.
ADD TARGET [event_module_guid].event_package_name.target_name
Is the target to associate with the event session, where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the action object.
target_name is the target. Targets appear in sys.dm_xe_objects view as object_type 'target'.
SET { target_parameter_name= <value> [, ...n] }
Sets a target parameter. Target parameters appear in the sys.dm_xe_object_columns view as column_type
'customizable' and object_name = target_name.

IMPORTANT
If you are using the ring buffer target, we recommend that you set the max_memory target parameter to 2048 kilobytes
(KB) to help avoid possible data truncation of the XML output. For more information about when to use the different target
types, see SQL Server Extended Events Targets.

WITH ( <event_session_options> [ ,...n] ) Specifies options to use with the event session.
MAX_MEMORY =size [ KB | MB ]
Specifies the maximum amount of memory to allocate to the session for event buffering. The default is 4 MB. size
is a whole number and can be a kilobyte (KB) or a megabyte (MB) value.
EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS | ALLOW_MULTIPLE_EVENT_LOSS |
NO_EVENT_LOSS }
Specifies the event retention mode to use for handling event loss.
ALLOW_SINGLE_EVENT_LOSS
An event can be lost from the session. A single event is only dropped when all the event buffers are full. Losing a
single event when event buffers are full allows for acceptable SQL Server performance characteristics, while
minimizing the loss of data in the processed event stream.
ALLOW_MULTIPLE_EVENT_LOSS
Full event buffers containing multiple events can be lost from the session. The number of events lost is dependent
on the memory size allocated to the session, the partitioning of the memory, and the size of the events in the
buffer. This option minimizes performance impact on the server when event buffers are quickly filled, but large
numbers of events can be lost from the session.
NO_EVENT_LOSS
No event loss is allowed. This option ensures that all events raised will be retained. Using this option forces all
tasks that fire events to wait until space is available in an event buffer. This may cause detectable performance
issues while the event session is active. User connections may stall while waiting for events to be flushed from the
buffer.
MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE }
Specifies the amount of time that events will be buffered in memory before being dispatched to event session
targets. By default, this value is set to 30 seconds.
seconds SECONDS
The time, in seconds, to wait before starting to flush buffers to targets. seconds is a whole number. The minimum
latency value is 1 second. However, 0 can be used to specify INFINITE latency.
INFINITE
Flush buffers to targets only when the buffers are full, or when the event session closes.

NOTE
MAX_DISPATCH_LATENCY = 0 SECONDS is equivalent to MAX_DISPATCH_LATENCY = INFINITE.

MAX_EVENT_SIZE =size [ KB | MB ]
Specifies the maximum allowable size for events. MAX_EVENT_SIZE should only be set to allow single events
larger than MAX_MEMORY; setting it to less than MAX_MEMORY will raise an error. size is a whole number and
can be a kilobyte (KB) or a megabyte (MB) value. If size is specified in kilobytes, the minimum allowable size is 64
KB. When MAX_EVENT_SIZE is set, two buffers of size are created in addition to MAX_MEMORY. This means that
the total memory used for event buffering is MAX_MEMORY + 2 * MAX_EVENT_SIZE.
MEMORY_PARTITION_MODE = { NONE | PER_NODE | PER_CPU }
Specifies the location where event buffers are created.
NONE
A single set of buffers is created within the SQL Server instance.
PER_NODE
A set of buffers is created for each NUMA node.
PER_CPU
A set of buffers is created for each CPU.
TRACK_CAUSALITY = { ON | OFF }
Specifies whether or not causality is tracked. If enabled, causality allows related events on different server
connections to be correlated together.
STARTUP_STATE = { ON | OFF }
Specifies whether or not to start this event session automatically when SQL Server starts.

NOTE
If STARTUP_STATE = ON, the event session will only start if SQL Server is stopped and then restarted.

ON
The event session is started at startup.
OFF
The event session is not started at startup.

Remarks
The order of precedence for the logical operators is NOT (highest), followed by AND, followed by OR.

Permissions
Requires the ALTER ANY EVENT SESSION permission.

Examples
The following example shows how to create an event session named test_session . This example adds two events
and uses the Event Tracing for Windows target.

IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='test_session')
    DROP EVENT SESSION test_session ON SERVER;
GO
CREATE EVENT SESSION test_session
ON SERVER
ADD EVENT sqlos.async_io_requested,
ADD EVENT sqlserver.lock_acquired
ADD TARGET package0.etw_classic_sync_target
(SET default_etw_session_logfile_path = N'C:\demo\traces\sqletw.etl' )
WITH (MAX_MEMORY=4MB, MAX_EVENT_SIZE=4MB);
GO
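
The following variation is a sketch that pairs a WHERE predicate with the ring buffer target and the recommended 2048 KB max_memory setting; the session name and database_id value are illustrative.

CREATE EVENT SESSION lock_watch
ON SERVER
ADD EVENT sqlserver.lock_acquired
    (WHERE sqlserver.database_id = 5)
ADD TARGET package0.ring_buffer
    (SET max_memory = 2048)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);
GO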

See Also
ALTER EVENT SESSION (Transact-SQL)
DROP EVENT SESSION (Transact-SQL)
sys.server_event_sessions (Transact-SQL)
sys.dm_xe_objects (Transact-SQL)
sys.dm_xe_object_columns (Transact-SQL)
CREATE EXTERNAL DATA SOURCE (Transact-SQL)
5/16/2018

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an external data source for PolyBase or Elastic Database queries. Depending on the scenario, the syntax
differs significantly. An external data source created for PolyBase cannot be used for Elastic Database queries, and
an external data source created for Elastic Database queries cannot be used for PolyBase.

NOTE
PolyBase is supported only on SQL Server 2016 (or higher), Azure SQL Data Warehouse, and Parallel Data Warehouse.
Elastic Database queries are supported only on Azure SQL Database v12 or later.

For PolyBase scenarios, the external data source is either a Hadoop File System (HDFS ), an Azure storage blob
container, or Azure Data Lake Store. For more information, see Get started with PolyBase.
For Elastic Database query scenarios, the external source is either a shard map manager (on Azure SQL
Database), or a remote database (on Azure SQL Database). Use sp_execute_remote (Azure SQL Database) after
creating an external data source. For more information, see Elastic Database query.
The Azure Blob storage external data source supports BULK INSERT and OPENROWSET syntax, and is different
than Azure Blob storage for PolyBase.
Transact-SQL Syntax Conventions

Syntax
-- PolyBase only: Hadoop cluster as data source
-- (on SQL Server 2016)
CREATE EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://NameNode_URI[:port]'
[, RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI[:port]' ]
[, CREDENTIAL = credential_name ]
)
[;]

-- PolyBase only: Azure Storage Blob as data source
-- (on SQL Server 2016 and Azure SQL Data Warehouse)
CREATE EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = HADOOP,
LOCATION = 'wasb[s]://container@account_name.blob.core.windows.net'
[, CREDENTIAL = credential_name ]
)
[;]

-- PolyBase only: Azure Data Lake Store
-- (on Azure SQL Data Warehouse)
CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
WITH (
TYPE = HADOOP,
LOCATION = 'adl://<AzureDataLake account_name>.azuredatalake.net',
CREDENTIAL = AzureStorageCredential
);

-- PolyBase only: Hadoop cluster as data source
-- (on Parallel Data Warehouse)
CREATE EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://NameNode_URI[:port]'
[, JOB_TRACKER_LOCATION = 'JobTracker_URI[:port]' ]
)
[;]

-- PolyBase only: Azure Storage Blob as data source
-- (on Parallel Data Warehouse)
CREATE EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = HADOOP,
LOCATION = 'wasb[s]://container@account_name.blob.core.windows.net'
)
[;]

-- Elastic Database query only: a shard map manager as data source
-- (only on Azure SQL Database)
CREATE EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = SHARD_MAP_MANAGER,
LOCATION = '<server_name>.database.windows.net',
DATABASE_NAME = '<ElasticDatabase_ShardMapManagerDb>',
CREDENTIAL = <ElasticDBQueryCred>,
SHARD_MAP_NAME = '<ShardMapName>'
)
[;]

-- Elastic Database query only: a remote database on Azure SQL Database as data source
-- (only on Azure SQL Database)
CREATE EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = RDBMS,
LOCATION = '<server_name>.database.windows.net',
DATABASE_NAME = '<Remote_Database_Name>',
CREDENTIAL = <SQL_Credential>
)
[;]

-- Bulk operations only: Azure Storage Blob as data source
-- (on SQL Server 2017 or later, and Azure SQL Database)
CREATE EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://storage_account_name.blob.core.windows.net/container_name'
[, CREDENTIAL = credential_name ]
)

Arguments
data_source_name Specifies the user-defined name for the data source. The name must be unique within the
database in SQL Server, Azure SQL Database, and Azure SQL Data Warehouse. The name must be unique
within the server in Parallel Data Warehouse.
TYPE = [ HADOOP | SHARD_MAP_MANAGER | RDBMS | BLOB_STORAGE ]
Specifies the data source type. Use HADOOP when the external data source is Hadoop or Azure Storage blob
for Hadoop. Use SHARD_MAP_MANAGER when creating an external data source for Elastic Database query
for sharding on Azure SQL Database. Use RDBMS with external data sources for cross-database queries with
Elastic Database query on Azure SQL Database. Use BLOB_STORAGE when performing bulk operations using
BULK INSERT or OPENROWSET with SQL Server 2017 (14.x).
LOCATION = <location_path>
HADOOP
For HADOOP, specifies the Uniform Resource Indicator (URI) for a Hadoop cluster.
LOCATION = 'hdfs://NameNode_URI[:port]'
NameNode_URI: The machine name or IP address of the Hadoop cluster Namenode.
port: The Namenode IPC port. This is indicated by the fs.default.name configuration parameter in Hadoop. If
the value is not specified, 8020 will be used by default.
Example: LOCATION = 'hdfs://10.10.10.10:8020'
For Azure blob storage with Hadoop, specifies the URI for connecting to Azure blob storage.
LOCATION = 'wasb[s]://container@account_name.blob.core.windows.net'
wasb[s]: Specifies the protocol for Azure blob storage. The [s] is optional and specifies a secure SSL connection;
data sent from SQL Server is securely encrypted through the SSL protocol. We strongly recommend using
'wasbs' instead of 'wasb'. Note that the location can use asv[s] instead of wasb[s]. The asv[s] syntax is
deprecated and will be removed in a future release.
container: Specifies the name of the Azure blob storage container. To specify the root container of a domain’s
storage account, use the domain name instead of the container name. Root containers are read-only, so data
cannot be written back to the container.
account_name: The fully qualified domain name (FQDN ) of the Azure storage account.
Example: LOCATION = 'wasbs://dailylogs@myaccount.blob.core.windows.net/'
For Azure Data Lake Store, location specifies the URI for connecting to your Azure Data Lake Store.
SHARD_MAP_MANAGER
For SHARD_MAP_MANAGER, specifies the logical server name that hosts the shard map manager in Azure
SQL Database or a SQL Server database on an Azure virtual machine.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred
WITH IDENTITY = '<username>',
SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc WITH
(TYPE = SHARD_MAP_MANAGER,
LOCATION = '<server_name>.database.windows.net',
DATABASE_NAME = 'ElasticScaleStarterKit_ShardMapManagerDb',
CREDENTIAL = ElasticDBQueryCred,
SHARD_MAP_NAME = 'CustomerIDShardMap'
) ;

For a step-by-step tutorial, see Getting started with elastic queries for sharding (horizontal partitioning).
RDBMS
For RDBMS, specifies the logical server name of the remote database in Azure SQL Database.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

CREATE DATABASE SCOPED CREDENTIAL SQL_Credential
WITH IDENTITY = '<username>',
SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc WITH
(TYPE = RDBMS,
LOCATION = '<server_name>.database.windows.net',
DATABASE_NAME = 'Customers',
CREDENTIAL = SQL_Credential
) ;

For a step-by-step tutorial on RDBMS, see Getting started with cross-database queries (vertical partitioning).
BLOB_STORAGE
For bulk operations only, LOCATION must be a valid URL to the Azure Blob storage container. Do not put /, a
file name, or shared access signature parameters at the end of the LOCATION URL.
The credential used must be created using SHARED ACCESS SIGNATURE as the identity. For more information on
shared access signatures, see Using Shared Access Signatures (SAS). For an example of accessing blob storage,
see example F of BULK INSERT.
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI[:port]'
Specifies the Hadoop resource manager location. When specified, the query optimizer can make a cost-based
decision to pre-process data for a PolyBase query by using Hadoop’s computation capabilities with MapReduce.
Called predicate pushdown, this can significantly reduce the volume of data transferred between Hadoop and
SQL, and therefore improve query performance.
When this is not specified, pushing compute to Hadoop is disabled for PolyBase queries.
If the port is not specified, the default value is determined using the current setting for ‘hadoop connectivity’
configuration.

HADOOP CONNECTIVITY DEFAULT RESOURCE MANAGER PORT

1 50300

2 50300

3 8021

4 8032

5 8050

6 8032

7 8050

For a complete list of Hadoop distributions and versions supported by each connectivity value, see PolyBase
Connectivity Configuration (Transact-SQL).
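
The 'hadoop connectivity' value itself is set with sp_configure and requires a restart of SQL Server to take effect; a sketch using value 7 from the table above:

EXEC sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
RECONFIGURE;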
IMPORTANT
The RESOURCE_MANAGER_LOCATION value is a string and is not validated when you create the external data source.
Entering an incorrect value can cause future delays when accessing the location.

Hadoop examples:
Hortonworks HDP 2.0, 2.1, 2.2, 2.3 on Windows:

RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:8032'

Hortonworks HDP 1.3 on Windows:

RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:50300'

Hortonworks HDP 2.0, 2.1, 2.2, 2.3 on Linux:

RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:8050'

Hortonworks HDP 1.3 on Linux:

RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:50300'

Cloudera 4.3 on Linux:

RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:8021'

Cloudera 5.1 - 5.11 on Linux:

RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:8032'

CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source. For an example, see
C. Create an Azure blob storage external data source. To create a credential, see CREATE CREDENTIAL
(Transact-SQL). Note that CREDENTIAL is not required for public data sets that allow anonymous
access.
DATABASE_NAME = 'QueryDatabaseName'
The name of the database that functions as the shard map manager (for SHARD_MAP_MANAGER ) or
the remote database (for RDBMS ).
SHARD_MAP_NAME = 'ShardMapName'
For SHARD_MAP_MANAGER only. The name of the shard map. For more information about creating a
shard map, see Getting started with Elastic Database query

PolyBase-specific notes
For a complete list of supported external data sources, see PolyBase Connectivity Configuration (Transact-SQL).
To use PolyBase, you need to create these three objects (a sketch follows this list):
An external data source.
An external file format.
An external table that references the external data source and external file format.
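
A minimal sketch of the trio, reusing the MyHadoopCluster data source from Example A below; the file format, column list, and folder path are placeholders.

CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH ( FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS ( FIELD_TERMINATOR = '|' ) );
GO
CREATE EXTERNAL TABLE dbo.ClickStream (
url varchar(50),
event_date date,
user_ip varchar(50) )
WITH ( LOCATION = '/webdata/',
DATA_SOURCE = MyHadoopCluster,
FILE_FORMAT = TextFileFormat );
GO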

Permissions
Requires CONTROL permission on database in SQL DW, SQL Server, APS 2016, and SQL DB.

IMPORTANT
In previous releases of PDW, create external data source required ALTER ANY EXTERNAL DATA SOURCE permissions.

Error Handling
A runtime error will occur if the external Hadoop data sources are inconsistent about having
RESOURCE_MANAGER_LOCATION defined. That is, you cannot create two external data sources that
reference the same Hadoop cluster and provide a resource manager location for one but not for the other.
The SQL engine does not verify the existence of the external data source when it creates the external data
source object. If the data source does not exist during query execution, an error will occur.

General Remarks
For PolyBase, the external data source is database-scoped in SQL Server and SQL Data Warehouse. It is
server-scoped in Parallel Data Warehouse.
For PolyBase, when RESOURCE_MANAGER_LOCATION or JOB_TRACKER_LOCATION is defined, the query
optimizer will consider optimizing each query by initiating a map reduce job on the external Hadoop source and
pushing down computation. This is entirely a cost-based decision.
To ensure successful PolyBase queries in the event of Hadoop NameNode failover, consider using a virtual IP
address for the NameNode of the Hadoop cluster. If you do not use a virtual IP address for the Hadoop
NameNode, in the event of a Hadoop NameNode failover you will have to ALTER EXTERNAL DATA SOURCE
object to point to the new location.

Limitations and Restrictions


All data sources defined on the same Hadoop cluster location must use the same setting for
RESOURCE_MANAGER_LOCATION or JOB_TRACKER_LOCATION. If there is inconsistency, a runtime error
will occur.
If the Hadoop cluster is set up with a name and the external data source uses the IP address for the cluster
location, PolyBase must still be able to resolve the cluster name when the data source is used. To resolve the
name, you must enable a DNS forwarder.

Locking
Takes a shared lock on the EXTERNAL DATA SOURCE object.

Examples: SQL Server 2016
A. Create external data source to reference Hadoop
To create an external data source to reference your Hortonworks or Cloudera Hadoop cluster, specify the
machine name or IP address of the Hadoop Namenode and port.

CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://10.10.10.10:8050'
);

B. Create external data source to reference Hadoop with pushdown enabled
Specify the RESOURCE_MANAGER_LOCATION option to enable push-down computation to Hadoop for
PolyBase queries. Once enabled, PolyBase uses a cost-based decision to determine whether the query
computation should be pushed to Hadoop or all the data should be moved to process the query in SQL Server.

CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://10.10.10.10:8020',
RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050'
);

C. Create external data source to reference Kerberos-secured Hadoop
To verify if the Hadoop cluster is Kerberos-secured, check the value of hadoop.security.authentication property
in Hadoop core-site.xml. To reference a Kerberos-secured Hadoop cluster, you must specify a database scoped
credential that contains your Kerberos username and password. The database master key is used to encrypt the
database scoped credential secret.

-- Create a database master key if one does not already exist, using your own password.
-- This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo';

-- Create a database scoped credential with Kerberos user name and password.
CREATE DATABASE SCOPED CREDENTIAL HadoopUser1
WITH IDENTITY = '<hadoop_user_name>',
SECRET = '<hadoop_password>';

-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://10.10.10.10:8050',
RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050',
CREDENTIAL = HadoopUser1
);

D. Create external data source to reference Azure blob storage
To create an external data source to reference your Azure blob storage container, specify the Azure blob storage
URI and a database scoped credential that contains your Azure storage account key.
In this example, the external data source is an Azure blob storage container called dailylogs under the Azure
storage account named myaccount. The Azure storage external data source is for data transfer only; it does
not support predicate pushdown.
This example shows how to create the database scoped credential for authentication to Azure storage. Specify
the Azure storage account key in the database credential secret. Specify any string in the database scoped
credential identity; it is not used for authentication to Azure storage. Then, the credential is used in the
statement that creates an external data source.
-- Create a database master key if one does not already exist, using your own password.
-- This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo';

-- Create a database scoped credential with Azure storage account key as the secret.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'myaccount',
SECRET = '<azure_storage_account_key>';

-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyAzureStorage WITH (
TYPE = HADOOP,
LOCATION = 'wasbs://dailylogs@myaccount.blob.core.windows.net/',
CREDENTIAL = AzureStorageCredential
);

Examples: Azure SQL Database
E. Create a Shard map manager external data source
To create an external data source to reference a SHARD_MAP_MANAGER, specify the logical server name that
hosts the shard map manager in Azure SQL Database or a SQL Server database on an Azure virtual machine.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred
WITH IDENTITY = '<username>',
SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc
WITH (
TYPE = SHARD_MAP_MANAGER,
LOCATION = '<server_name>.database.windows.net',
DATABASE_NAME = 'ElasticScaleStarterKit_ShardMapManagerDb',
CREDENTIAL = ElasticDBQueryCred,
SHARD_MAP_NAME = 'CustomerIDShardMap'
);

F. Create an RDBMS external data source
To create an external data source to reference an RDBMS, specify the logical server name of the remote
database in Azure SQL Database.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

CREATE DATABASE SCOPED CREDENTIAL SQL_Credential
WITH IDENTITY = '<username>',
SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc
WITH (
TYPE = RDBMS,
LOCATION = '<server_name>.database.windows.net',
DATABASE_NAME = 'Customers',
CREDENTIAL = SQL_Credential
);
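
Once the RDBMS external data source exists, it can be referenced by an external table for Elastic Database query. A minimal sketch, assuming the remote Customers database contains a CustomerInformation table with these columns (the table and column names are illustrative):

CREATE EXTERNAL TABLE [dbo].[CustomerInformation]
( [CustomerID] [int] NOT NULL,
[CustomerName] [varchar](50) NOT NULL )
WITH ( DATA_SOURCE = MyElasticDBQueryDataSrc );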

Examples: Azure SQL Data Warehouse


G. Create external data source to reference Azure Data Lake Store
Azure Data Lake Store connectivity is based on your ADLS URI and your Azure Active Directory application's
service principal. Documentation for creating this application can be found at Data Lake Store authentication
using Active Directory.

-- If you do not have a master key on your DW, you will need to create one.
CREATE MASTER KEY;

-- These values come from your Azure Active Directory Application used to authenticate to ADLS
CREATE DATABASE SCOPED CREDENTIAL ADLUser
WITH IDENTITY = '<clientID>@<OAuth2.0TokenEndPoint>',
SECRET = '<KEY>' ;

CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
WITH (TYPE = HADOOP,
LOCATION = '<ADLS URI>'
);

Examples: Parallel Data Warehouse


H. Create external data source to reference Hadoop with pushdown enabled
Specify the JOB_TRACKER_LOCATION option to enable push-down computation to Hadoop for PolyBase
queries. Once enabled, PolyBase uses a cost-based decision to determine whether the query computation
should be pushed to Hadoop or all the data should be moved to process the query in SQL Server.

CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://10.10.10.10:8020',
JOB_TRACKER_LOCATION = '10.10.10.10:8050'
);

I. Create external data source to reference Azure blob storage


To create an external data source to reference your Azure blob storage container, specify the Azure blob storage
URI as the external data source LOCATION. Add your Azure storage account key to PDW core-site.xml file for
authentication.
In this example, the external data source is an Azure blob storage container called dailylogs under Azure
storage account named myaccount. The Azure storage external data source is for data transfer only and does
not support predicate pushdown.

CREATE EXTERNAL DATA SOURCE MyAzureStorage WITH (
TYPE = HADOOP,
LOCATION = 'wasbs://dailylogs@myaccount.blob.core.windows.net/'
);

Examples: Bulk Operations


J. Create an external data source for bulk operations retrieving data from Azure Blob storage.
Applies to: SQL Server 2017 (14.x).
Use the following data source for bulk operations using BULK INSERT or OPENROWSET. The credential
must be created using SHARED ACCESS SIGNATURE as the identity, as sketched below. For more information on
shared access signatures, see Using Shared Access Signatures (SAS).
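
A sketch of the credential this data source assumes (the SAS token is a placeholder; the secret is the SAS token without the leading '?'):

CREATE DATABASE SCOPED CREDENTIAL AccessAzureInvoices
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<sas_token>';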
CREATE EXTERNAL DATA SOURCE MyAzureInvoices
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://newinvoices.blob.core.windows.net/week3',
CREDENTIAL = AccessAzureInvoices
);

To see this example in use, see BULK INSERT.

See Also
ALTER EXTERNAL DATA SOURCE (Transact-SQL)
CREATE EXTERNAL FILE FORMAT (Transact-SQL)
CREATE EXTERNAL TABLE (Transact-SQL)
CREATE EXTERNAL TABLE AS SELECT (Transact-SQL)
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
sys.external_data_sources (Transact-SQL)
CREATE EXTERNAL LIBRARY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Uploads R packages to a database from the specified byte stream or file path.
This statement serves as a generic mechanism for the database administrator to upload artifacts needed for any
new external language runtimes (R, Python, Java, etc.) and OS platforms supported by SQL Server.
Currently only the R language and Windows platform are supported. Support for Python and Linux is planned for
a later release.

Syntax
CREATE EXTERNAL LIBRARY library_name
[ AUTHORIZATION owner_name ]
FROM <file_spec> [ ,...2 ]
WITH ( LANGUAGE = 'R' )
[ ; ]

<file_spec> ::=
{
(CONTENT = { <client_library_specifier> | <library_bits> }
[, PLATFORM = WINDOWS ])
}

<client_library_specifier> :: =
'[\\computer_name\]share_name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'
| '<relative_path_in_external_data_source>'

<library_bits> :: =
{ varbinary_literal | varbinary_expression }

Arguments
library_name
Libraries are added to the database, scoped to the user. Library names must be unique within the context of a
specific user or owner. For example, two users RUser1 and RUser2 can each separately upload
the R library ggplot2. However, if RUser1 wanted to upload a newer version of ggplot2, the second instance
must be named differently or must replace the existing library.
Library names cannot be arbitrarily assigned; the library name must be the same as the name required to load
the library from R.
owner_name
Specifies the name of the user or role that owns the external library. If not specified, ownership is given to the
current user.
Libraries owned by the database owner are considered global to the database and runtime. In other words,
database owners can create libraries that contain a common set of libraries or packages that are shared by many
users. When an external library is created by a user other than the dbo user, the external library is private to that
user only.
When the user RUser1 executes an R script, the value of libPath can contain multiple paths. The first path is
always the path to the shared library created by the database owner. The second part of libPath specifies the path
containing packages uploaded individually by RUser1.
file_spec
Specifies the content of the package for a specific platform. Only one file artifact per platform is supported.
The file can be specified in the form of a local path or a network path.
Optionally, an OS platform for the file can be specified. Only one file artifact or content is permitted for each OS
platform for a specific language or runtime.
library_bits
Specifies the content of the package as a hex literal, similar to assemblies.
This option is useful if you need to create a library or alter an existing library (and have the required permissions
to do so), but the file system on the server is restricted and you cannot copy the library files to a location that the
server can access.
PLATFORM = WINDOWS
Specifies the platform for the content of the library. The value defaults to the host platform on which SQL Server
is running, so the user doesn't have to specify it. The value is required in cases where multiple platforms are
supported, or the user needs to specify a different platform.
In SQL Server 2017, Windows is the only supported platform.
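
For illustration, a minimal sketch that states the platform explicitly (the path and library name are placeholders):

CREATE EXTERNAL LIBRARY customPackage
FROM (CONTENT = 'C:\packages\customPackage.zip', PLATFORM = WINDOWS)
WITH (LANGUAGE = 'R');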

Remarks
For the R language, when using a file, packages must be prepared in the form of zipped archive files with the .ZIP
extension for Windows. Currently, only the Windows platform is supported.
The CREATE EXTERNAL LIBRARY statement uploads the library bits to the database. The library is installed when a
user runs an external script using sp_execute_external_script and calls the package or library.
Libraries uploaded to the instance can be either public or private. If the library is created by a member of dbo, the
library is public and can be shared with all users. Otherwise, the library is private to that user only.

Permissions
Requires the CREATE EXTERNAL LIBRARY permission. By default, any user who is dbo or who is a member of the
db_owner role has permission to create an external library. For all other users, you must explicitly grant them
permission using a GRANT statement, specifying CREATE EXTERNAL LIBRARY as the privilege.
To modify a library requires the separate permission, ALTER ANY EXTERNAL LIBRARY.
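
For example, to grant both privileges to a hypothetical user RUser1:

GRANT CREATE EXTERNAL LIBRARY TO RUser1;
GRANT ALTER ANY EXTERNAL LIBRARY TO RUser1;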

Examples
A. Add an external library to a database
The following example adds an external library called customPackage to a database.

CREATE EXTERNAL LIBRARY customPackage
FROM (CONTENT = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\customPackage.zip')
WITH (LANGUAGE = 'R');

After the library has been successfully uploaded to the instance, a user executes the sp_execute_external_script
procedure to install the library.

EXEC sp_execute_external_script
@language =N'R',
@script=N'library(customPackage)'

B. Installing packages with dependencies


If the package you want to install has any dependencies, it is critical that you analyze both first-level and second-
level dependencies, and ensure that all required packages are available before you try to install the target package.
For example, assume you want to install a new package, packageA :
packageA has a dependency on packageB
packageB has a dependency on packageC

To succeed in installing packageA , you must create libraries for packageB and packageC at the same time that you
add packageA to SQL Server. Be sure to check the required package versions as well.
In practice, package dependencies for popular packages are usually much more complicated than this simple
example. For example, ggplot2 might require over 30 packages, and those packages might require additional
packages that are not available on the server. Any missing package or wrong package version can cause
installation to fail.
Because it can be difficult to determine all dependencies just from looking at the package manifest, we recommend
that you use a package such as miniCRAN to identify all packages that might be required to complete installation
successfully.
Upload the target package and its dependencies. All files must be in a folder that is accessible to the server.

CREATE EXTERNAL LIBRARY packageA
FROM (CONTENT = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\packageA.zip')
WITH (LANGUAGE = 'R');
GO

CREATE EXTERNAL LIBRARY packageB
FROM (CONTENT = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\packageB.zip')
WITH (LANGUAGE = 'R');
GO

CREATE EXTERNAL LIBRARY packageC
FROM (CONTENT = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\packageC.zip')
WITH (LANGUAGE = 'R');
GO

Install the required packages first.


If a required package has already been uploaded to the instance, you need not add it again. Just be sure to
check whether the existing package is the correct version.
The required packages packageC and packageB are installed, in the correct order, when
sp_execute_external_script is first run to install package packageA .
However, if any required package is not available, installation of the target package packageA fails.
EXEC sp_execute_external_script
@language =N'R',
@script=N'
# load the desired package packageA
library(packageA)
print(packageVersion("packageA"))
'

C. Create a library from a byte stream


If you do not have the ability to save the package files in a location on the server, you can pass the package
contents in a variable. The following example creates a library by passing the bits as a hexadecimal literal.

CREATE EXTERNAL LIBRARY customLibrary FROM (CONTENT = 0xabc123) WITH (LANGUAGE = 'R');

NOTE
This code sample only demonstrates the syntax; the binary value in CONTENT = has been truncated for readability and does
not create a working library. The actual contents of the binary variable would be much longer.

D. Change an existing package library


The ALTER EXTERNAL LIBRARY DDL statement can be used to add new library content or modify existing library
content. To modify an existing library requires the ALTER ANY EXTERNAL LIBRARY permission.
For more information, see ALTER EXTERNAL LIBRARY.

See also
ALTER EXTERNAL LIBRARY (Transact-SQL)
DROP EXTERNAL LIBRARY (Transact-SQL)
sys.external_library_files
sys.external_libraries
CREATE EXTERNAL FILE FORMAT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an External File Format object defining external data stored in Hadoop, Azure Blob Storage, or Azure
Data Lake Store. Creating an external file format is a prerequisite for creating an External Table. By creating an
External File Format, you specify the actual layout of the data referenced by an external table.
PolyBase supports the following file formats:
Delimited Text
Hive RCFile
Hive ORC
Parquet
To create an External Table, see CREATE EXTERNAL TABLE (Transact-SQL).
Transact-SQL Syntax Conventions

Syntax
-- Create an external file format for PARQUET files.
CREATE EXTERNAL FILE FORMAT file_format_name
WITH (
FORMAT_TYPE = PARQUET
[ , DATA_COMPRESSION = {
'org.apache.hadoop.io.compress.SnappyCodec'
| 'org.apache.hadoop.io.compress.GzipCodec' }
]);

--Create an external file format for ORC files.


CREATE EXTERNAL FILE FORMAT file_format_name
WITH (
FORMAT_TYPE = ORC
[ , DATA_COMPRESSION = {
'org.apache.hadoop.io.compress.SnappyCodec'
| 'org.apache.hadoop.io.compress.DefaultCodec' }
]);

--Create an external file format for RCFILE.


CREATE EXTERNAL FILE FORMAT file_format_name
WITH (
FORMAT_TYPE = RCFILE,
SERDE_METHOD = {
'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
| 'org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe'
}
[ , DATA_COMPRESSION = 'org.apache.hadoop.io.compress.DefaultCodec' ]);

--Create an external file format for DELIMITED TEXT files.


CREATE EXTERNAL FILE FORMAT file_format_name
WITH (
FORMAT_TYPE = DELIMITEDTEXT
[ , FORMAT_OPTIONS ( <format_options> [ ,...n ] ) ]
[ , DATA_COMPRESSION = {
'org.apache.hadoop.io.compress.GzipCodec'
| 'org.apache.hadoop.io.compress.DefaultCodec'
}
]);

<format_options> ::=
{
FIELD_TERMINATOR = field_terminator
| STRING_DELIMITER = string_delimiter
| First_Row = integer -- ONLY AVAILABLE SQL DW
| DATE_FORMAT = datetime_format
| USE_TYPE_DEFAULT = { TRUE | FALSE }
| Encoding = {'UTF8' | 'UTF16'}
}

Arguments
file_format_name
Specifies a name for the external file format.
FORMAT_TYPE = [ PARQUET | ORC | RCFILE | DELIMITEDTEXT ]
Specifies the format of the external data.
PARQUET Specifies a Parquet format.
ORC
Specifies an Optimized Row Columnar (ORC) format. This option requires Hive version 0.11 or higher on
the external Hadoop cluster. In Hadoop, the ORC file format offers better compression and performance
than the RCFILE file format.
RCFILE (in combination with SERDE_METHOD = SERDE_method) Specifies a Record Columnar file
format (RcFile). This option requires you to specify a Hive Serializer and Deserializer (SerDe) method. This
requirement is the same if you use Hive/HiveQL in Hadoop to query RC files. Note, the SerDe method is
case-sensitive.
Examples of specifying RCFile with the two SerDe methods that PolyBase supports.
FORMAT_TYPE = RCFILE, SERDE_METHOD =
'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
FORMAT_TYPE = RCFILE, SERDE_METHOD =
'org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe'
DELIMITEDTEXT Specifies a text format with column delimiters, also called field terminators.
FIELD_TERMINATOR = field_terminator
Applies only to delimited text files. The field terminator specifies one or more characters that mark the end
of each field (column) in the text-delimited file. The default is the pipe character '|'. For guaranteed support,
we recommend using one or more ASCII characters.
Examples:
FIELD_TERMINATOR = '|'
FIELD_TERMINATOR = ' '
FIELD_TERMINATOR = '\t'
FIELD_TERMINATOR = '~|~'
STRING_DELIMITER = string_delimiter
Specifies the field terminator for data of type string in the text-delimited file. The string delimiter is one or
more characters in length and is enclosed with single quotes. The default is the empty string "". For
guaranteed support, we recommend using one or more ASCII characters.
Examples:
STRING_DELIMITER = '"'
STRING_DELIMITER = '0x22' -- Double quote hex
STRING_DELIMITER = '*'
STRING_DELIMITER = ','
STRING_DELIMITER = '0x7E0x7E' -- Two tildes (for example, ~~)
FIRST_ROW = First_row_int
Specifies the row number that is read first in all files during a PolyBase load. This parameter can take
values 1-15. If the value is set to two, the first row in every file (the header row) is skipped when the data is
loaded. Rows are skipped based on the existence of row terminators (\r\n, \r, \n). When this option is used
for export, rows are added to the data to make sure the file can be read with no data loss. If the value is set
to >2, the first row exported is the column names of the external table.
DATE_FORMAT = datetime_format
Specifies a custom format for all date and time data that might appear in a delimited text file. If the source
file uses default datetime formats, this option isn't necessary. Only one custom datetime format is allowed
per file; you can't specify more than one. However, you can use more than one datetime format if each one
is the default format for its respective data type in the external table definition.
PolyBase only uses the custom date format for importing the data. It doesn't use the custom format for writing
data to an external file.
When DATE_FORMAT isn't specified or is the empty string, PolyBase uses the following default formats:
DateTime: 'yyyy-MM-dd HH:mm:ss'
SmallDateTime: 'yyyy-MM-dd HH:mm'
Date: 'yyyy-MM-dd'
DateTime2: 'yyyy-MM-dd HH:mm:ss'
DateTimeOffset: 'yyyy-MM-dd HH:mm:ss'
Time: 'HH:mm:ss'
Example date formats are in the following table:
Notes about the table:
Year, month, and day can have a variety of formats and orders. The table shows only the ymd format.
Month can have one or two digits, or three characters. Day can have one or two digits. Year can have two
or four digits.
Milliseconds (fffffff) are not required.
AM/PM (tt) isn't required. The default is AM.

DATE TYPE | EXAMPLE | DESCRIPTION
DateTime | DATE_FORMAT = 'yyyy-MM-dd HH:mm:ss.fff' | In addition to year, month, and day, this date format includes 00-24 hours, 00-59 minutes, 00-59 seconds, and 3 digits for milliseconds.
DateTime | DATE_FORMAT = 'yyyy-MM-dd hh:mm:ss.ffftt' | In addition to year, month, and day, this date format includes 00-12 hours, 00-59 minutes, 00-59 seconds, 3 digits for milliseconds, and AM, am, PM, or pm.
SmallDateTime | DATE_FORMAT = 'yyyy-MM-dd HH:mm' | In addition to year, month, and day, this date format includes 00-23 hours and 00-59 minutes.
SmallDateTime | DATE_FORMAT = 'yyyy-MM-dd hh:mmtt' | In addition to year, month, and day, this date format includes 00-11 hours, 00-59 minutes, no seconds, and AM, am, PM, or pm.
Date | DATE_FORMAT = 'yyyy-MM-dd' | Year, month, and day. No time element is included.
Date | DATE_FORMAT = 'yyyy-MMM-dd' | Year, month, and day. When month is specified with 3 M's, the input value is one of the strings Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, or Dec.
DateTime2 | DATE_FORMAT = 'yyyy-MM-dd HH:mm:ss.fffffff' | In addition to year, month, and day, this date format includes 00-23 hours, 00-59 minutes, 00-59 seconds, and 7 digits for milliseconds.
DateTime2 | DATE_FORMAT = 'yyyy-MM-dd hh:mm:ss.ffffffftt' | In addition to year, month, and day, this date format includes 00-11 hours, 00-59 minutes, 00-59 seconds, 7 digits for milliseconds, and AM, am, PM, or pm.
DateTimeOffset | DATE_FORMAT = 'yyyy-MM-dd HH:mm:ss.fffffff zzz' | In addition to year, month, and day, this date format includes 00-23 hours, 00-59 minutes, 00-59 seconds, 7 digits for milliseconds, and the timezone offset, which you put in the input file as {+|-}HH:ss. For example, since Los Angeles time without daylight savings is 8 hours behind UTC, a value of -08:00 in the input file specifies the timezone for Los Angeles.
DateTimeOffset | DATE_FORMAT = 'yyyy-MM-dd hh:mm:ss.ffffffftt zzz' | In addition to year, month, and day, this date format includes 00-11 hours, 00-59 minutes, 00-59 seconds, 7 digits for milliseconds, AM, am, PM, or pm, and the timezone offset. See the description in the previous row.
Time | DATE_FORMAT = 'HH:mm:ss' | There is no date value, only 00-23 hours, 00-59 minutes, and 00-59 seconds.

All supported date formats:

Each supported format is a date part in one of these orders, followed by the time part for the data type:

Date part (any of):
[M[M]]M-[d]d-[yy]yy
[M[M]]M-[yy]yy-[d]d
[yy]yy-[M[M]]M-[d]d
[yy]yy-[d]d-[M[M]]M
[d]d-[M[M]]M-[yy]yy
[d]d-[yy]yy-[M[M]]M

Time part (by data type):
DATETIME: HH:mm:ss[.fff] or hh:mm:ss[.fff][tt]
SMALLDATETIME: HH:mm[:00] or hh:mm[:00][tt]
DATE: no time part
DATETIME2: HH:mm:ss[.fffffff] or hh:mm:ss[.fffffff][tt]
DATETIMEOFFSET: HH:mm:ss[.fffffff] zzz or hh:mm:ss[.fffffff][tt] zzz

Details:
To separate month, day, and year values, you can use '-', '/', or '.'. For simplicity, the table uses only the '-'
separator.
To specify the month as text, use three or more characters. Months with one or two characters are
interpreted as a number.
To separate time values, use the ':' symbol.
Letters enclosed in square brackets are optional.
The letters 'tt' designate [AM|PM|am|pm]. AM is the default. When 'tt' is specified, the hour value (hh)
must be in the range of 0 to 12.
The letters 'zzz' designate the time zone offset for the system's current time zone in the format {+|-}HH:ss.
USE_TYPE_DEFAULT = { TRUE | FALSE }
Specifies how to handle missing values in delimited text files when PolyBase retrieves data from the text
file.
TRUE
When retrieving data from the text file, store each missing value by using the default value for the data
type of the corresponding column in the external table definition. For example, replace a missing value
with:
0 if the column is defined as a numeric column.
Empty string "" if the column is a string column.
1900-01-01 if the column is a date column.
FALSE
Store all missing values as NULL. Any NULL values that are stored by using the word NULL in the
delimited text file are imported as the string 'NULL'.
Encoding = {'UTF8' | 'UTF16'}
In Azure SQL Data Warehouse, PolyBase can read UTF8 and UTF16-LE encoded delimited text files. In
SQL Server and PDW, PolyBase doesn't support reading UTF16 encoded files.
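
A minimal sketch of a file format that reads UTF16 encoded files in Azure SQL Data Warehouse (the format name is illustrative):

CREATE EXTERNAL FILE FORMAT utf16_textformat
WITH (
FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS (FIELD_TERMINATOR = ',', ENCODING = 'UTF16')
);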
DATA_COMPRESSION = data_compression_method
Specifies the data compression method for the external data. When DATA_COMPRESSION isn't specified,
the default is uncompressed data. To work properly, Gzip compressed files must have the ".gz" file
extension.
The DELIMITEDTEXT format type supports these compression methods:
DATA COMPRESSION = 'org.apache.hadoop.io.compress.DefaultCodec'
DATA COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
The RCFILE format type supports this compression method:
DATA COMPRESSION = 'org.apache.hadoop.io.compress.DefaultCodec'
The ORC file format type supports these compression methods:
DATA COMPRESSION = 'org.apache.hadoop.io.compress.DefaultCodec'
DATA COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
The PARQUET file format type supports the following compression methods:
DATA COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
DATA COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'

Permissions
Requires ALTER ANY EXTERNAL FILE FORMAT permission.

General Remarks
The external file format is database-scoped in SQL Server and SQL Data Warehouse. It is server-scoped in
Parallel Data Warehouse.
The format options are all optional and only apply to delimited text files.
When the data is stored in one of the compressed formats, PolyBase first decompresses the data before returning
the data records.

Limitations and Restrictions


The row delimiter in delimited-text files must be supported by Hadoop’s LineRecordReader. That is, it must be
either '\r', '\n', or '\r\n'. These delimiters are not user-configurable.
The combinations of supported SerDe methods with RCFiles, and the supported data compression methods are
listed previously in this article. Not all combinations are supported.
The maximum number of concurrent PolyBase queries is 32. When 32 concurrent queries are running, each
query can read a maximum of 33,000 files from the external file location. The root folder and each subfolder also
count as a file. If the degree of concurrency is less than 32, the external file location can contain more than 33,000
files.
Because of the limitation on number of files in the external table, we recommend storing less than 30,000 files in
the root and subfolders of the external file location. Also, we recommend keeping the number of subfolders under
the root directory to a small number. When too many files are referenced, a Java Virtual Machine out-of-memory
exception might occur.
When exporting data to Hadoop or Azure Blob Storage via PolyBase, only the data is exported, not the column
names (metadata) as defined in the CREATE EXTERNAL TABLE command.

Locking
Takes a shared lock on the EXTERNAL FILE FORMAT object.

Performance
Using compressed files always comes with the tradeoff between transferring less data between the external data
source and SQL Server while increasing the CPU usage to compress and decompress the data.
Gzip compressed text files are not splittable. To improve performance for Gzip compressed text files, we
recommend generating multiple files that are all stored in the same directory within the external data source. This
file structure allows PolyBase to read and decompress the data faster by using multiple reader and
decompression processes. The ideal number of compressed files is the maximum number of data reader
processes per compute node. In SQL Server and Parallel Data Warehouse, the maximum number of data reader
processes is 8 per node in the current release. In SQL Data Warehouse, the maximum number of data reader
processes per node varies by SLO. See Azure SQL Data Warehouse loading patterns and strategies for details.

Examples
A. Create a DELIMITEDTEXT external file format
This example creates an external file format named textdelimited1 for a text-delimited file. The options listed for
FORMAT_OPTIONS specify that the fields in the file should be separated using a pipe character '|'. The text file is
also compressed with the Gzip codec. If DATA_COMPRESSION isn't specified, the text file is uncompressed.
For a delimited text file, the data compression method can either be the default Codec,
'org.apache.hadoop.io.compress.DefaultCodec', or the Gzip Codec, 'org.apache.hadoop.io.compress.GzipCodec'.

CREATE EXTERNAL FILE FORMAT textdelimited1
WITH (
FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS (
FIELD_TERMINATOR = '|',
DATE_FORMAT = 'MM/dd/yyyy' ),
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
);

B. Create an RCFile external file format


This example creates an external file format for an RCFile that uses the serialization/deserialization method
org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe. It also specifies to use the Default Codec for
the data compression method. If DATA_COMPRESSION isn't specified, the default is no compression.
CREATE EXTERNAL FILE FORMAT rcfile1
WITH (
FORMAT_TYPE = RCFILE,
SERDE_METHOD = 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe',
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.DefaultCodec'
);

C. Create an ORC external file format


This example creates an external file format for an ORC file that compresses the data with the
org.apache.hadoop.io.compress.SnappyCodec data compression method. If DATA_COMPRESSION isn't specified, the
default is no compression.

CREATE EXTERNAL FILE FORMAT orcfile1
WITH (
FORMAT_TYPE = ORC,
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
);

D. Create a PARQUET external file format


This example creates an external file format for a Parquet file that compresses the data with the
org.apache.hadoop.io.compress.SnappyCodec data compression method. If DATA_COMPRESSION isn't specified, the
default is no compression.

CREATE EXTERNAL FILE FORMAT parquetfile1
WITH (
FORMAT_TYPE = PARQUET,
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
);

E. Create a Delimited Text File Skipping Header Row (Azure SQL DW Only)
This example creates an external file format for a CSV file with a single header row.

CREATE EXTERNAL FILE FORMAT skipHeader_CSV
WITH (FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS(
FIELD_TERMINATOR = ',',
STRING_DELIMITER = '"',
FIRST_ROW = 2,
USE_TYPE_DEFAULT = True)
)

See Also
CREATE EXTERNAL DATA SOURCE (Transact-SQL)
CREATE EXTERNAL TABLE (Transact-SQL)
CREATE EXTERNAL TABLE AS SELECT (Transact-SQL)
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
sys.external_file_formats (Transact-SQL)
CREATE EXTERNAL RESOURCE POOL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Applies to: SQL Server 2016 (13.x) R Services (In-Database) and SQL Server 2017 (14.x) Machine Learning
Services (In-Database)
Creates an external pool used to define resources for external processes. A resource pool represents a subset of
the physical resources (memory and CPUs) of an instance of the Database Engine. Resource Governor enables a
database administrator to distribute server resources among resource pools, up to a maximum of 64 pools.
For R Services (In-Database) in SQL Server 2016 (13.x), the external pool governs rterm.exe ,
BxlServer.exe , and other processes spawned by them.

For Machine Learning Services (In-Database) in SQL Server 2017 (14.x), the external pool governs the R
processes listed for SQL Server 2016, as well as python.exe , BxlServer.exe , and other processes spawned
by them.
Transact-SQL Syntax Conventions.

Syntax
CREATE EXTERNAL RESOURCE POOL pool_name
[ WITH (
[ MAX_CPU_PERCENT = value ]
[ [ , ] AFFINITY CPU =
{
AUTO
| ( <cpu_range_spec> )
| NUMANODE = ( <NUMA_node_id> )
} ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MAX_PROCESSES = value ]
)
]
[ ; ]

<CPU_range_spec> ::=
{ CPU_ID | CPU_ID TO CPU_ID } [ ,...n ]

Arguments
pool_name
Is the user-defined name for the external resource pool. pool_name is alphanumeric, can be up to 128 characters,
must be unique within an instance of SQL Server, and must comply with the rules for identifiers.
MAX_CPU_PERCENT =value
Specifies the maximum average CPU bandwidth that all requests in the external resource pool can receive when
there is CPU contention. value is an integer with a default setting of 100. The allowed range for value is from 1
through 100.
AFFINITY {CPU = AUTO | ( <CPU_range_spec> ) | NUMANODE = (<NUMA_node_range_spec>)}
Attaches the external resource pool to specific CPUs. The default value is AUTO.
AFFINITY CPU = ( <CPU_range_spec> ) maps the external resource pool to the SQL Server CPUs identified by
the given CPU_IDs.
When you use AFFINITY NUMANODE = ( <NUMA_node_range_spec> ), the external resource pool is affinitized
to the SQL Server physical CPUs that correspond to the given NUMA node or range of nodes.
MAX_MEMORY_PERCENT =value
Specifies the total server memory that can be used by requests in this external resource pool. value is an integer
with a default setting of 100. The allowed range for value is from 1 through 100.
MAX_PROCESSES =value
Specifies the maximum number of processes allowed for the external resource pool. Specify 0 to set an unlimited
threshold for the pool, which is thereafter bound only by computer resources. The default is 0.
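
As a sketch of the affinity and process-cap options together (the pool name and values are placeholders; adjust to your hardware):

CREATE EXTERNAL RESOURCE POOL ep_numa0
WITH (
AFFINITY NUMANODE = (0),
MAX_PROCESSES = 16
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO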

Remarks
The Database Engine implements the resource pool when you execute the ALTER RESOURCE GOVERNOR
RECONFIGURE statement.
For general information about resource pools, see Resource Governor Resource Pool,
sys.resource_governor_external_resource_pools (Transact-SQL ), and
sys.dm_resource_governor_external_resource_pool_affinity (Transact-SQL ).
For information specific to managing external resource pools used for machine learning, see Resource governance
for machine learning in SQL Server.

Permissions
Requires CONTROL SERVER permission.

Examples
The following statement defines an external pool that restricts CPU usage to 75 percent and the maximum
memory to 30 percent of the available memory on the computer.

CREATE EXTERNAL RESOURCE POOL ep_1
WITH (
MAX_CPU_PERCENT = 75
, AFFINITY CPU = AUTO
, MAX_MEMORY_PERCENT = 30
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

See also
external scripts enabled Server Configuration Option
sp_execute_external_script (Transact-SQL)
ALTER EXTERNAL RESOURCE POOL (Transact-SQL)
DROP EXTERNAL RESOURCE POOL (Transact-SQL)
CREATE RESOURCE POOL (Transact-SQL)
CREATE WORKLOAD GROUP (Transact-SQL)
Resource Governor Resource Pool
sys.resource_governor_external_resource_pools (Transact-SQL)
sys.dm_resource_governor_external_resource_pool_affinity (Transact-SQL)
ALTER RESOURCE GOVERNOR (Transact-SQL)
CREATE EXTERNAL TABLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an external table for PolyBase or Elastic Database queries. Depending on the scenario, the syntax differs
significantly. An external table created for PolyBase cannot be used for Elastic Database queries, and an
external table created for Elastic Database queries cannot be used for PolyBase.

NOTE
PolyBase is supported only on SQL Server 2016 (or higher), Azure SQL Data Warehouse, and Parallel Data Warehouse.
Elastic Database queries are supported only on Azure SQL Database v12 or later.

In SQL Server, CREATE EXTERNAL TABLE creates a PolyBase external table that references data stored in a
Hadoop cluster or Azure blob storage. It can also be used to create an external table for an Elastic Database
query.
Use an external table to:
Query Hadoop or Azure blob storage data with Transact-SQL statements.
Import and store data from Hadoop or Azure blob storage into your SQL Server database.
Create an external table for use with an Elastic Database
query.
Import and store data from Azure Data Lake Store into Azure SQL Data Warehouse
See also CREATE EXTERNAL DATA SOURCE (Transact-SQL) and DROP EXTERNAL TABLE (Transact-SQL).
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

-- Create a new external table


CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
( <column_definition> [ ,...n ] )
WITH (
LOCATION = 'folder_or_filepath',
DATA_SOURCE = external_data_source_name,
FILE_FORMAT = external_file_format_name
[ , <reject_options> [ ,...n ] ]
)
[;]

<reject_options> ::=
{
| REJECT_TYPE = value | percentage
| REJECT_VALUE = reject_value
| REJECT_SAMPLE_VALUE = reject_sample_value
}

-- Create a table for use with Elastic Database query


CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
( <column_definition> [ ,...n ] )
WITH ( <sharded_external_table_options> )
[;]

<sharded_external_table_options> ::=
DATA_SOURCE = external_data_source_name,
SCHEMA_NAME = N'nonescaped_schema_name',
OBJECT_NAME = N'nonescaped_object_name',
[DISTRIBUTION = SHARDED(sharding_column_name) | REPLICATED | ROUND_ROBIN]
)
[;]

-- Syntax for Azure SQL Database

-- Create a table for use with Elastic Database query


CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
( <column_definition> [ ,...n ] )
WITH ( <sharded_external_table_options> )
[;]

<sharded_external_table_options> ::=
DATA_SOURCE = external_data_source_name,
SCHEMA_NAME = N'nonescaped_schema_name',
OBJECT_NAME = N'nonescaped_object_name',
[DISTRIBUTION = SHARDED(sharding_column_name) | REPLICATED | ROUND_ROBIN]
)
[;]
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

-- Create a new external table in SQL Server PDW


CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
( <column_definition> [ ,...n ] )
WITH (
LOCATION = 'hdfs_folder_or_filepath',
DATA_SOURCE = external_data_source_name,
FILE_FORMAT = external_file_format_name
[ , <reject_options> [ ,...n ] ]
)
[;]

<reject_options> ::=
{
| REJECT_TYPE = value | percentage,
| REJECT_VALUE = reject_value,
| REJECT_SAMPLE_VALUE = reject_sample_value,
| REJECTED_ROW_LOCATION = '\REJECT_Directory'
}

Arguments
[ database_name . [ schema_name ] . | schema_name. ] table_name
The one- to three-part name of the table to create. For an external table, only the table metadata is stored in SQL Server,
along with basic statistics about the file or folder referenced in Hadoop or Azure blob storage. No actual data
is moved or stored in SQL Server.
<column_definition> [ ,...n ]
CREATE EXTERNAL TABLE allows one or more column definitions. Both CREATE EXTERNAL TABLE and
CREATE TABLE use the same syntax for defining a column. One exception: you cannot use the DEFAULT
constraint on external tables. For the full details about column definitions and their data types, see CREATE
TABLE (Transact-SQL) and CREATE TABLE on Azure SQL Database.
The column definitions, including the data types and number of columns must match the data in the external files.
If there is a mismatch, the file rows will be rejected when querying the actual data.
For external tables that reference files in external data sources, the column and type definitions must map to the
exact schema of the external file. When defining data types that reference data stored in Hadoop/Hive, use the
following mappings between SQL and Hive data types and cast the type into a SQL data type when selecting
from it. The types include all versions of Hive unless stated otherwise.

NOTE
SQL Server does not support the Hive infinity data value in any conversion. PolyBase will fail with a data type conversion
error.

SQL DATA TYPE | .NET DATA TYPE | HIVE DATA TYPE | HADOOP/JAVA DATA TYPE | COMMENTS
tinyint | Byte | tinyint | ByteWritable | For unsigned numbers only.
smallint | Int16 | smallint | ShortWritable |
int | Int32 | int | IntWritable |
bigint | Int64 | bigint | LongWritable |
bit | Boolean | boolean | BooleanWritable |
float | Double | double | DoubleWritable |
real | Single | float | FloatWritable |
money | Decimal | double | DoubleWritable |
smallmoney | Decimal | double | DoubleWritable |
nchar | String, Char[] | string | Text |
nvarchar | String, Char[] | string | Text |
char | String, Char[] | string | Text |
varchar | String, Char[] | string | Text |
binary | Byte[] | binary | BytesWritable | Applies to Hive 0.8 and later.
varbinary | Byte[] | binary | BytesWritable | Applies to Hive 0.8 and later.
date | DateTime | timestamp | TimestampWritable |
smalldatetime | DateTime | timestamp | TimestampWritable |
datetime2 | DateTime | timestamp | TimestampWritable |
datetime | DateTime | timestamp | TimestampWritable |
time | TimeSpan | timestamp | TimestampWritable |
decimal | Decimal | decimal | BigDecimalWritable | Applies to Hive 0.11 and later.

LOCATION = 'folder_or_filepath'
Specifies the folder or the file path and file name for the actual data in Hadoop or Azure blob storage. The location
starts from the root folder; the root folder is the data location specified in the external data source.
In SQL Server, the CREATE EXTERNAL TABLE statement creates the path and folder if it does not already exist.
You can then use INSERT INTO to export data from a local SQL Server table to the external data source. For
more information, see Polybase Queries.
In SQL Data Warehouse and Analytics Platform System, the CREATE EXTERNAL TABLE AS SELECT statement
creates the path and folder if it does not exist. In these two products, CREATE EXTERNAL TABLE does not create
the path and folder.
If you specify LOCATION to be a folder, a PolyBase query that selects from the external table will retrieve files
from the folder and all of its subfolders. Just like Hadoop, PolyBase does not return hidden folders. It also does
not return files for which the file name begins with an underline (_) or a period (.).
In this example, if LOCATION='/webdata/', a PolyBase query will return rows from mydata.txt and mydata2.txt. It
will not return mydata3.txt because it is in a subfolder of a hidden folder. It will not return _hidden.txt because it is a
hidden file.
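
A folder layout consistent with this description (illustrative only; the hidden folder name is a placeholder):

/webdata/mydata.txt -- returned
/webdata/mydata2.txt -- returned
/webdata/_hidden.txt -- not returned: hidden file
/webdata/_temp/subfolder/mydata3.txt -- not returned: under a hidden folder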

To change the default and only read from the root folder, set the attribute <polybase.recursive.traversal> to 'false'
in the core-site.xml configuration file. This file is located under
<SqlBinRoot>\Polybase\Hadoop\Conf, where SqlBinRoot is the bin root of SQL Server. For example,
C:\Program Files\Microsoft SQL Server\MSSQL13.XD14\MSSQL\Binn.

DATA_SOURCE = external_data_source_name
Specifies the name of the external data source that contains the location of the external data. This location is either
a Hadoop or Azure blob storage. To create an external data source, use CREATE EXTERNAL DATA SOURCE
(Transact-SQL).
FILE_FORMAT = external_file_format_name
Specifies the name of the external file format object that stores the file type and compression method for the
external data. To create an external file format, use CREATE EXTERNAL FILE FORMAT (Transact-SQL).
Reject Options
You can specify reject parameters that determine how PolyBase will handle dirty records it retrieves from the
external data source. A data record is considered 'dirty' if its actual data types or number of columns do not
match the column definitions of the external table.
When you do not specify or change reject values, PolyBase uses default values. This information about the reject
parameters is stored as additional metadata when you create an external table with CREATE EXTERNAL TABLE
statement. When a future SELECT statement or SELECT INTO SELECT statement selects data from the external
table, PolyBase uses the reject options to determine the number or percentage of rows that can be rejected
before the actual query fails. The query will return (partial) results until the reject threshold is exceeded; it then
fails with the appropriate error message.
REJECT_TYPE = value | percentage
Clarifies whether the REJECT_VALUE option is specified as a literal value or a percentage.
value
REJECT_VALUE is a literal value, not a percentage. The PolyBase query will fail when the number of rejected rows
exceeds reject_value.
For example, if REJECT_VALUE = 5 and REJECT_TYPE = value, the PolyBase SELECT query will fail after 5 rows
have been rejected.
percentage
REJECT_VALUE is a percentage, not a literal value. A PolyBase query will fail when the percentage of failed rows
exceeds reject_value. The percentage of failed rows is calculated at intervals.
REJECT_VALUE = reject_value
Specifies the value or the percentage of rows that can be rejected before the query fails.
For REJECT_TYPE = value, reject_value must be an integer between 0 and 2,147,483,647.
For REJECT_TYPE = percentage, reject_value must be a float between 0 and 100.
REJECT_SAMPLE_VALUE = reject_sample_value
This attribute is required when you specify REJECT_TYPE = percentage. It determines the number of rows to
attempt to retrieve before the PolyBase recalculates the percentage of rejected rows.
The reject_sample_value parameter must be an integer between 0 and 2,147,483,647.
For example, if REJECT_SAMPLE_VALUE = 1000, PolyBase will calculate the percentage of failed rows after it
has attempted to import 1000 rows from the external data file. If the percentage of failed rows is less than
reject_value, PolyBase will attempt to retrieve another 1000 rows. It continues to recalculate the percentage of
failed rows after it attempts to import each additional 1000 rows.

NOTE
Since PolyBase computes the percentage of failed rows at intervals, the actual percentage of failed rows can exceed
reject_value.

Example:
This example shows how the three REJECT options interact with each other. For example, if REJECT_TYPE =
percentage, REJECT_VALUE = 30, and REJECT_SAMPLE_VALUE = 100, the following scenario could occur:
PolyBase attempts to retrieve the first 100 rows; 25 fail and 75 succeed.
Percent of failed rows is calculated as 25%, which is less than the reject value of 30%. Hence, PolyBase will
continue retrieving data from the external data source.
PolyBase attempts to load the next 100 rows; this time 25 succeed and 75 fail.
Percent of failed rows is recalculated as 50%. The percentage of failed rows has exceeded the 30% reject
value.
The PolyBase query fails with 50% rejected rows after attempting to return the first 200 rows. Note that
matching rows have been returned before the PolyBase query detects the reject threshold has been
exceeded.
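
A sketch of an external table definition using these settings (the object names are placeholders; the data source and file format must already exist):

CREATE EXTERNAL TABLE SampleExternalTable (
url varchar(50),
event_date date,
user_ip varchar(50)
)
WITH (
LOCATION = '/webdata/',
DATA_SOURCE = mydatasource,
FILE_FORMAT = myfileformat,
REJECT_TYPE = percentage,
REJECT_VALUE = 30,
REJECT_SAMPLE_VALUE = 100
);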
REJECTED_ROW_LOCATION = Directory Location
Specifies the directory within the external data source to which the rejected rows and the corresponding error file
should be written. If the specified path does not exist, PolyBase creates one on your behalf. A child directory is
created with the name "_rejectedrows". The "_" character ensures that the directory is escaped for other data
processing unless explicitly named in the location parameter. Within this directory, a folder is created based
on the time of load submission in the format YearMonthDay-HourMinuteSecond (for example, 20180330-173205). In this
folder, two types of files are written: the _reason file and the data file.
The reason files and the data files both have the queryID associated with the CTAS statement. Because the data
and the reason are in separate files, corresponding files have a matching suffix.
Sharded external table options
Specifies the external data source (a non-SQL Server data source) and a distribution method for the Elastic
Database query.
DATA_SOURCE
An external data source such as data stored in a Hadoop File System, Azure blob storage, or a shard map
manager.
SCHEMA_NAME
The SCHEMA_NAME clause provides the ability to map the external table definition to a table in a different
schema on the remote database. Use this to disambiguate between schemas that exist on both the local and
remote databases.
OBJECT_NAME
The OBJECT_NAME clause provides the ability to map the external table definition to a table with a different
name on the remote database. Use this to disambiguate between object names that exist on both the local and
remote databases.
DISTRIBUTION
Optional. This is required only for databases of type SHARD_MAP_MANAGER. It controls whether a
table is treated as a sharded table or a replicated table. With SHARDED (column name) tables, the data from
different tables do not overlap. REPLICATED specifies that tables have the same data on every shard.
ROUND_ROBIN indicates that an application-specific method is used to distribute the data.

Permissions
Requires these user permissions:
CREATE TABLE
ALTER ANY SCHEMA
ALTER ANY EXTERNAL DATA SOURCE
ALTER ANY EXTERNAL FILE FORMAT
CONTROL DATABASE
Note, the login that creates the external data source must have permission to read and write to the external
data source, located in Hadoop or Azure blob storage.

IMPORTANT
The ALTER ANY EXTERNAL DATA SOURCE permission grants any principal the ability to create and modify any external data
source object, and therefore, it also grants the ability to access all database scoped credentials on the database. This
permission must be considered as highly privileged, and therefore must be granted only to trusted principals in the system.
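
For example, granting two of these permissions to a hypothetical user Mary looks like this:

GRANT ALTER ANY EXTERNAL DATA SOURCE TO [Mary];
GRANT ALTER ANY EXTERNAL FILE FORMAT TO [Mary];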

Error Handling
While executing the CREATE EXTERNAL TABLE statement, PolyBase attempts to connect to the external data
source. If the attempt to connect fails, the statement will fail and the external table will not be created. It can take a
minute or more for the command to fail since PolyBase retries the connection before eventually failing the query.

General Remarks
In ad hoc query scenarios, that is, SELECT FROM EXTERNAL TABLE, PolyBase stores the rows retrieved from the
external data source in a temporary table. After the query completes, PolyBase removes the
temporary table. No permanent data is stored in SQL tables.
In contrast, in the import scenario, i.e. SELECT INTO FROM EXTERNAL TABLE, PolyBase stores the rows
retrieved from the external data source as permanent data in the SQL table. The new table is created during query
execution when Polybase retrieves the external data.
PolyBase can push some of the query computation to Hadoop to improve query performance. This is called
predicate pushdown. To enable this, specify the Hadoop resource manager location option in CREATE EXTERNAL
DATA SOURCE (Transact-SQL).
You can create numerous external tables that reference the same or different external data sources.

Limitations and Restrictions


In CTP2, the export functionality is not supported, i.e. permanently storing SQL data into the external data source.
This functionality will be available in CTP3.
Since the data for an external table resides off the appliance, it is not under the control of PolyBase, and can be
changed or removed at any time by an external process. Because of this, query results against an external table are
not guaranteed to be deterministic. The same query can return different results each time it runs against an
external table. Similarly, a query can fail if the external data is removed or relocated.
You can create multiple external tables that each reference different external data sources. However, if you
simultaneously run queries against different Hadoop data sources, then each Hadoop source must use the same
'hadoop connectivity' server configuration setting. For example, you can’t simultaneously run a query against a
Cloudera Hadoop cluster and a Hortonworks Hadoop cluster since these use different configuration settings. For
the configuration settings and supported combinations, see PolyBase Connectivity Configuration (Transact-SQL ).
Only these Data Definition Language (DDL ) statements are allowed on external tables:
CREATE TABLE and DROP TABLE
CREATE STATISTICS and DROP STATISTICS
Note: CREATE and DROP STATISTICS on external tables are not supported in Azure SQL Database.
CREATE VIEW and DROP VIEW
Constructs and operations not supported:
The DEFAULT constraint on external table columns
Data Manipulation Language (DML ) operations of delete, insert, and update
Query limitations:
PolyBase can consume a maximum of 33k files per folder when running 32 concurrent PolyBase queries.
This maximum number includes both files and subfolders in each HDFS folder. If the degree of
concurrency is less than 32, a user can run PolyBase queries against folders in HDFS which contain more
than 33k files. We recommend that you keep external file paths short and use no more than 30k files per
HDFS folder. When too many files are referenced, a Java Virtual Machine (JVM ) out-of-memory exception
might occur.
Table width limitations: PolyBase in SQL Server 2016 has a row width limit of 32KB based on the maximum size
of a single valid row by table definition. If the sum of the column schema is greater than 32KB, PolyBase will not
be able to query the data.
In SQL Data Warehouse, this limitation has been raised to 1MB.

Locking
Shared lock on the SCHEMARESOLUTION object.

Security
The data files for an external table are stored in Hadoop or Azure blob storage. These data files are created and
managed by your own processes. It is your responsibility to manage the security of the external data.

Examples
A. Create an external table with data in text-delimited format.
This example shows all the steps required to create an external table that has data formatted in text-delimited files.
It defines an external data source mydatasource and an external file format myfileformat. These database-level
objects are then referenced in the CREATE EXTERNAL TABLE statement. For more information, see CREATE
EXTERNAL DATA SOURCE (Transact-SQL) and CREATE EXTERNAL FILE FORMAT (Transact-SQL).

CREATE EXTERNAL DATA SOURCE mydatasource
WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://xxx.xxx.xxx.xxx:8020'
)

CREATE EXTERNAL FILE FORMAT myfileformat
WITH (
FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS (FIELD_TERMINATOR ='|')
);

CREATE EXTERNAL TABLE ClickStream (
url varchar(50),
event_date date,
user_IP varchar(50)
)
WITH (
LOCATION='/webdata/employee.tbl',
DATA_SOURCE = mydatasource,
FILE_FORMAT = myfileformat
)
;

B. Create an external table with data in RCFile format.


This example shows all the steps required to create an external table that has data formatted as RCFiles. It defines
an external data source mydatasource_rc and an external file format myfileformat_rc. These database-level objects
are then referenced in the CREATE EXTERNAL TABLE statement. For more information, see CREATE EXTERNAL
DATA SOURCE (Transact-SQL) and CREATE EXTERNAL FILE FORMAT (Transact-SQL).
CREATE EXTERNAL DATA SOURCE mydatasource_rc
WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://xxx.xxx.xxx.xxx:8020'
)

CREATE EXTERNAL FILE FORMAT myfileformat_rc
WITH (
FORMAT_TYPE = RCFILE,
SERDE_METHOD = 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
)
;

CREATE EXTERNAL TABLE ClickStream_rc (
url varchar(50),
event_date date,
user_ip varchar(50)
)
WITH (
LOCATION='/webdata/employee_rc.tbl',
DATA_SOURCE = mydatasource_rc,
FILE_FORMAT = myfileformat_rc
)
;

C. Create an external table with data in ORC format.


This example shows all the steps required to create an external table that has data formatted as ORC files. It
defines an external data source mydatasource_orc and an external file format myfileformat_orc. These database-
level objects are then referenced in the CREATE EXTERNAL TABLE statement. For more information, see CREATE
EXTERNAL DATA SOURCE (Transact-SQL) and CREATE EXTERNAL FILE FORMAT (Transact-SQL).

CREATE EXTERNAL DATA SOURCE mydatasource_orc
WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://xxx.xxx.xxx.xxx:8020'
)

CREATE EXTERNAL FILE FORMAT myfileformat_orc
WITH (
FORMAT_TYPE = ORC,
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
)
;

CREATE EXTERNAL TABLE ClickStream_orc (
url varchar(50),
event_date date,
user_ip varchar(50)
)
WITH (
LOCATION='/webdata/',
DATA_SOURCE = mydatasource_orc,
FILE_FORMAT = myfileformat_orc
)
;

D. Querying Hadoop data


Clickstream is an external table that connects to the employee.tbl delimited text file on a Hadoop cluster. The
following query looks just like a query against a standard table. However, this query retrieves data from Hadoop
and then computes the results.
SELECT TOP 10 (url) FROM ClickStream WHERE user_ip = 'xxx.xxx.xxx.xxx'
;

E. Join Hadoop data with SQL data


This query looks just like a standard JOIN on two SQL tables. The difference is that PolyBase retrieves the
Clickstream data from Hadoop and then joins it to the UrlDescription table. One table is an external table and the
other is a standard SQL table.

SELECT url.description
FROM ClickStream cs
JOIN UrlDescription url ON cs.url = url.name
WHERE cs.url = 'msdn.microsoft.com'
;

F. Import data from Hadoop into a SQL table


This example creates a new SQL table ms_user that permanently stores the result of a join between the standard
SQL table user and the external table ClickStream.

SELECT DISTINCT user.FirstName, user.LastName
INTO ms_user
FROM user INNER JOIN (
SELECT * FROM ClickStream WHERE url = 'www.microsoft.com'
) AS ms
ON user.user_ip = ms.user_ip
;

G. Create an external table for a sharded data source


This example remaps a remote DMV to an external table using the SCHEMA_NAME and OBJECT_NAME
clauses.

CREATE EXTERNAL TABLE [dbo].[all_dm_exec_requests]([session_id] smallint NOT NULL,
[request_id] int NOT NULL,
[start_time] datetime NOT NULL,
[status] nvarchar(30) NOT NULL,
[command] nvarchar(32) NOT NULL,
[sql_handle] varbinary(64),
[statement_start_offset] int,
[statement_end_offset] int,
[cpu_time] int NOT NULL)
WITH
(
DATA_SOURCE = MyExtSrc,
SCHEMA_NAME = 'sys',
OBJECT_NAME = 'dm_exec_requests',
DISTRIBUTION=ROUND_ROBIN
);

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


H. Importing Data from ADLS into Azure SQL Data Warehouse
-- These values come from your Azure Active Directory Application used to authenticate to ADLS
CREATE DATABASE SCOPED CREDENTIAL ADLUser
WITH IDENTITY = '<clientID>@<OAuth2.0TokenEndPoint>',
SECRET = '<KEY>' ;

CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
WITH (TYPE = HADOOP,
LOCATION = 'adl://pbasetr.azuredatalakestore.net'
)

CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH ( FORMAT_TYPE = DELIMITEDTEXT
, FORMAT_OPTIONS ( FIELD_TERMINATOR = '|'
, STRING_DELIMITER = ''
, DATE_FORMAT = 'yyyy-MM-dd HH:mm:ss.fff'
, USE_TYPE_DEFAULT = FALSE
)
)

CREATE EXTERNAL TABLE [dbo].[DimProduct_external]
( [ProductKey] [int] NOT NULL,
[ProductLabel] nvarchar NULL,
[ProductName] nvarchar NULL )
WITH ( LOCATION='/DimProduct/' ,
DATA_SOURCE = AzureDataLakeStore ,
FILE_FORMAT = TextFileFormat ,
REJECT_TYPE = VALUE ,
REJECT_VALUE = 0 ) ;

CREATE TABLE [dbo].[DimProduct]
WITH ( DISTRIBUTION = HASH([ProductKey]) )
AS SELECT * FROM
[dbo].[DimProduct_external] ;

I. Join external tables

SELECT url.description
FROM ClickStream cs
JOIN UrlDescription url ON cs.url = url.name
WHERE cs.url = 'msdn.microsoft.com'
;

J. Join HDFS data with PDW data

SELECT cs.user_ip FROM ClickStream cs
JOIN User u ON cs.user_ip = u.user_ip
WHERE cs.url = 'www.microsoft.com'
;

K. Import row data from HDFS into a distributed PDW Table

CREATE TABLE ClickStream_PDW
WITH ( DISTRIBUTION = HASH (url) )
AS SELECT url, event_date, user_ip FROM ClickStream
;

L. Import row data from HDFS into a replicated PDW Table

CREATE TABLE ClickStream_PDW
WITH ( DISTRIBUTION = REPLICATE )
AS SELECT url, event_date, user_ip
FROM ClickStream
;

See Also
Common Metadata Query Examples (SQL Server PDW )
CREATE EXTERNAL DATA SOURCE (Transact-SQL )
CREATE EXTERNAL FILE FORMAT (Transact-SQL )
CREATE EXTERNAL TABLE AS SELECT (Transact-SQL )
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
CREATE EXTERNAL TABLE AS SELECT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates an external table and then exports, in parallel, the results of a Transact-SQL SELECT statement to Hadoop
or Azure Storage Blob.
Transact-SQL Syntax Conventions (Transact-SQL )

Syntax
CREATE EXTERNAL TABLE [ [database_name . [ schema_name ] . ] | schema_name . ] table_name
WITH (
LOCATION = 'hdfs_folder',
DATA_SOURCE = external_data_source_name,
FILE_FORMAT = external_file_format_name
[ , <reject_options> [ ,...n ] ]
)
AS <select_statement>
[;]

<reject_options> ::=
{
| REJECT_TYPE = value | percentage
| REJECT_VALUE = reject_value
| REJECT_SAMPLE_VALUE = reject_sample_value
}

<select_statement> ::=
[ WITH <common_table_expression> [ ,...n ] ]
SELECT <select_criteria>

Arguments
[ [ database_name . [ schema_name ] . ] | schema_name . ] table_name
The one- to three-part name of the table to create in the database. For an external table, only the table metadata is
stored in the relational database.
LOCATION = 'hdfs_folder'
Specifies where to write the results of the SELECT statement on the external data source. The location is a folder
name and can optionally include a path that is relative to the root folder of the Hadoop Cluster or Azure Storage
Blob. PolyBase will create the path and folder if it does not already exist.
The external files are written to hdfs_folder and named QueryID_date_time_ID.format, where ID is an incremental
identifier and format is the exported data format. For example, QID776_20160130_182739_0.orc.
DATA_SOURCE = external_data_source_name
Specifies the name of the external data source object that contains the location where the external data is stored
or will be stored. The location is either a Hadoop Cluster or an Azure Storage Blob. To create an external data
source, use CREATE EXTERNAL DATA SOURCE (Transact-SQL ).
FILE_FORMAT = external_file_format_name
Specifies the name of the external file format object that contains the format for the external data file. To create an
external file format, use CREATE EXTERNAL FILE FORMAT (Transact-SQL ).
Reject Options
The reject options do not apply at the time this CREATE EXTERNAL TABLE AS SELECT statement is run. Instead,
they are specified here so that the database can use them at a later time when it imports data from the external
table. Later, when the CREATE TABLE AS SELECT statement selects data from the external table, the database will
use the reject options to determine the number or percentage of rows that can fail to import before it stops the
import.
REJECT_VALUE = reject_value
Specifies the value or the percentage of rows that can fail to import before the database halts the import.
REJECT_TYPE = value | percentage
Clarifies whether the REJECT_VALUE option is specified as a literal value or a percentage.
value
REJECT_VALUE is a literal value, not a percentage. The database will stop importing rows from the external data
file when the number of failed rows exceeds reject_value.
For example, if REJECT_VALUE = 5 and REJECT_TYPE = value, the database will stop importing rows after 5
rows have failed to import.
percentage
REJECT_VALUE is a percentage, not a literal value. The database will stop importing rows from the external data
file when the percentage of failed rows exceeds reject_value. The percentage of failed rows is calculated at
intervals.
REJECT_SAMPLE_VALUE = reject_sample_value
Required when REJECT_TYPE = percentage, this specifies the number of rows to attempt to import before the
database recalculates the percentage of failed rows.
For example, if REJECT_SAMPLE_VALUE = 1000, the database will calculate the percentage of failed rows after it
has attempted to import 1000 rows from the external data file. If the percentage of failed rows is less than
reject_value, the database will attempt to load another 1000 rows. The database continues to recalculate the
percentage of failed rows after it attempts to import each additional 1000 rows.

NOTE
Since the database computes the percentage of failed rows at intervals, the actual percentage of failed rows can exceed
reject_value.

Example:
This example shows how the three REJECT options interact with each other. For example, if REJECT_TYPE =
percentage, REJECT_VALUE = 30, and REJECT_SAMPLE_VALUE = 100, the following scenario could occur:
The database attempts to load the first 100 rows; 25 fail and 75 succeed.
Percent of failed rows is calculated as 25%, which is less than the reject value of 30%. So, no need to halt
the load.
The database attempts to load the next 100 rows; this time 25 succeed and 75 fail.
Percent of failed rows is recalculated as 50%. The percentage of failed rows has exceeded the 30% reject
value.
The load fails with 50% failed rows after attempting to load 200 rows, which is larger than the specified
30% limit.
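To make this interaction concrete, here is a minimal sketch of a CETAS statement that stores these reject
settings; the table, data source, file format, and source table names are all hypothetical.

CREATE EXTERNAL TABLE dbo.SalesExport
WITH (
    LOCATION = '/sales/export/',
    DATA_SOURCE = my_hadoop_ds,   -- hypothetical external data source
    FILE_FORMAT = my_text_ff,     -- hypothetical external file format
    REJECT_TYPE = percentage,
    REJECT_VALUE = 30,
    REJECT_SAMPLE_VALUE = 100
)
AS SELECT SalesOrderID, OrderDate FROM dbo.FactSales;   -- hypothetical source table
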
WITH common_table_expression
Specifies a temporary named result set, known as a common table expression (CTE ). For more information,
see WITH common_table_expression (Transact-SQL ).
SELECT <select_criteria> Populates the new table with the results from a SELECT statement. select_criteria
is the body of the SELECT statement that determines which data to copy to the new table. For information
about SELECT statements, see SELECT (Transact-SQL ).
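
As a hedged illustration of both clauses, the following sketch, which reuses the hypothetical objects from the
examples above, exports only the rows selected through a CTE.

CREATE EXTERNAL TABLE dbo.RecentClicksExport
WITH (
    LOCATION = '/webdata/recent/',
    DATA_SOURCE = mydatasource,
    FILE_FORMAT = myfileformat
)
AS
WITH RecentClicks AS (
    SELECT url, event_date, user_ip
    FROM ClickStream
    WHERE event_date >= '2016-01-01'   -- illustrative filter
)
SELECT url, event_date, user_ip FROM RecentClicks;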

Permissions
To run this command the database user needs all of these permissions or memberships:
ALTER SCHEMA permission on the local schema that will contain the new table or membership in the
db_ddladmin fixed database role.
CREATE TABLE permission or membership in the db_ddladmin fixed database role.
SELECT permission on any objects referenced in the select_criteria.
The login needs all of these permissions:
ADMINISTER BULK OPERATIONS permission
ALTER ANY EXTERNAL DATA SOURCE permission
ALTER ANY EXTERNAL FILE FORMAT permission
The login must have permission to read from and write to the external folder on the Hadoop Cluster or
Azure Storage Blob.
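
As a rough sketch of these grants, with hypothetical user and login names (ADMINISTER BULK OPERATIONS is a
server-level permission granted to the login):

-- Database-scoped permissions for the database user
GRANT ALTER ON SCHEMA::dbo TO LoaderUser;
GRANT CREATE TABLE TO LoaderUser;
GRANT ALTER ANY EXTERNAL DATA SOURCE TO LoaderUser;
GRANT ALTER ANY EXTERNAL FILE FORMAT TO LoaderUser;

-- Server-scoped permission for the corresponding login (run in master)
GRANT ADMINISTER BULK OPERATIONS TO LoaderLogin;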

IMPORTANT
The ALTER ANY EXTERNAL DATA SOURCE permission grants any principal the ability to create and modify any
external data source object, and therefore, it also grants the ability to access all database scoped credentials on the
database. This permission must be considered as highly privileged, and therefore must be granted only to trusted
principals in the system.

Error Handling
When CREATE EXTERNAL TABLE AS SELECT exports data to a text-delimited file, there is no rejection file for
rows that fail to export.
When creating the external table, the database attempts to connect to the external Hadoop cluster or Azure
Storage Blob. If the connection fails, the command will fail and the external table will not be created. It can take a
minute or more for the command to fail since the database retries the connection at least 3 times.
If CREATE EXTERNAL TABLE AS SELECT is cancelled or fails, the database will make a one-time attempt to
remove any new files and folders already created on the external data source.
The database will report any Java errors that occur on the external data source during the data export.

General Remarks
After the CETAS statement finishes, you can run Transact-SQL queries on the external table. These operations will
import data into the database for the duration of the query unless you import by using the CREATE TABLE AS
SELECT statement.
The external table name and definition are stored in the database metadata. The data is stored in the external data
source.
The external files are named QueryID_date_time_ID.format, where ID is an incremental identifier and format is
the exported data format. For example, QID776_20160130_182739_0.orc.
The CETAS statement always creates a non-partitioned table, even if the source table is partitioned.
For query plans created with EXPLAIN, the database uses these query plan operations for external tables:
External shuffle move
External broadcast move
External partition move
APPLIES TO: Parallel Data Warehouse
As a prerequisite for creating an external table, the appliance administrator needs to configure Hadoop
connectivity. For more information, see Configure Connectivity to External Data (Analytics Platform System) in
the APS documentation.

Limitations and Restrictions


Since external table data resides outside of the database, backup and restore operations will only operate on data
stored in the database. This means only the metadata will be backed up and restored.
The database does not verify the connection to the external data source when restoring a database backup that
contains an external table. If the original source is not accessible, the metadata restore of the external table will still
succeed, but SELECT operations on the external table will fail.
The database does not guarantee data consistency between the database and the external data. You, the customer,
are solely responsible for maintaining consistency between the external data and the database.
Data manipulation language (DML ) operations are not supported on external tables. For example, you cannot use
the Transact-SQL UPDATE, INSERT, or DELETE statements to modify the external data.
CREATE TABLE, DROP TABLE, CREATE STATISTICS, DROP STATISTICS, CREATE VIEW, and DROP VIEW are
the only data definition language (DDL ) operations allowed on external tables.
PolyBase can consume a maximum of 33k files per folder when running 32 concurrent PolyBase queries. This
maximum number includes both files and subfolders in each HDFS folder. If the degree of concurrency is less
than 32, a user can run PolyBase queries against folders in HDFS which contain more than 33k files. Microsoft
recommends users of Hadoop and PolyBase keep file paths short and use no more than 30k files per HDFS
folder. When too many files are referenced, a JVM out-of-memory exception might occur.
SET ROWCOUNT (Transact-SQL ) has no effect on this CREATE EXTERNAL TABLE AS SELECT. To achieve a
similar behavior, use TOP (Transact-SQL ).
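
For example, a TOP clause in the CETAS SELECT limits the number of exported rows; this sketch reuses the
hypothetical objects from the earlier examples.

CREATE EXTERNAL TABLE dbo.ClickStreamSample
WITH (
    LOCATION = '/webdata/sample/',
    DATA_SOURCE = mydatasource,
    FILE_FORMAT = myfileformat
)
AS SELECT TOP 1000 url, event_date, user_ip FROM ClickStream;
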
When CREATE EXTERNAL TABLE AS SELECT selects from an RCFile, the column values in the RCFile must not
contain the pipe '|' character.

Locking
Takes a shared lock on the SCHEMARESOLUTION object.

Examples
A. Create a Hadoop table using CREATE EXTERNAL TABLE AS SELECT (CETAS )
The following example creates a new external table named hdfsCustomer , using the column definitions and data
from the source table dimCustomer .
The table definition is stored in the database, and the results of the SELECT statement are exported to the
'/pdwdata/customer.tbl' file on the Hadoop external data source customer_ds. The file is formatted according to
the external file format customer_ff.
The file name is generated by the database, and contains the query ID for ease of aligning the file with the query
that generated it.
The path hdfs://xxx.xxx.xxx.xxx:5000/files/ preceding the Customer directory must already exist. However, if
the Customer directory does not exist, the database will create the directory.

NOTE
This example specifies port 5000. If the port is not specified, the database uses 8020 as the default port.

The resulting Hadoop location and file name will be
hdfs://xxx.xxx.xxx.xxx:5000/files/Customer/QueryID_YearMonthDay_HourMinutesSeconds_FileIndex.txt.

-- Example is based on AdventureWorks
CREATE EXTERNAL TABLE hdfsCustomer
WITH (
LOCATION='/pdwdata/customer.tbl',
DATA_SOURCE = customer_ds,
FILE_FORMAT = customer_ff
) AS SELECT * FROM dimCustomer;

B. Use a Query Hint with CREATE EXTERNAL TABLE AS SELECT (CETAS )


This query shows the basic syntax for using a query join hint with the CETAS statement. After the query is
submitted the database uses the hash join strategy to generate the query plan. For more information on join hints
and how to use the OPTION clause, see OPTION Clause (Transact-SQL ).

NOTE
This example specifies port 5000. If the port is not specified, the database uses 8020 as the default port.

-- Example is based on AdventureWorks
CREATE EXTERNAL TABLE dbo.FactInternetSalesNew
WITH
(
LOCATION = '/files/Customer',
DATA_SOURCE = customer_ds,
FILE_FORMAT = customer_ff
)
AS SELECT T1.* FROM dbo.FactInternetSales T1 JOIN dbo.DimCustomer T2
ON ( T1.CustomerKey = T2.CustomerKey )
OPTION ( HASH JOIN );

See Also
CREATE EXTERNAL DATA SOURCE (Transact-SQL )
CREATE EXTERNAL FILE FORMAT (Transact-SQL )
CREATE EXTERNAL TABLE (Transact-SQL )
CREATE TABLE (Azure SQL Data Warehouse, Parallel Data Warehouse)
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
DROP TABLE (Transact-SQL )
ALTER TABLE (Transact-SQL )
CREATE FULLTEXT CATALOG (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a full-text catalog for a database. One full-text catalog can have several full-text indexes, but a full-text
index can only be part of one full-text catalog. Each database can contain zero or more full-text catalogs.
You cannot create full-text catalogs in the master, model, or tempdb databases.

IMPORTANT
Beginning with SQL Server 2008, a full-text catalog is a virtual object and does not belong to any filegroup. A full-text
catalog is a logical concept that refers to a group of full-text indexes.

Transact-SQL Syntax Conventions

Syntax
CREATE FULLTEXT CATALOG catalog_name
[ON FILEGROUP filegroup ]
[IN PATH 'rootpath']
[WITH <catalog_option>]
[AS DEFAULT]
[AUTHORIZATION owner_name ]

<catalog_option>::=
ACCENT_SENSITIVITY = {ON|OFF}

Arguments
catalog_name
Is the name of the new catalog. The catalog name must be unique among all catalog names in the current
database. Also, the name of the file that corresponds to the full-text catalog (see ON FILEGROUP ) must be unique
among all files in the database. If the name of the catalog is already used for another catalog in the database, SQL
Server returns an error.
The length of the catalog name cannot exceed 120 characters.
ON FILEGROUP filegroup
Beginning with SQL Server 2008, this clause has no effect.
IN PATH 'rootpath'

NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature.

Beginning with SQL Server 2008, this clause has no effect.


ACCENT_SENSITIVITY = {ON|OFF }
Specifies that the catalog is accent sensitive or accent insensitive for full-text indexing. When this property is
changed, the index must be rebuilt. The default is to use the accent-sensitivity specified in the database collation.
To display the database collation, use the sys.databases catalog view.
To determine the current accent-sensitivity property setting of a full-text catalog, use the
FULLTEXTCATALOGPROPERTY function with the accentsensitivity property value against catalog_name. If the
value returned is '1', the full-text catalog is accent sensitive; if the value is '0', the catalog is not accent-sensitive.
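For example, assuming a catalog named ftCatalog, the current setting can be checked like this:

SELECT FULLTEXTCATALOGPROPERTY('ftCatalog', 'accentsensitivity') AS IsAccentSensitive;
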
AS DEFAULT
Specifies that the catalog is the default catalog. When full-text indexes are created without a full-text catalog
explicitly specified, the default catalog is used. If an existing full-text catalog is already marked AS DEFAULT,
setting this new catalog AS DEFAULT will make this catalog the default full-text catalog.
AUTHORIZATION owner_name
Sets the owner of the full-text catalog to the name of a database user or role. If owner_name is a role, the role
must be the name of a role that the current user is a member of, or the user running the statement must be the
database owner or system administrator.
If owner_name is a user name, the user name must be one of the following:
The name of the user running the statement.
The name of a user that the user executing the command has impersonate permissions for.
Or, the user executing the command must be the database owner or system administrator.
owner_name must also be granted TAKE OWNERSHIP permission on the specified full-text catalog.
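
A minimal sketch combining these clauses follows; the catalog name is hypothetical, and dbo is used as the owner.

CREATE FULLTEXT CATALOG ftCatalogHR
WITH ACCENT_SENSITIVITY = OFF
AS DEFAULT
AUTHORIZATION dbo;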

Remarks
Full-text catalog IDs start at 00005 and are incremented by one for each new catalog created.

Permissions
User must have CREATE FULLTEXT CATALOG permission on the database, or be a member of the db_owner or
db_ddladmin fixed database role.

Examples
The following example creates a full-text catalog and also a full-text index.

USE AdventureWorks2012;
GO
CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT;
GO
CREATE FULLTEXT INDEX ON HumanResources.JobCandidate(Resume) KEY INDEX PK_JobCandidate_JobCandidateID;
GO

See Also
sys.fulltext_catalogs (Transact-SQL )
ALTER FULLTEXT CATALOG (Transact-SQL )
DROP FULLTEXT CATALOG (Transact-SQL )
Full-Text Search
CREATE FULLTEXT INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a full-text index on a table or indexed view in a database in SQL Server. Only one full-text index is allowed
per table or indexed view, and each full-text index applies to a single table or indexed view. A full-text index can
contain up to 1024 columns.
Transact-SQL Syntax Conventions

Syntax
CREATE FULLTEXT INDEX ON table_name
[ ( { column_name
[ TYPE COLUMN type_column_name ]
[ LANGUAGE language_term ]
[ STATISTICAL_SEMANTICS ]
} [ ,...n]
) ]
KEY INDEX index_name
[ ON <catalog_filegroup_option> ]
[ WITH [ ( ] <with_option> [ ,...n] [ ) ] ]
[;]

<catalog_filegroup_option>::=
{
fulltext_catalog_name
| ( fulltext_catalog_name, FILEGROUP filegroup_name )
| ( FILEGROUP filegroup_name, fulltext_catalog_name )
| ( FILEGROUP filegroup_name )
}

<with_option>::=
{
CHANGE_TRACKING [ = ] { MANUAL | AUTO | OFF [, NO POPULATION ] }
| STOPLIST [ = ] { OFF | SYSTEM | stoplist_name }
| SEARCH PROPERTY LIST [ = ] property_list_name
}

Arguments
table_name
Is the name of the table or indexed view that contains the column or columns included in the full-text index.
column_name
Is the name of the column included in the full-text index. Only columns of type char, varchar, nchar, nvarchar,
text, ntext, image, xml, and varbinary(max) can be indexed for full-text search. To specify multiple columns,
repeat the column_name clause as follows:
CREATE FULLTEXT INDEX ON table_name (column_name1 […], column_name2 […]) …
TYPE COLUMN type_column_name
Specifies the name of a table column, type_column_name, that is used to hold the document type for a
varbinary(max) or image document. This column, known as the type column, contains a user-supplied file
extension (.doc, .pdf, .xls, and so forth). The type column must be of type char, nchar, varchar, or nvarchar.
Specify TYPE COLUMN type_column_name only if column_name specifies a varbinary(max) or image column,
in which data is stored as binary data; otherwise, SQL Server returns an error.

NOTE
At indexing time, the Full-Text Engine uses the abbreviation in the type column of each table row to identify which full-text
search filter to use for the document in column_name. The filter loads the document as a binary stream, removes the
formatting information, and sends the text from the document to the word-breaker component. For more information, see
Configure and Manage Filters for Search.

LANGUAGE language_term
Is the language of the data stored in column_name.
language_term is optional and can be specified as a string, integer, or hexadecimal value corresponding to the
locale identifier (LCID ) of a language. If no value is specified, the default language of the SQL Server instance is
used.
If language_term is specified, the language it represents will be used to index data stored in char, nchar, varchar,
nvarchar, text, and ntext columns. This language is the default language used at query time if language_term is
not specified as part of a full-text predicate against the column.
When specified as a string, language_term corresponds to the alias column value in the syslanguages system
table. The string must be enclosed in single quotation marks, as in 'language_term'. When specified as an integer,
language_term is the actual LCID that identifies the language. When specified as a hexadecimal value,
language_term is 0x followed by the hex value of the LCID. The hex value must not exceed eight digits, including
leading zeros.
If the value is in double-byte character set (DBCS ) format, SQL Server will convert it to Unicode.
Resources, such as word breakers and stemmers, must be enabled for the language specified as language_term. If
such resources do not support the specified language, SQL Server returns an error.
Use the sp_configure stored procedure to access information about the default full-text language of the Microsoft
SQL Server instance. For more information, see sp_configure (Transact-SQL ).
For non-BLOB and non-XML columns containing text data in multiple languages, or for cases when the language
of the text stored in the column is unknown, it might be appropriate for you to use the neutral (0x0) language
resource. However, first you should understand the possible consequences of using the neutral (0x0) language
resource. For information about the possible solutions and consequences of using the neutral (0x0) language
resource, see Choose a Language When Creating a Full-Text Index.
For documents stored in XML- or BLOB-type columns, the language encoding within the document will be used at
indexing time. For example, in XML columns, the xml:lang attribute in XML documents will identify the language.
At query time, the value previously specified in language_term becomes the default language used for full-text
queries unless language_term is specified as part of a full-text query.
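For example, valid alias and LCID values for language_term can be listed from the syslanguages compatibility
view:

SELECT alias, name, lcid FROM sys.syslanguages ORDER BY alias;
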
STATISTICAL_SEMANTICS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Creates the additional key phrase and document similarity indexes that are part of statistical semantic indexing.
For more information, see Semantic Search (SQL Server).
KEY INDEX index_name
Is the name of the unique key index on table_name. The key index must be a unique, non-nullable, single-column
index. Select the smallest unique key index for the full-text unique key. For the best performance, we recommend
an integer data type for the full-text key.
fulltext_catalog_name
Is the full-text catalog used for the full-text index. The catalog must already exist in the database. This clause is
optional. If it is not specified, a default catalog is used. If no default catalog exists, SQL Server returns an error.
FILEGROUP filegroup_name
Creates the specified full-text index on the specified filegroup. The filegroup must already exist. If the FILEGROUP
clause is not specified, the full-text index is placed in the same filegroup as base table or view for a nonpartitioned
table or in the primary filegroup for a partitioned table.
CHANGE_TRACKING [ = ] { MANUAL | AUTO | OFF [ , NO POPULATION ] }
Specifies whether changes (updates, deletes or inserts) made to table columns that are covered by the full-text
index will be propagated by SQL Server to the full-text index. Data changes through WRITETEXT and
UPDATETEXT are not reflected in the full-text index, and are not picked up with change tracking.
MANUAL
Specifies that the tracked changes must be propagated manually by calling the ALTER FULLTEXT INDEX … START
UPDATE POPULATION Transact-SQL statement (manual population). You can use SQL Server Agent to call this
Transact-SQL statement periodically.
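For example, assuming the full-text index on HumanResources.JobCandidate created in Example A below, a
scheduled job step could run:

ALTER FULLTEXT INDEX ON HumanResources.JobCandidate START UPDATE POPULATION;
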
AUTO
Specifies that the tracked changes will be propagated automatically as data is modified in the base table
(automatic population). Although changes are propagated automatically, these changes might not be reflected
immediately in the full-text index. AUTO is the default.
OFF [ , NO POPULATION ]
Specifies that SQL Server does not keep a list of changes to the indexed data. When NO POPULATION is not
specified, SQL Server populates the index fully after it is created.
The NO POPULATION option can be used only when CHANGE_TRACKING is OFF. When NO POPULATION is
specified, SQL Server does not populate an index after it is created. The index is only populated after the user
executes the ALTER FULLTEXT INDEX command with the START FULL POPULATION or START INCREMENTAL
POPULATION clause.
STOPLIST [ = ] { OFF | SYSTEM | stoplist_name }
Associates a full-text stoplist with the index. The index is not populated with any tokens that are part of the
specified stoplist. If STOPLIST is not specified, SQL Server associates the system full-text stoplist with the index.
OFF
Specifies that no stoplist be associated with the full-text index.
SYSTEM
Specifies that the default full-text system STOPLIST should be used for this full-text index.
stoplist_name
Specifies the name of the stoplist to be associated with the full-text index.
SEARCH PROPERTY LIST [ = ] property_list_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Associates a search property list with the index.
OFF
Specifies that no property list be associated with the full-text index.
property_list_name
Specifies the name of the search property list to associate with the full-text index.
Remarks
For more information about full-text indexes, see Create and Manage Full-Text Indexes.
On xml columns, you can create a full-text index that indexes the content of the XML elements, but ignores the
XML markup. Attribute values are full-text indexed unless they are numeric values. Element tags are used as token
boundaries. Well-formed XML or HTML documents and fragments containing multiple languages are supported.
For more information, see Use Full-Text Search with XML Columns.
We recommend that the index key column be of an integer data type; this provides optimizations at query
execution time.

Interactions of Change Tracking and NO POPULATION Parameter


Whether the full-text index is populated depends on whether change-tracking is enabled and whether WITH NO
POPUL ATION is specified in the ALTER FULLTEXT INDEX statement. The following table summarizes the result
of their interaction.

CHANGE TRACKING    WITH NO POPULATION    RESULT
Not Enabled        Not specified         A full population is performed on the index.
Not Enabled        Specified             No population of the index occurs until an
                                         ALTER FULLTEXT INDEX ... START POPULATION statement is issued.
Enabled            Specified             An error is raised, and the index is not altered.
Enabled            Not specified         A full population is performed on the index.

For more information about populating full-text indexes, see Populate Full-Text Indexes.

Permissions
User must have REFERENCES permission on the full-text catalog and ALTER permission on the table or
indexed view, or be a member of the sysadmin fixed server role or of the db_owner or db_ddladmin fixed database roles.
If STOPLIST is specified, the user must have REFERENCES permission on the specified stoplist. The owner of
the stoplist can grant this permission.

NOTE
The public is granted REFERENCE permission to the default stoplist that is shipped with SQL Server.

Examples
A. Creating a unique index, a full-text catalog, and a full-text index
The following example creates a unique index on the JobCandidateID column of the HumanResources.JobCandidate
table of the AdventureWorks2012 sample database. The example then creates a default full-text catalog, ft .
Finally, the example creates a full-text index on the Resume column, using the ft catalog and the system stoplist.
CREATE UNIQUE INDEX ui_ukJobCand ON HumanResources.JobCandidate(JobCandidateID);
CREATE FULLTEXT CATALOG ft AS DEFAULT;
CREATE FULLTEXT INDEX ON HumanResources.JobCandidate(Resume)
KEY INDEX ui_ukJobCand
WITH STOPLIST = SYSTEM;
GO

B. Creating a full-text index on several table columns


The following example creates a full-text catalog, production_catalog , in the AdventureWorks sample database. The
example then creates a full-text index that uses this new catalog. The full-text index is on the ReviewerName,
EmailAddress, and Comments columns of the Production.ProductReview table. For each column, the example specifies
the LCID of English, 1033 , which is the language of the data in the columns. This full-text index uses an existing
unique key index, PK_ProductReview_ProductReviewID . As recommended, this index key is on an integer column,
ProductReviewID .

CREATE FULLTEXT CATALOG production_catalog;
GO
CREATE FULLTEXT INDEX ON Production.ProductReview
(
ReviewerName
LANGUAGE 1033,
EmailAddress
LANGUAGE 1033,
Comments
LANGUAGE 1033
)
KEY INDEX PK_ProductReview_ProductReviewID
ON production_catalog;
GO

C. Creating a full-text index with a search property list without populating it


The following example creates a full-text index on the Title , DocumentSummary , and Document columns of the
Production.Document table. The example specifies the LCID of English, 1033 , which is the language of the data in
the columns. This full-text index uses the default full-text catalog and an existing unique key index,
PK_Document_DocumentID . As recommended, this index key is on an integer column, DocumentID .

The example specifies the SYSTEM stoplist. It also specifies a search property list, DocumentPropertyList ; for an
example that creates this property list, see CREATE SEARCH PROPERTY LIST (Transact-SQL ).
The example specifies that change tracking is off with no population. Later, during off-peak hours, the example uses
an ALTER FULLTEXT INDEX statement to start a full population on the new index and enable automatic change
tracking.

CREATE FULLTEXT INDEX ON Production.Document
(
Title
LANGUAGE 1033,
DocumentSummary
LANGUAGE 1033,
Document
TYPE COLUMN FileExtension
LANGUAGE 1033
)
KEY INDEX PK_Document_DocumentID
WITH STOPLIST = SYSTEM, SEARCH PROPERTY LIST = DocumentPropertyList,
CHANGE_TRACKING OFF, NO POPULATION;
GO
Later, at an off-peak time, the index is populated:

ALTER FULLTEXT INDEX ON Production.Document SET CHANGE_TRACKING AUTO;
GO

See Also
Create and Manage Full-Text Indexes
ALTER FULLTEXT INDEX (Transact-SQL )
DROP FULLTEXT INDEX (Transact-SQL )
Full-Text Search
GRANT (Transact-SQL )
sys.fulltext_indexes (Transact-SQL )
Search Document Properties with Search Property Lists
CREATE FULLTEXT STOPLIST (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new full-text stoplist in the current database.
Stopwords are managed in databases by using objects called stoplists. A stoplist is a list of stopwords that, when
associated with a full-text index, is applied to full-text queries on that index. For more information, see Configure
and Manage Stopwords and Stoplists for Full-Text Search.

IMPORTANT
CREATE FULLTEXT STOPLIST, ALTER FULLTEXT STOPLIST, and DROP FULLTEXT STOPLIST are supported only under
compatibility level 100. Under compatibility levels 80 and 90, these statements are not supported. However, under all
compatibility levels the system stoplist is automatically associated with new full-text indexes.

Transact-SQL Syntax Conventions

Syntax
CREATE FULLTEXT STOPLIST stoplist_name
[ FROM { [ database_name.]source_stoplist_name } | SYSTEM STOPLIST ]
[ AUTHORIZATION owner_name ]
;

Arguments
stoplist_name
Is the name of the stoplist. stoplist_name can be a maximum of 128 characters. stoplist_name must be unique
among all stoplists in the current database, and conform to the rules for identifiers.
stoplist_name will be used when the full-text index is created.
database_name
Is the name of the database where the stoplist specified by source_stoplist_name is located. If not specified,
database_name defaults to the current database.
source_stoplist_name
Specifies that the new stoplist is created by copying an existing stoplist. If source_stoplist_name does not exist, or
the database user does not have correct permissions, CREATE FULLTEXT STOPLIST fails with an error. If any
languages specified in the stop words of the source stoplist are not registered in the current database, CREATE
FULLTEXT STOPLIST succeeds, but warning(s) are returned and the corresponding stop words are not added.
SYSTEM STOPLIST
Specifies that the new stoplist is created from the stoplist that exists by default in the Resource database.
AUTHORIZATION owner_name
Specifies the name of a database principal that will own the stoplist. owner_name must either be the name of a
principal of which the current user is a member, or the current user must have IMPERSONATE permission on
owner_name. If not specified, ownership is given to the current user.

Remarks
The creator of a stoplist is its owner.

Permissions
To create a STOPLIST requires CREATE FULLTEXT CATALOG permissions. The stoplist owner can grant
CONTROL permission explicitly on a stoplist to allow users to add and remove words and to drop the stoplist.
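
For example, a user who holds CONTROL on a stoplist can maintain its words with ALTER FULLTEXT STOPLIST; the
stoplist name and word below are illustrative.

ALTER FULLTEXT STOPLIST myStoplist ADD 'example' LANGUAGE 1033;
ALTER FULLTEXT STOPLIST myStoplist DROP 'example' LANGUAGE 1033;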

NOTE
Using a stoplist with a full-text index requires REFERENCE permission.

Examples
A. Creating a new full-text stoplist
The following example creates a new full-text stoplist named myStoplist .

CREATE FULLTEXT STOPLIST myStoplist;
GO

B. Copying a full-text stoplist from an existing full-text stoplist


The following example creates a new full-text stoplist named myStoplist2 by copying a stoplist named
otherStoplist from the AdventureWorks database.

CREATE FULLTEXT STOPLIST myStoplist2 FROM AdventureWorks.otherStoplist;
GO

C. Copying a full-text stoplist from the system full-text stoplist


The following example creates a new full-text stoplist named myStoplist3 by copying from the system stoplist.

CREATE FULLTEXT STOPLIST myStoplist3 FROM SYSTEM STOPLIST;
GO

See Also
ALTER FULLTEXT STOPLIST (Transact-SQL )
DROP FULLTEXT STOPLIST (Transact-SQL )
Configure and Manage Stopwords and Stoplists for Full-Text Search
sys.fulltext_stoplists (Transact-SQL )
sys.fulltext_stopwords (Transact-SQL )
CREATE FUNCTION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a user-defined function in SQL Server and Azure SQL Database. A user-defined function is a Transact-
SQL or common language runtime (CLR ) routine that accepts parameters, performs an action, such as a complex
calculation, and returns the result of that action as a value. The return value can either be a scalar (single) value or
a table. Use this statement to create a reusable routine that can be used in these ways:
In Transact-SQL statements such as SELECT
In applications calling the function
In the definition of another user-defined function
To parameterize a view or improve the functionality of an indexed view
To define a column in a table
To define a CHECK constraint on a column
To replace a stored procedure
Use an inline function as a filter predicate for a security policy

NOTE
The integration of .NET Framework CLR into SQL Server is discussed in this topic. CLR integration does not apply to Azure
SQL Database.

Transact-SQL Syntax Conventions

Syntax
-- Transact-SQL Scalar Function Syntax
CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
[ = default ] [ READONLY ] }
[ ,...n ]
]
)
RETURNS return_data_type
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN scalar_expression
END
[ ; ]
-- Transact-SQL Inline Table-Valued Function Syntax
CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
[ = default ] [ READONLY ] }
[ ,...n ]
]
)
RETURNS TABLE
[ WITH <function_option> [ ,...n ] ]
[ AS ]
RETURN [ ( ] select_stmt [ ) ]
[ ; ]

-- Transact-SQL Multi-Statement Table-Valued Function Syntax


CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
[ = default ] [READONLY] }
[ ,...n ]
]
)
RETURNS @return_variable TABLE <table_type_definition>
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN
END
[ ; ]
-- Transact-SQL Function Clauses
<function_option>::=
{
[ ENCRYPTION ]
| [ SCHEMABINDING ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
| [ EXECUTE_AS_Clause ]
}

<table_type_definition>:: =
( { <column_definition> <column_constraint>
| <computed_column_definition> }
[ <table_constraint> ] [ ,...n ]
)
<column_definition>::=
{
{ column_name data_type }
[ [ DEFAULT constant_expression ]
[ COLLATE collation_name ] | [ ROWGUIDCOL ]
]
| [ IDENTITY [ (seed , increment ) ] ]
[ <column_constraint> [ ...n ] ]
}

<column_constraint>::=
{
[ NULL | NOT NULL ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[ WITH FILLFACTOR = fillfactor
| WITH ( < index_option > [ , ...n ] )
[ ON { filegroup | "default" } ]
| [ CHECK ( logical_expression ) ] [ ,...n ]
}

<computed_column_definition>::=
column_name AS computed_column_expression

<table_constraint>::=
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
( column_name [ ASC | DESC ] [ ,...n ] )
[ WITH FILLFACTOR = fillfactor
| WITH ( <index_option> [ , ...n ] )
| [ CHECK ( logical_expression ) ] [ ,...n ]
}

<index_option>::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS ={ ON | OFF }
}
-- CLR Scalar Function Syntax
CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( { @parameter_name [AS] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
)
RETURNS { return_data_type }
[ WITH <clr_function_option> [ ,...n ] ]
[ AS ] EXTERNAL NAME <method_specifier>
[ ; ]

-- CLR Table-Valued Function Syntax


CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( { @parameter_name [AS] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
)
RETURNS TABLE <clr_table_type_definition>
[ WITH <clr_function_option> [ ,...n ] ]
[ ORDER ( <order_clause> ) ]
[ AS ] EXTERNAL NAME <method_specifier>
[ ; ]

-- CLR Function Clauses


<order_clause> ::=
{
<column_name_in_clr_table_type_definition>
[ ASC | DESC ]
} [ ,...n]

<method_specifier>::=
assembly_name.class_name.method_name

<clr_function_option>::=
{
[ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
| [ EXECUTE_AS_Clause ]
}

<clr_table_type_definition>::=
( { column_name data_type } [ ,...n ] )
-- In-Memory OLTP: Syntax for natively compiled, scalar user-defined function
CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
[ NULL | NOT NULL ] [ = default ] [ READONLY ] }
[ ,...n ]
]
)
RETURNS return_data_type
WITH <function_option> [ ,...n ]
[ AS ]
BEGIN ATOMIC WITH (set_option [ ,... n ])
function_body
RETURN scalar_expression
END

<function_option>::=
{
| NATIVE_COMPILATION
| SCHEMABINDING
| [ EXECUTE_AS_Clause ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
}

Arguments
OR ALTER
Applies to: Azure SQL Database, SQL Server (starting with SQL Server 2016 (13.x) SP1).
Conditionally alters the function only if it already exists.

NOTE
Optional [OR ALTER] syntax for CLR is available starting with SQL Server 2016 (13.x) SP1 CU1.

schema_name
Is the name of the schema to which the user-defined function belongs.
function_name
Is the name of the user-defined function. Function names must comply with the rules for identifiers and must be
unique within the database and to its schema.

NOTE
Parentheses are required after the function name even if a parameter is not specified.

@parameter_name
Is a parameter in the user-defined function. One or more parameters can be declared.
A function can have a maximum of 2,100 parameters. The value of each declared parameter must be supplied by
the user when the function is executed, unless a default for the parameter is defined.
Specify a parameter name by using an at sign (@) as the first character. The parameter name must comply with
the rules for identifiers. Parameters are local to the function; the same parameter names can be used in other
functions. Parameters can take the place only of constants; they cannot be used instead of table names, column
names, or the names of other database objects.
NOTE
ANSI_WARNINGS is not honored when you pass parameters in a stored procedure, user-defined function, or when you
declare and set variables in a batch statement. For example, if a variable is defined as char(3), and then set to a value larger
than three characters, the data is truncated to the defined size and the INSERT or UPDATE statement succeeds.

[ type_schema_name. ] parameter_data_type
Is the parameter data type, and optionally the schema to which it belongs. For Transact-SQL functions, all data
types, including CLR user-defined types and user-defined table types, are allowed except the timestamp data
type. For CLR functions, all data types, including CLR user-defined types, are allowed except text, ntext, image,
user-defined table types and timestamp data types. The nonscalar types, cursor and table, cannot be specified as
a parameter data type in either Transact-SQL or CLR functions.
If type_schema_name is not specified, the Database Engine looks for the scalar_parameter_data_type in the
following order:
The schema that contains the names of SQL Server system data types.
The default schema of the current user in the current database.
The dbo schema in the current database.
[ =default ]
Is a default value for the parameter. If a default value is defined, the function can be executed without
specifying a value for that parameter.

NOTE
Default parameter values can be specified for CLR functions except for the varchar(max) and varbinary(max) data types.

When a parameter of the function has a default value, the keyword DEFAULT must be specified when the function
is called to retrieve the default value. This behavior is different from using parameters with default values in
stored procedures in which omitting the parameter also implies the default value. However, the DEFAULT
keyword is not required when invoking a scalar function by using the EXECUTE statement.
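For example, given a hypothetical scalar function dbo.GetDiscount whose single parameter has a default, the two
invocation styles differ as follows:

DECLARE @discount money;

-- In a SELECT, the DEFAULT keyword must be written out to use the default value:
SELECT dbo.GetDiscount(DEFAULT) AS Discount;

-- With EXECUTE, omitting the argument implies the default:
EXECUTE @discount = dbo.GetDiscount;
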
READONLY
Indicates that the parameter cannot be updated or modified within the definition of the function. If the parameter
type is a user-defined table type, READONLY should be specified.
return_data_type
Is the return value of a scalar user-defined function. For Transact-SQL functions, all data types, including CLR
user-defined types, are allowed except the timestamp data type. For CLR functions, all data types, including CLR
user-defined types, are allowed except the text, ntext, image, and timestamp data types. The nonscalar types,
cursor and table, cannot be specified as a return data type in either Transact-SQL or CLR functions.
function_body
Specifies that a series of Transact-SQL statements, which together do not produce a side effect such as modifying
a table, define the value of the function. function_body is used only in scalar functions and multistatement table-
valued functions.
In scalar functions, function_body is a series of Transact-SQL statements that together evaluate to a scalar value.
In multistatement table-valued functions, function_body is a series of Transact-SQL statements that populate a
TABLE return variable.
scalar_expression
Specifies the scalar value that the scalar function returns.
TABLE
Specifies that the return value of the table-valued function is a table. Only constants and @local_variables can be
passed to table-valued functions.
In inline table-valued functions, the TABLE return value is defined through a single SELECT statement. Inline
functions do not have associated return variables.
In multistatement table-valued functions, @return_variable is a TABLE variable, used to store and accumulate the
rows that should be returned as the value of the function. @return_variable can be specified only for Transact-
SQL functions and not for CLR functions.

WARNING
Joining to a multistatement table valued function in a FROM clause is possible, but can give poor performance. SQL Server
is unable to use all the optimized techniques against some statements that can be included in a multistatement function,
resulting in a suboptimal query plan. To obtain the best possible performance, whenever possible use joins between base
tables instead of functions.

select_stmt
Is the single SELECT statement that defines the return value of an inline table-valued function.
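For instance, a minimal inline table-valued function wraps a single SELECT; this sketch assumes the
AdventureWorks HumanResources.JobCandidate table used elsewhere in this reference.

CREATE FUNCTION dbo.fn_CandidatesModifiedSince (@since datetime)
RETURNS TABLE
AS
RETURN (
    SELECT JobCandidateID, ModifiedDate
    FROM HumanResources.JobCandidate
    WHERE ModifiedDate >= @since
);
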
ORDER (<order_clause>) Specifies the order in which results are being returned from the table-valued function.
For more information, see the section, "Guidance on Using Sort Order," later in this topic.
EXTERNAL NAME <method_specifier> assembly_name.class_name.method_name Applies to: SQL Server
2008 through SQL Server 2017.
Specifies the assembly and method to which the created function name shall refer.
assembly_name - must match a value in the name column of
SELECT * FROM sys.assemblies; .
This is the name that was used on the CREATE ASSEMBLY statement.
class_name - must match a value in the assembly_name column of
SELECT * FROM sys.assembly_modules; .
Often the value contains an embedded period or dot. In such cases the Transact-SQL syntax requires that
the value be bounded with a pair of straight brackets [], or with a pair of double quotation marks "".
method_name - must match a value in the method_name column of
SELECT * FROM sys.assembly_modules; .
The method must be static.
In a typical example, for MyFood.DLL, in which all types are in the MyFood namespace, the EXTERNAL NAME
value could be:
MyFood.[MyFood.MyClass].MyStaticMethod

NOTE
By default, SQL Server cannot execute CLR code. You can create, modify, and drop database objects that reference common
language runtime modules; however, you cannot execute these references in SQL Server until you enable the clr enabled
option. To enable this option, use sp_configure.
NOTE
This option is not available in a contained database.
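
A typical way to enable the option at the server level (this requires ALTER SETTINGS permission):

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;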

<table_type_definition> ( { <column_definition> <column_constraint> | <computed_column_definition> } [
<table_constraint> ] [ ,...n ] )
Defines the table data type for a Transact-SQL function. The table declaration
includes column definitions and column or table constraints. The table is always put in the primary filegroup.
<clr_table_type_definition> ( { column_name data_type } [ ,...n ] )
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database (Preview in some regions).
Defines the table data types for a CLR function. The table declaration includes only column names and data types.
The table is always put in the primary filegroup.
NULL|NOT NULL
Supported only for natively compiled, scalar user-defined functions. For more information, see Scalar User-
Defined Functions for In-Memory OLTP.
NATIVE_COMPILATION
Indicates whether a user-defined function is natively compiled. This argument is required for natively compiled,
scalar user-defined functions.
BEGIN ATOMIC WITH
Supported only for natively compiled, scalar user-defined functions, and is required. For more information, see
Atomic Blocks.
SCHEMABINDING
The SCHEMABINDING argument is required for natively compiled, scalar user-defined functions.
EXECUTE AS
EXECUTE AS is required for natively compiled, scalar user-defined functions.
<function_option>::= and <clr_function_option>::=
Specifies that the function will have one or more of the following options.
ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates that the Database Engine will convert the original text of the CREATE FUNCTION statement to an
obfuscated format. The output of the obfuscation is not directly visible in any catalog views. Users that have no
access to system tables or database files cannot retrieve the obfuscated text. However, the text will be available to
privileged users that can either access system tables over the DAC port or directly access database files. Also,
users that can attach a debugger to the server process can retrieve the original procedure from memory at
runtime. For more information about accessing system metadata, see Metadata Visibility Configuration.
Using this option prevents the function from being published as part of SQL Server replication. This option
cannot be specified for CLR functions.
SCHEMABINDING
Specifies that the function is bound to the database objects that it references. When SCHEMABINDING is
specified, the base objects cannot be modified in a way that would affect the function definition. The function
definition itself must first be modified or dropped to remove dependencies on the object that is to be modified.
The binding of the function to the objects it references is removed only when one of the following actions occurs:
The function is dropped.
The function is modified by using the ALTER statement with the SCHEMABINDING option not specified.
A function can be schema bound only if the following conditions are true:
The function is a Transact-SQL function.
The user-defined functions and views referenced by the function are also schema-bound.
The objects referenced by the function are referenced using a two-part name.
The function and the objects it references belong to the same database.
The user who executed the CREATE FUNCTION statement has REFERENCES permission on the database
objects that the function references.
RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT
Specifies the OnNULLCall attribute of a scalar-valued function. If not specified, CALLED ON NULL
INPUT is implied by default. This means that the function body executes even if NULL is passed as an
argument.
If RETURNS NULL ON NULL INPUT is specified in a CLR function, it indicates that SQL Server can
return NULL when any of the arguments it receives is NULL, without actually invoking the body of the
function. If the method of a CLR function specified in <method_specifier> already has a custom attribute
that indicates RETURNS NULL ON NULL INPUT, but the CREATE FUNCTION statement indicates
CALLED ON NULL INPUT, the CREATE FUNCTION statement takes precedence. The OnNULLCall
attribute cannot be specified for CLR table-valued functions.
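For instance, a hypothetical Transact-SQL scalar function can declare the attribute so that its body is skipped
for NULL input:

CREATE FUNCTION dbo.fn_SquareLen (@s nvarchar(100))
RETURNS int
WITH RETURNS NULL ON NULL INPUT
AS
BEGIN
    RETURN LEN(@s) * LEN(@s);   -- not reached when @s IS NULL
END;
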
EXECUTE AS Clause
Specifies the security context under which the user-defined function is executed. Therefore, you can control
which user account SQL Server uses to validate permissions on any database objects that are referenced
by the function.

NOTE
EXECUTE AS cannot be specified for inline user-defined functions.

For more information, see EXECUTE AS Clause (Transact-SQL ).


<column_definition>::=
Defines the table data type. The table declaration includes column definitions and constraints. For CLR functions,
only column_name and data_type can be specified.
column_name
Is the name of a column in the table. Column names must comply with the rules for identifiers and must be
unique in the table. column_name can consist of 1 through 128 characters.
data_type
Specifies the column data type. For Transact-SQL functions, all data types, including CLR user-defined types, are
allowed except timestamp. For CLR functions, all data types, including CLR user-defined types, are allowed
except text, ntext, image, char, varchar, varchar(max), and timestamp. The nonscalar type cursor cannot be
specified as a column data type in either Transact-SQL or CLR functions.
DEFAULT constant_expression
Specifies the value provided for the column when a value is not explicitly supplied during an insert.
constant_expression is a constant, NULL, or a system function value. DEFAULT definitions can be applied to any
column except those that have the IDENTITY property. DEFAULT cannot be specified for CLR table-valued
functions.
COLLATE collation_name
Specifies the collation for the column. If not specified, the column is assigned the default collation of the database.
Collation name can be either a Windows collation name or a SQL collation name. For a list of and more
information about collations, see Windows Collation Name (Transact-SQL ) and SQL Server Collation Name
(Transact-SQL ).
The COLLATE clause can be used to change the collations only of columns of the char, varchar, nchar, and
nvarchar data types.
COLLATE cannot be specified for CLR table-valued functions.
ROWGUIDCOL
Indicates that the new column is a row globally unique identifier column. Only one uniqueidentifier column per
table can be designated as the ROWGUIDCOL column. The ROWGUIDCOL property can be assigned only to a
uniqueidentifier column.
The ROWGUIDCOL property does not enforce uniqueness of the values stored in the column. It also does not
automatically generate values for new rows inserted into the table. To generate unique values for each column,
use the NEWID function on INSERT statements. A default value can be specified; however, NEWID cannot be
specified as the default.
IDENTITY
Indicates that the new column is an identity column. When a new row is added to the table, SQL Server provides
a unique, incremental value for the column. Identity columns are typically used together with PRIMARY KEY
constraints to serve as the unique row identifier for the table. The IDENTITY property can be assigned to tinyint,
smallint, int, bigint, decimal(p,0), or numeric(p,0) columns. Only one identity column can be created per table.
Bound defaults and DEFAULT constraints cannot be used with an identity column. You must specify both the seed
and increment or neither. If neither is specified, the default is (1,1).
IDENTITY cannot be specified for CLR table-valued functions.
seed
Is the integer value to be assigned to the first row in the table.
increment
Is the integer value to add to the seed value for successive rows in the table.
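For instance, a hypothetical multistatement table-valued function can declare an IDENTITY column in its return
table to number the rows it produces:

CREATE FUNCTION dbo.fn_FirstNSquares (@n int)
RETURNS @t TABLE (rowid int IDENTITY(1,1) PRIMARY KEY, squared int)
AS
BEGIN
    DECLARE @i int = 1;
    WHILE @i <= @n
    BEGIN
        INSERT INTO @t (squared) VALUES (@i * @i);
        SET @i += 1;
    END;
    RETURN;
END;
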
<column_constraint>::= and <table_constraint>::=
Defines the constraint for a specified column or table. For CLR functions, the only constraint type allowed is
NULL. Named constraints are not allowed.
NULL | NOT NULL
Determines whether null values are allowed in the column. NULL is not strictly a constraint but can be specified
just like NOT NULL. NOT NULL cannot be specified for CLR table-valued functions.
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column through a unique index. In table-valued user-
defined functions, the PRIMARY KEY constraint can be created on only one column per table. PRIMARY KEY
cannot be specified for CLR table-valued functions.
UNIQUE
Is a constraint that provides entity integrity for a specified column or columns through a unique index. A table can
have multiple UNIQUE constraints. UNIQUE cannot be specified for CLR table-valued functions.
CLUSTERED | NONCLUSTERED
Indicate that a clustered or a nonclustered index is created for the PRIMARY KEY or UNIQUE constraint.
PRIMARY KEY constraints default to CLUSTERED, and UNIQUE constraints default to NONCLUSTERED.
CLUSTERED can be specified for only one constraint. If CLUSTERED is specified for a UNIQUE constraint and a
PRIMARY KEY constraint is also specified, the PRIMARY KEY uses NONCLUSTERED.
CLUSTERED and NONCLUSTERED cannot be specified for CLR table-valued functions.
CHECK
Is a constraint that enforces domain integrity by limiting the possible values that can be entered into a column or
columns. CHECK constraints cannot be specified for CLR table-valued functions.
logical_expression
Is a logical expression that returns TRUE or FALSE.
<computed_column_definition>::=
Specifies a computed column. For more information about computed columns, see CREATE TABLE (Transact-SQL).
column_name
Is the name of the computed column.
computed_column_expression
Is an expression that defines the value of a computed column.
<index_option>::=
Specifies the index options for the PRIMARY KEY or UNIQUE index. For more information about index options,
see CREATE INDEX (Transact-SQL).
PAD_INDEX = { ON | OFF }
Specifies index padding. The default is OFF.
FILLFACTOR = fillfactor
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or change. fillfactor must be an integer value from 1 to 100. The default is 0.
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique index.
The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The default is
OFF.
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether distribution statistics are recomputed. The default is OFF.
ALLOW_ROW_LOCKS = { ON | OFF }
Specifies whether row locks are allowed. The default is ON.
ALLOW_PAGE_LOCKS = { ON | OFF }
Specifies whether page locks are allowed. The default is ON.
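As an illustrative sketch (all names hypothetical), the following multistatement table-valued function pulls together several of the clauses described above: an unnamed PRIMARY KEY constraint, an unnamed CHECK constraint, and a computed column in the return table definition:

CREATE FUNCTION dbo.ufn_Squares (@limit int)
RETURNS @squares TABLE
(
    n int NOT NULL PRIMARY KEY,
    n_squared bigint NOT NULL CHECK (n_squared >= 0),
    n_doubled AS (n * 2)   -- computed column
)
AS
BEGIN
    DECLARE @i int = 1;
    WHILE @i <= @limit
    BEGIN
        INSERT INTO @squares (n, n_squared)
        VALUES (@i, CAST(@i AS bigint) * @i);
        SET @i += 1;
    END;
    RETURN;
END;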

Best Practices
If a user-defined function is not created with the SCHEMABINDING clause, changes that are made to underlying
objects can affect the definition of the function and produce unexpected results when it is invoked. We
recommend that you implement one of the following methods to ensure that the function does not become
outdated because of changes to its underlying objects:
Specify the WITH SCHEMABINDING clause when you are creating the function. This ensures that the
objects referenced in the function definition cannot be modified unless the function is also modified.
Execute the sp_refreshsqlmodule stored procedure after modifying any object that is specified in the
definition of the function.
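A sketch of both methods follows; the table dbo.Orders and both function names are hypothetical:

-- Method 1: SCHEMABINDING prevents dbo.Orders from being changed in a way
-- that would break the function.
CREATE FUNCTION dbo.ufn_OrderCount ()
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    RETURN (SELECT COUNT(*) FROM dbo.Orders);
END;
GO

-- Method 2: for a function created without SCHEMABINDING, refresh its
-- metadata after modifying an object referenced in its definition.
EXEC sys.sp_refreshsqlmodule @name = N'dbo.ufn_LegacyOrderCount';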

Data Types
If parameters are specified in a CLR function, they should be SQL Server types as defined previously for
scalar_parameter_data_type. For information about comparing SQL Server system data types to CLR integration
data types or .NET Framework common language runtime data types, see Mapping CLR Parameter Data.
For SQL Server to reference the correct method when it is overloaded in a class, the method indicated in
<method_specifier> must have the following characteristics:
Receive the same number of parameters as specified in [ ,...n ].
Receive all the parameters by value, not by reference.
Use parameter types that are compatible with those specified in the SQL Server function.
If the return data type of the CLR function specifies a table type (RETURNS TABLE), the return data type of
the method in <method_specifier> should be of type IEnumerator or IEnumerable, and it is assumed
that the interface is implemented by the creator of the function. Unlike Transact-SQL functions, CLR
functions cannot include PRIMARY KEY, UNIQUE, or CHECK constraints in <table_type_definition>. The
data types of columns specified in <table_type_definition> must match the types of the corresponding
columns of the result set returned by the method in <method_specifier> at execution time. This type-
checking is not performed at the time the function is created.
For more information about how to program CLR functions, see CLR User-Defined Functions.

General Remarks
Scalar-valued functions can be invoked where scalar expressions are used. This includes computed columns and
CHECK constraint definitions. Scalar-valued functions can also be executed by using the EXECUTE statement.
Scalar-valued functions must be invoked by using at least the two-part name of the function. For more
information about multipart names, see Transact-SQL Syntax Conventions (Transact-SQL). Table-valued functions
can be invoked where table expressions are allowed in the FROM clause of SELECT, INSERT, UPDATE, or
DELETE statements. For more information, see Execute User-defined Functions.
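For instance, using the functions created in examples A and B later in this topic, the invocation forms look like this:

-- Scalar function: at least a two-part name in a SELECT, or the EXECUTE
-- statement with the return value assigned to a variable.
-- (Example A notes that SET DATEFIRST 1 is required for correct results.)
SELECT dbo.ISOweek('2004-12-26') AS 'ISO Week';

DECLARE @week int;
EXECUTE @week = dbo.ISOweek @DATE = '2004-12-26';

-- Table-valued function: invoked where table expressions are allowed.
SELECT * FROM Sales.ufn_SalesByStore (602);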

Interoperability
The following statements are valid in a function:
Assignment statements.
Control-of-Flow statements except TRY...CATCH statements.
DECLARE statements defining local data variables and local cursors.
SELECT statements that contain select lists with expressions that assign values to local variables.
Cursor operations referencing local cursors that are declared, opened, closed, and deallocated in the
function. Only FETCH statements that assign values to local variables using the INTO clause are allowed;
FETCH statements that return data to the client are not allowed.
INSERT, UPDATE, and DELETE statements modifying local table variables.
EXECUTE statements calling extended stored procedures.
For more information, see Create User-defined Functions (Database Engine).
Computed Column Interoperability
Functions have the following properties. The values of these properties determine whether functions can be used
in computed columns that can be persisted or indexed.

IsDeterministic
Function is deterministic or nondeterministic. Local data access is allowed in deterministic functions. For
example, functions that always return the same result any time they are called by using a specific set of input
values and with the same state of the database would be labeled deterministic.

IsPrecise
Function is precise or imprecise. Imprecise functions contain operations such as floating point operations.

IsSystemVerified
The precision and determinism properties of the function can be verified by SQL Server.

SystemDataAccess
Function accesses system data (system catalogs or virtual system tables) in the local instance of SQL Server.

UserDataAccess
Function accesses user data in the local instance of SQL Server. Includes user-defined tables and temp tables,
but not table variables.
The precision and determinism properties of Transact-SQL functions are determined automatically by SQL
Server. The data access and determinism properties of CLR functions can be specified by the user. For more
information, see Overview of CLR Integration Custom Attributes.
To display the current values for these properties, use OBJECTPROPERTYEX.
Functions must be created with schema binding to be deterministic.
A computed column that invokes a user-defined function can be used in an index when the user-defined function
has the following property values:
IsDeterministic = true
IsSystemVerified = true (unless the computed column is persisted)
UserDataAccess = false
SystemDataAccess = false
For more information, see Indexes on Computed Columns.
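For example, the following query returns all five property values for the function created in example C later in this topic:

SELECT
    OBJECTPROPERTYEX(OBJECT_ID(N'dbo.ufn_FindReports'), 'IsDeterministic') AS IsDeterministic,
    OBJECTPROPERTYEX(OBJECT_ID(N'dbo.ufn_FindReports'), 'IsPrecise') AS IsPrecise,
    OBJECTPROPERTYEX(OBJECT_ID(N'dbo.ufn_FindReports'), 'IsSystemVerified') AS IsSystemVerified,
    OBJECTPROPERTYEX(OBJECT_ID(N'dbo.ufn_FindReports'), 'SystemDataAccess') AS SystemDataAccess,
    OBJECTPROPERTYEX(OBJECT_ID(N'dbo.ufn_FindReports'), 'UserDataAccess') AS UserDataAccess;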
Calling Extended Stored Procedures from Functions
The extended stored procedure, when it is called from inside a function, cannot return result sets to the client. Any
ODS APIs that return result sets to the client will return FAIL. The extended stored procedure could connect back
to an instance of SQL Server; however, it should not try to join the same transaction as the function that invoked
the extended stored procedure.
Similar to invocations from a batch or stored procedure, the extended stored procedure will be executed in the
context of the Windows security account under which SQL Server is running. The owner of the stored procedure
should consider this when giving EXECUTE permission on it to users.
Limitations and Restrictions
User-defined functions cannot be used to perform actions that modify the database state.
User-defined functions cannot contain an OUTPUT INTO clause that has a table as its target.
The following Service Broker statements cannot be included in the definition of a Transact-SQL user-defined
function:
BEGIN DIALOG CONVERSATION
END CONVERSATION
GET CONVERSATION GROUP
MOVE CONVERSATION
RECEIVE
SEND
User-defined functions can be nested; that is, one user-defined function can call another. The nesting level is
incremented when the called function starts execution, and decremented when the called function finishes
execution. User-defined functions can be nested up to 32 levels. Exceeding the maximum levels of nesting
causes the whole calling function chain to fail. Any reference to managed code from a Transact-SQL user-
defined function counts as one level against the 32-level nesting limit. Methods invoked from within
managed code do not count against this limit.
Using Sort Order in CLR Table-valued Functions
When using the ORDER clause in CLR table-valued functions, follow these guidelines:
You must ensure that results are always ordered in the specified order. If the results are not in the specified
order, SQL Server will generate an error message when the query is executed.
If an ORDER clause is specified, the output of the table-valued function must be sorted according to the
collation of the column (explicit or implicit). For example, if the column collation is Chinese (either specified
in the DDL for the table-valued function or obtained from the database collation), the returned results must
be sorted according to Chinese sorting rules.
The ORDER clause, if specified, is always verified by SQL Server while returning results, whether or not it
is used by the query processor to perform further optimizations. Only use the ORDER clause if you know
it is useful to the query processor.
The SQL Server query processor takes advantage of the ORDER clause automatically in following cases:
Insert queries where the ORDER clause is compatible with an index.
ORDER BY clauses that are compatible with the ORDER clause.
Aggregates, where GROUP BY is compatible with the ORDER clause.
DISTINCT aggregates where the distinct columns are compatible with the ORDER clause.
The ORDER clause does not guarantee ordered results when a SELECT query is executed, unless ORDER
BY is also specified in the query. See sys.function_order_columns (Transact-SQL) for information on how to
query for columns included in the sort-order for table-valued functions.

Metadata
The following table lists the system catalog views that you can use to return metadata about user-defined
functions.

SYSTEM VIEW                        DESCRIPTION
sys.sql_modules                    See example E in the Examples section below.
sys.assembly_modules               Displays information about CLR user-defined functions.
sys.parameters                     Displays information about the parameters defined in user-defined functions.
sys.sql_expression_dependencies    Displays the underlying objects referenced by a function.
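For instance, a sys.parameters query against the function created in example B looks like this:

SELECT p.name, TYPE_NAME(p.user_type_id) AS data_type, p.max_length
FROM sys.parameters AS p
WHERE p.object_id = OBJECT_ID(N'Sales.ufn_SalesByStore');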

Permissions
Requires CREATE FUNCTION permission in the database and ALTER permission on the schema in which the
function is being created. If the function specifies a user-defined type, requires EXECUTE permission on the type.

Examples
A. Using a scalar-valued user-defined function that calculates the ISO week
The following example creates the user-defined function ISOweek . This function takes a date argument and
calculates the ISO week number. For this function to calculate correctly, SET DATEFIRST 1 must be invoked before
the function is called.
The example also shows using the EXECUTE AS clause to specify the security context in which the function can
be executed. In the example, the option CALLER specifies that the function will be executed in the context of
the user that calls it. The other options that you can specify are SELF, OWNER, and user_name.
The function definition is followed by the function call. Notice that DATEFIRST is set to 1.

CREATE FUNCTION dbo.ISOweek (@DATE datetime)
RETURNS int
WITH EXECUTE AS CALLER
AS
BEGIN
    DECLARE @ISOweek int;
    SET @ISOweek = DATEPART(wk,@DATE) + 1
        - DATEPART(wk,CAST(DATEPART(yy,@DATE) AS CHAR(4)) + '0104');
    --Special cases: Jan 1-3 may belong to the previous year
    IF (@ISOweek = 0)
        SET @ISOweek = dbo.ISOweek(CAST(DATEPART(yy,@DATE) - 1
            AS CHAR(4)) + '12' + CAST(24 + DATEPART(DAY,@DATE) AS CHAR(2))) + 1;
    --Special case: Dec 29-31 may belong to the next year
    IF ((DATEPART(mm,@DATE) = 12) AND
        ((DATEPART(dd,@DATE) - DATEPART(dw,@DATE)) >= 28))
        SET @ISOweek = 1;
    RETURN(@ISOweek);
END;
GO
SET DATEFIRST 1;
SELECT dbo.ISOweek(CONVERT(DATETIME,'12/26/2004',101)) AS 'ISO Week';

Here is the result set.


ISO Week
----------------
52

B. Creating an inline table-valued function
The following example creates an inline table-valued function in the AdventureWorks2012 database. It returns
three columns, ProductID, Name, and Total (the aggregate of year-to-date sales by store), for each product
sold to the store.

CREATE FUNCTION Sales.ufn_SalesByStore (@storeid int)
RETURNS TABLE
AS
RETURN
(
    SELECT P.ProductID, P.Name, SUM(SD.LineTotal) AS 'Total'
    FROM Production.Product AS P
    JOIN Sales.SalesOrderDetail AS SD ON SD.ProductID = P.ProductID
    JOIN Sales.SalesOrderHeader AS SH ON SH.SalesOrderID = SD.SalesOrderID
    JOIN Sales.Customer AS C ON SH.CustomerID = C.CustomerID
    WHERE C.StoreID = @storeid
    GROUP BY P.ProductID, P.Name
);
GO

To invoke the function, run this query.

SELECT * FROM Sales.ufn_SalesByStore (602);

C. Creating a multi-statement table-valued function
The following example creates the table-valued function ufn_FindReports(@InEmpID) in the AdventureWorks2012
database. When supplied with a valid employee ID, the function returns a table that corresponds to all the
employees that report to the employee either directly or indirectly. The function uses a recursive common table
expression (CTE) to produce the hierarchical list of employees. For more information about recursive CTEs, see
WITH common_table_expression (Transact-SQL).
CREATE FUNCTION dbo.ufn_FindReports (@InEmpID INTEGER)
RETURNS @retFindReports TABLE
(
    EmployeeID int PRIMARY KEY NOT NULL,
    FirstName nvarchar(255) NOT NULL,
    LastName nvarchar(255) NOT NULL,
    JobTitle nvarchar(50) NOT NULL,
    RecursionLevel int NOT NULL
)
--Returns a result set that lists all the employees who report to the
--specified employee directly or indirectly.
AS
BEGIN
    WITH EMP_cte(EmployeeID, OrganizationNode, FirstName, LastName, JobTitle, RecursionLevel) -- CTE name and columns
    AS (
        -- Get the initial list of Employees for Manager n
        SELECT e.BusinessEntityID, e.OrganizationNode, p.FirstName, p.LastName, e.JobTitle, 0
        FROM HumanResources.Employee e
        INNER JOIN Person.Person p
            ON p.BusinessEntityID = e.BusinessEntityID
        WHERE e.BusinessEntityID = @InEmpID
        UNION ALL
        -- Join recursive member to anchor
        SELECT e.BusinessEntityID, e.OrganizationNode, p.FirstName, p.LastName, e.JobTitle, RecursionLevel + 1
        FROM HumanResources.Employee e
        INNER JOIN EMP_cte
            ON e.OrganizationNode.GetAncestor(1) = EMP_cte.OrganizationNode
        INNER JOIN Person.Person p
            ON p.BusinessEntityID = e.BusinessEntityID
    )
    -- Copy the required columns to the result of the function
    INSERT @retFindReports
    SELECT EmployeeID, FirstName, LastName, JobTitle, RecursionLevel
    FROM EMP_cte;
    RETURN;
END;
GO

-- Example invocation
SELECT EmployeeID, FirstName, LastName, JobTitle, RecursionLevel
FROM dbo.ufn_FindReports(1);
GO

D. Creating a CLR function
The example creates CLR function len_s. Before the function is created, the assembly
SurrogateStringFunction.dll is registered in the local database.

Applies to: SQL Server 2008 through SQL Server 2017.


DECLARE @SamplesPath nvarchar(1024);
-- You may have to modify the value of this variable if you have
-- installed the sample in a location other than the default location.
SELECT @SamplesPath = REPLACE(physical_name,
    'Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\master.mdf',
    'Microsoft SQL Server\130\Samples\Engine\Programmability\CLR\')
FROM master.sys.database_files
WHERE name = 'master';

CREATE ASSEMBLY [SurrogateStringFunction]
FROM @SamplesPath + 'StringManipulate\CS\StringManipulate\bin\debug\SurrogateStringFunction.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
GO

CREATE FUNCTION [dbo].[len_s] (@str nvarchar(4000))
RETURNS bigint
AS EXTERNAL NAME [SurrogateStringFunction].[Microsoft.Samples.SqlServer.SurrogateStringFunction].[LenS];
GO

For an example of how to create a CLR table-valued function, see CLR Table-Valued Functions.
E. Displaying the definition of Transact-SQL user-defined functions

SELECT definition, type
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON m.object_id = o.object_id
    AND type IN ('FN', 'IF', 'TF');
GO

The definition of functions created by using the ENCRYPTION option cannot be viewed by using
sys.sql_modules; however, other information about the encrypted functions is displayed.

See Also
ALTER FUNCTION (Transact-SQL)
DROP FUNCTION (Transact-SQL)
OBJECTPROPERTYEX (Transact-SQL)
sys.sql_modules (Transact-SQL)
sys.assembly_modules (Transact-SQL)
EXECUTE (Transact-SQL)
CLR User-Defined Functions
EVENTDATA (Transact-SQL)
CREATE SECURITY POLICY (Transact-SQL)
CREATE FUNCTION (SQL Data Warehouse)

THIS TOPIC APPLIES TO: Azure SQL Data Warehouse, Parallel Data Warehouse
Creates a user-defined function in SQL Data Warehouse. A user-defined function is a Transact-SQL routine that
accepts parameters, performs an action, such as a complex calculation, and returns the result of that action as a
value. The return value must be a scalar (single) value. Use this statement to create a reusable routine that can be
used in these ways:
In Transact-SQL statements such as SELECT
In applications calling the function
In the definition of another user-defined function
To define a CHECK constraint on a column
To replace a stored procedure
Transact-SQL Syntax Conventions

Syntax
--Transact-SQL Scalar Function Syntax
CREATE FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] parameter_data_type
[ = default ] }
[ ,...n ]
]
)
RETURNS return_data_type
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN scalar_expression
END
[ ; ]

<function_option>::=
{
[ SCHEMABINDING ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
}

Arguments
schema_name
Is the name of the schema to which the user-defined function belongs.
function_name
Is the name of the user-defined function. Function names must comply with the rules for identifiers and must be
unique within the database and to its schema.
NOTE
Parentheses are required after the function name even if a parameter is not specified.

@parameter_name
Is a parameter in the user-defined function. One or more parameters can be declared.
A function can have a maximum of 2,100 parameters. The value of each declared parameter must be supplied by
the user when the function is executed, unless a default for the parameter is defined.
Specify a parameter name by using an at sign (@) as the first character. The parameter name must comply with the
rules for identifiers. Parameters are local to the function; the same parameter names can be used in other functions.
Parameters can take the place only of constants; they cannot be used instead of table names, column names, or the
names of other database objects.

NOTE
ANSI_WARNINGS is not honored when you pass parameters in a stored procedure, user-defined function, or when you
declare and set variables in a batch statement. For example, if a variable is defined as char(3), and then set to a value larger
than three characters, the data is truncated to the defined size and the INSERT or UPDATE statement succeeds.

parameter_data_type
Is the parameter data type. For Transact-SQL functions, all scalar data types supported in SQL Data Warehouse are
allowed. The timestamp (rowversion) data type is not a supported type.
[ =default ]
Is a default value for the parameter. If a default value is defined, the function can be executed without specifying a
value for that parameter.
When a parameter of the function has a default value, the keyword DEFAULT must be specified when the function
is called to retrieve the default value. This behavior is different from using parameters with default values in stored
procedures in which omitting the parameter also implies the default value.
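A minimal sketch of this behavior (the function name and default value are hypothetical):

CREATE FUNCTION dbo.ufn_AddTax (@price decimal(10,2), @rate decimal(4,3) = 0.080)
RETURNS decimal(10,2)
AS
BEGIN
    RETURN (@price * (1.0 + @rate));
END;
GO

-- DEFAULT must be written explicitly; the argument cannot simply be omitted.
SELECT dbo.ufn_AddTax(100.00, DEFAULT) AS PriceWithTax;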
return_data_type
Is the return value of a scalar user-defined function. For Transact-SQL functions, all scalar data types supported in
SQL Data Warehouse are allowed. The timestamp (rowversion) data type is not a supported type. The cursor and
table nonscalar types are not allowed.
function_body
Series of Transact-SQL statements. The function_body cannot contain a SELECT statement and cannot reference
database data. The function_body cannot reference tables or views. The function body can call other deterministic
functions but cannot call nondeterministic functions.
In scalar functions, function_body is a series of Transact-SQL statements that together evaluate to a scalar value.
scalar_expression
Specifies the scalar value that the scalar function returns.
<function_option>::=
Specifies that the function will have one or more of the following options.
SCHEMABINDING
Specifies that the function is bound to the database objects that it references. When SCHEMABINDING is
specified, the base objects cannot be modified in a way that would affect the function definition. The function
definition itself must first be modified or dropped to remove dependencies on the object that is to be modified.
The binding of the function to the objects it references is removed only when one of the following actions occurs:
The function is dropped.
The function is modified by using the ALTER statement with the SCHEMABINDING option not specified.
A function can be schema bound only if the following conditions are true:
Any user-defined functions referenced by the function are also schema-bound.
The functions and other UDFs referenced by the function are referenced using a one-part or two-part name.
Only built-in functions and other UDFs in the same database can be referenced within the body of UDFs.
The user who executed the CREATE FUNCTION statement has REFERENCES permission on the database
objects that the function references.
To remove SCHEMABINDING, use ALTER FUNCTION with the SCHEMABINDING option not specified.
RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT
Specifies the OnNULLCall attribute of a scalar-valued function. If not specified, CALLED ON NULL INPUT
is implied by default. This means that the function body executes even if NULL is passed as an argument.

Best Practices
If a user-defined function is not created with the SCHEMABINDING clause, changes that are made to underlying
objects can affect the definition of the function and produce unexpected results when it is invoked. We recommend
that you implement one of the following methods to ensure that the function does not become outdated because
of changes to its underlying objects:
Specify the WITH SCHEMABINDING clause when you are creating the function. This ensures that the objects
referenced in the function definition cannot be modified unless the function is also modified.

Interoperability
The following statements are valid in a function:
Assignment statements.
Control-of-Flow statements except TRY...CATCH statements.
DECLARE statements defining local data variables.

Limitations and Restrictions


User-defined functions cannot be used to perform actions that modify the database state.
User-defined functions can be nested; that is, one user-defined function can call another. The nesting level is
incremented when the called function starts execution, and decremented when the called function finishes
execution. User-defined functions can be nested up to 32 levels. Exceeding the maximum levels of nesting causes
the whole calling function chain to fail.

Metadata

This section lists the system catalog views that you can use to return metadata about user-defined functions.
sys.sql_modules: Displays the definition of Transact-SQL user-defined functions. For example:
SELECT definition, type
FROM sys.sql_modules AS m
JOIN sys.objects AS o
    ON m.object_id = o.object_id
    AND type = 'FN';
GO

sys.parameters: Displays information about the parameters defined in user-defined functions.
sys.sql_expression_dependencies: Displays the underlying objects referenced by a function.

Permissions
Requires CREATE FUNCTION permission in the database and ALTER permission on the schema in which the
function is being created.

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


A. Using a scalar-valued user-defined function to change a data type
This simple function takes an int data type as an input and returns a decimal(10,2) data type as an output.

CREATE FUNCTION dbo.ConvertInput (@MyValueIn int)
RETURNS decimal(10,2)
AS
BEGIN
    DECLARE @MyValueOut decimal(10,2);
    SET @MyValueOut = CAST(@MyValueIn AS decimal(10,2));
    RETURN(@MyValueOut);
END;
GO

SELECT dbo.ConvertInput(15) AS 'ConvertedValue';

See Also
ALTER FUNCTION (SQL Server PDW)
DROP FUNCTION (SQL Server PDW)
CREATE INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse,
Parallel Data Warehouse
Creates a relational index on a table or view. Also called a rowstore index because it is either a clustered or
nonclustered B-tree index. You can create a rowstore index before there is data in the table. Use a rowstore
index to improve query performance, especially when the queries select from specific columns or require
values to be sorted in a particular order.

NOTE
SQL Data Warehouse and Parallel Data Warehouse currently do not support unique constraints. Any examples
referencing unique constraints are only applicable to SQL Server and SQL Database.

TIP
For information on index design guidelines, refer to the SQL Server Index Design Guide.

Simple examples:

-- Create a nonclustered index on a table or view
CREATE INDEX i1 ON t1 (col1);

-- Create a clustered index on a table and use a 3-part name for the table
CREATE CLUSTERED INDEX i1 ON d1.s1.t1 (col1);

-- Syntax for SQL Server and Azure SQL Database
-- Create a nonclustered index with a unique constraint
-- on 3 columns and specify the sort order for each column
CREATE UNIQUE INDEX i1 ON t1 (col1 DESC, col2 ASC, col3 DESC);

Key scenarios:
Starting with SQL Server 2016 (13.x) and SQL Database, use a nonclustered index on a columnstore index
to improve data warehousing query performance. For more information, see Columnstore Indexes - Data
Warehouse.
Need to create a different type of index?
CREATE XML INDEX (Transact-SQL)
CREATE SPATIAL INDEX (Transact-SQL)
CREATE COLUMNSTORE INDEX (Transact-SQL)
Transact-SQL Syntax Conventions

Syntax
Syntax for SQL Server and Azure SQL Database

CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
ON <object> ( column [ ASC | DESC ] [ ,...n ] )
[ INCLUDE ( column_name [ ,...n ] ) ]
[ WHERE <filter_predicate> ]
[ WITH ( <relational_index_option> [ ,...n ] ) ]
[ ON { partition_scheme_name ( column_name )
| filegroup_name
| default
}
]
[ FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | "NULL" } ]

[ ; ]

<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}

<relational_index_option> ::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| STATISTICS_INCREMENTAL = { ON | OFF }
| DROP_EXISTING = { ON | OFF }
| ONLINE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE}
[ ON PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ]
}

<filter_predicate> ::=
<conjunct> [ AND <conjunct> ]

<conjunct> ::=
<disjunct> | <comparison>

<disjunct> ::=
column_name IN (constant ,...n)

<comparison> ::=
column_name <comparison_op> constant

<comparison_op> ::=
{ IS | IS NOT | = | <> | != | > | >= | !> | < | <= | !< }

<range> ::=
<partition_number_expression> TO <partition_number_expression>

Backward Compatible Relational Index

IMPORTANT
The backward compatible relational index syntax structure will be removed in a future version of SQL Server. Avoid using
this syntax structure in new development work, and plan to modify applications that currently use the feature. Use the
syntax structure specified in <relational_index_option> instead.
CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
ON <object> ( column_name [ ASC | DESC ] [ ,...n ] )
[ WITH <backward_compatible_index_option> [ ,...n ] ]
[ ON { filegroup_name | "default" } ]

<object> ::=
{
[ database_name. [ owner_name ] . | owner_name. ]
table_or_view_name
}

<backward_compatible_index_option> ::=
{
PAD_INDEX
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB
| IGNORE_DUP_KEY
| STATISTICS_NORECOMPUTE
| DROP_EXISTING
}

Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

CREATE [ CLUSTERED | NONCLUSTERED ] INDEX index_name
ON [ database_name . [ schema ] . | schema . ] table_name
( { column [ ASC | DESC ] } [ ,...n ] )
WITH ( DROP_EXISTING = { ON | OFF } )
[;]

Arguments
UNIQUE
Creates a unique index on a table or view. A unique index is one in which no two rows are permitted to have
the same index key value. A clustered index on a view must be unique.
The Database Engine does not allow creating a unique index on columns that already include duplicate values,
whether or not IGNORE_DUP_KEY is set to ON. If this is tried, the Database Engine displays an error
message. Duplicate values must be removed before a unique index can be created on the column or columns.
Columns that are used in a unique index should be set to NOT NULL, because multiple null values are
considered duplicates when a unique index is created.
CLUSTERED
Creates an index in which the logical order of the key values determines the physical order of the
corresponding rows in a table. The bottom, or leaf, level of the clustered index contains the actual data rows of
the table. A table or view is allowed one clustered index at a time.
A view with a unique clustered index is called an indexed view. Creating a unique clustered index on a view
physically materializes the view. A unique clustered index must be created on a view before any other indexes
can be defined on the same view. For more information, see Create Indexed Views.
Create the clustered index before creating any nonclustered indexes. Existing nonclustered indexes on tables
are rebuilt when a clustered index is created.
If CLUSTERED is not specified, a nonclustered index is created.
NOTE
Because the leaf level of a clustered index and the data pages are the same by definition, creating a clustered index and
using the ON partition_scheme_name or ON filegroup_name clause effectively moves a table from the filegroup on
which the table was created to the new partition scheme or filegroup. Before creating tables or indexes on specific
filegroups, verify which filegroups are available and that they have enough empty space for the index.

In some cases creating a clustered index can enable previously disabled indexes. For more information, see
Enable Indexes and Constraints and Disable Indexes and Constraints.
NONCLUSTERED
Creates an index that specifies the logical ordering of a table. With a nonclustered index, the physical order of
the data rows is independent of their indexed order.
Each table can have up to 999 nonclustered indexes, regardless of how the indexes are created: either implicitly
with PRIMARY KEY and UNIQUE constraints, or explicitly with CREATE INDEX.
For indexed views, nonclustered indexes can be created only on a view that has a unique clustered index
already defined.
If not otherwise specified, the default index type is NONCLUSTERED.
index_name
Is the name of the index. Index names must be unique within a table or view but do not have to be unique
within a database. Index names must follow the rules of identifiers.
column
Is the column or columns on which the index is based. Specify two or more column names to create a
composite index on the combined values in the specified columns. List the columns to be included in the
composite index, in sort-priority order, inside the parentheses after table_or_view_name.
Up to 32 columns can be combined into a single composite index key. All the columns in a composite index key
must be in the same table or view. The maximum allowable size of the combined index values is 900 bytes for
a clustered index, or 1,700 for a nonclustered index. The limits are 16 columns and 900 bytes for versions
before SQL Database V12 and SQL Server 2016 (13.x).
Columns that are of the large object (LOB) data types ntext, text, varchar(max), nvarchar(max),
varbinary(max), xml, or image cannot be specified as key columns for an index. Also, a view definition
cannot include ntext, text, or image columns, even if they are not referenced in the CREATE INDEX
statement.
You can create indexes on CLR user-defined type columns if the type supports binary ordering. You can also
create indexes on computed columns that are defined as method invocations off a user-defined type column,
as long as the methods are marked deterministic and do not perform data access operations. For more
information about indexing CLR user-defined type columns, see CLR User-defined Types.
[ ASC | DESC ]
Determines the ascending or descending sort direction for the particular index column. The default is ASC.
INCLUDE (column [ ,... n ] )
Specifies the non-key columns to be added to the leaf level of the nonclustered index. The nonclustered index
can be unique or non-unique.
Column names cannot be repeated in the INCLUDE list and cannot be used simultaneously as both key and
non-key columns. Nonclustered indexes always contain the clustered index columns if a clustered index is
defined on the table. For more information, see Create Indexes with Included Columns.
All data types are allowed except text, ntext, and image. The index must be created or rebuilt offline
(ONLINE = OFF) if any one of the specified non-key columns is of the varchar(max), nvarchar(max), or
varbinary(max) data type.
Computed columns that are deterministic and either precise or imprecise can be included columns. Computed
columns derived from image, ntext, text, varchar(max), nvarchar(max), varbinary(max), and xml data
types can be included in non-key columns as long as the computed column data type is allowed as an
included column. For more information, see Indexes on Computed Columns.
For information on creating an XML index, see CREATE XML INDEX (Transact-SQL).
WHERE <filter_predicate>
Creates a filtered index by specifying which rows to include in the index. The filtered index must be a
nonclustered index on a table. Creates filtered statistics for the data rows in the filtered index.
The filter predicate uses simple comparison logic and cannot reference a computed column, a UDT column, a
spatial data type column, or a hierarchyid data type column. Comparisons using NULL literals are not allowed
with the comparison operators. Use the IS NULL and IS NOT NULL operators instead.
Here are some examples of filter predicates for the Production.BillOfMaterials table:
WHERE StartDate > '20000101' AND EndDate <= '20000630'

WHERE ComponentID IN (533, 324, 753)

WHERE StartDate IN ('20000404', '20000905') AND EndDate IS NOT NULL

Filtered indexes do not apply to XML indexes and full-text indexes. For UNIQUE indexes, only the
selected rows must have unique index values. Filtered indexes do not allow the IGNORE_DUP_KEY
option.
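For instance, a filtered index over the EndDate column of the same table might look like the following; the index name is illustrative:

CREATE NONCLUSTERED INDEX FIBillOfMaterialsWithEndDate
    ON Production.BillOfMaterials (ComponentID, StartDate)
    WHERE EndDate IS NOT NULL;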
ON partition_scheme_name ( column_name )
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies the partition scheme that defines the filegroups onto which the partitions of a partitioned index will
be mapped. The partition scheme must exist within the database by executing either CREATE PARTITION
SCHEME or ALTER PARTITION SCHEME. column_name specifies the column against which a partitioned
index will be partitioned. This column must match the data type, length, and precision of the argument of the
partition function that partition_scheme_name is using. column_name is not restricted to the columns in the
index definition. Any column in the base table can be specified, except when partitioning a UNIQUE index,
column_name must be chosen from among those used as the unique key. This restriction allows the Database
Engine to verify uniqueness of key values within a single partition only.

NOTE
When you partition a non-unique, clustered index, the Database Engine by default adds the partitioning column to the
list of clustered index keys, if it is not already specified. When partitioning a non-unique, nonclustered index, the
Database Engine adds the partitioning column as a non-key (included) column of the index, if it is not already specified.

If partition_scheme_name or filegroup is not specified and the table is partitioned, the index is placed in the
same partition scheme, using the same partitioning column, as the underlying table.

NOTE
You cannot specify a partitioning scheme on an XML index. If the base table is partitioned, the XML index uses the same
partition scheme as the table.
For more information about partitioning indexes, see Partitioned Tables and Indexes.
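As a sketch, assuming a partition scheme named TransactionsPS1 already exists and partitions on the TransactionDate column:

CREATE CLUSTERED INDEX IX_TransactionHistory_TransactionDate
    ON Production.TransactionHistory (TransactionDate)
    ON TransactionsPS1 (TransactionDate);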
ON filegroup_name
Applies to: SQL Server 2008 through SQL Server 2017.
Creates the specified index on the specified filegroup. If no location is specified and the table or view is not
partitioned, the index uses the same filegroup as the underlying table or view. The filegroup must already exist.
ON "default"
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Creates the specified index on the default filegroup.
The term default, in this context, is not a keyword. It is an identifier for the default filegroup and must be
delimited, as in ON "default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must
be ON for the current session. This is the default setting. For more information, see SET
QUOTED_IDENTIFIER (Transact-SQL).
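For example, reusing the table from the simple examples above:

SET QUOTED_IDENTIFIER ON;
GO
CREATE NONCLUSTERED INDEX i2 ON t1 (col2) ON "default";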
[ FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | "NULL" } ]
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the placement of FILESTREAM data for the table when a clustered index is created. The
FILESTREAM_ON clause allows FILESTREAM data to be moved to a different FILESTREAM filegroup or
partition scheme.
filestream_filegroup_name is the name of a FILESTREAM filegroup. The filegroup must have one file defined
for the filegroup by using a CREATE DATABASE or ALTER DATABASE statement; otherwise, an error is raised.
If the table is partitioned, the FILESTREAM_ON clause must be included and must specify a partition scheme
of FILESTREAM filegroups that uses the same partition function and partition columns as the partition
scheme for the table. Otherwise, an error is raised.
If the table is not partitioned, the FILESTREAM column cannot be partitioned. FILESTREAM data for the table
must be stored in a single filegroup that is specified in the FILESTREAM_ON clause.
FILESTREAM_ON NULL can be specified in a CREATE INDEX statement if a clustered index is being created
and the table does not contain a FILESTREAM column.
For more information, see FILESTREAM (SQL Server).
<object>::=
Is the fully qualified or nonfully qualified object to be indexed.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table or view belongs.
table_or_view_name
Is the name of the table or view to be indexed.
The view must be defined with SCHEMABINDING to create an index on it. A unique clustered index must be
created on a view before any nonclustered index is created. For more information about indexed views, see the
Remarks section.
Beginning with SQL Server 2016 (13.x), the object can be a table stored with a clustered columnstore index.
Azure SQL Database supports the three-part name format database_name.[schema_name].object_name
when the database_name is the current database or the database_name is tempdb and the object_name starts
with #.
<relational_index_option>::=
Specifies the options to use when you create the index.
PAD_INDEX = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies index padding. The default is OFF.
ON
The percentage of free space that is specified by fillfactor is applied to the intermediate-level pages of the
index.
OFF or fillfactor is not specified
The intermediate-level pages are filled to near capacity, leaving sufficient space for at least one row of the
maximum size the index can have, considering the set of keys on the intermediate pages.
The PAD_INDEX option is useful only when FILLFACTOR is specified, because PAD_INDEX uses the
percentage specified by FILLFACTOR. If the percentage specified for FILLFACTOR is not large enough to
allow for one row, the Database Engine internally overrides the percentage to allow for the minimum. The
number of rows on an intermediate index page is never less than two, regardless of how low the value of
fillfactor is.
In backward compatible syntax, WITH PAD_INDEX is equivalent to WITH PAD_INDEX = ON.
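A sketch combining the two options (object names assumed): leaf pages are filled to 80 percent, and PAD_INDEX = ON reserves the same 20 percent of free space on the intermediate-level pages:

CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    WITH (FILLFACTOR = 80, PAD_INDEX = ON);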
FILLFACTOR = fillfactor
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index
page during index creation or rebuild. fillfactor must be an integer value from 1 to 100. If fillfactor is 100, the
Database Engine creates indexes with leaf pages filled to capacity.
The FILLFACTOR setting applies only when the index is created or rebuilt. The Database Engine does not
dynamically keep the specified percentage of empty space in the pages. To view the fill factor setting, use the
sys.indexes catalog view.

IMPORTANT
Creating a clustered index with a FILLFACTOR less than 100 affects the amount of storage space the data occupies
because the Database Engine redistributes the data when it creates the clustered index.

For more information, see Specify Fill Factor for an Index.


SORT_IN_TEMPDB = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies whether to store temporary sort results in tempdb. The default is OFF.
ON
The intermediate sort results that are used to build the index are stored in tempdb. This may reduce the time
required to create an index if tempdb is on a different set of disks than the user database. However, this
increases the amount of disk space that is used during the index build.
OFF
The intermediate sort results are stored in the same database as the index.
In addition to the space required in the user database to create the index, tempdb must have about the same
amount of additional space to hold the intermediate sort results. For more information, see
SORT_IN_TEMPDB Option For Indexes.
In backward compatible syntax, WITH SORT_IN_TEMPDB is equivalent to WITH SORT_IN_TEMPDB = ON.
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique
index. The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The
option has no effect when executing CREATE INDEX, ALTER INDEX, or UPDATE. The default is OFF.
ON
A warning message will occur when duplicate key values are inserted into a unique index. Only the rows
violating the uniqueness constraint will fail.
OFF
An error message will occur when duplicate key values are inserted into a unique index. The entire INSERT
operation will be rolled back.
IGNORE_DUP_KEY cannot be set to ON for indexes created on a view, non-unique indexes, XML indexes,
spatial indexes, and filtered indexes.
To view IGNORE_DUP_KEY, use sys.indexes.
In backward compatible syntax, WITH IGNORE_DUP_KEY is equivalent to WITH IGNORE_DUP_KEY = ON.
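A sketch (object names assumed); with this index, an INSERT that supplies duplicate Email values raises a warning and rejects only the duplicate rows instead of rolling back the entire statement:

CREATE UNIQUE NONCLUSTERED INDEX AK_Customers_Email
    ON dbo.Customers (Email)
    WITH (IGNORE_DUP_KEY = ON);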
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether distribution statistics are recomputed. The default is OFF.
ON
Out-of-date statistics are not automatically recomputed.
OFF
Automatic statistics updating is enabled.
To restore automatic statistics updating, set the STATISTICS_NORECOMPUTE to OFF, or execute UPDATE
STATISTICS without the NORECOMPUTE clause.

IMPORTANT
Disabling automatic recomputation of distribution statistics may prevent the query optimizer from picking optimal
execution plans for queries involving the table.

In backward compatible syntax, WITH STATISTICS_NORECOMPUTE is equivalent to WITH
STATISTICS_NORECOMPUTE = ON.
STATISTICS_INCREMENTAL = { ON | OFF }
When ON, the statistics created are per partition statistics. When OFF, the statistics tree is dropped and SQL
Server re-computes the statistics. The default is OFF.
If per partition statistics are not supported, the option is ignored and a warning is generated. Incremental stats
are not supported for the following statistics types:
Statistics created with indexes that are not partition-aligned with the base table.
Statistics created on Always On readable secondary databases.
Statistics created on read-only databases.
Statistics created on filtered indexes.
Statistics created on views.
Statistics created on internal tables.
Statistics created with spatial indexes or XML indexes.
DROP_EXISTING = { ON | OFF }
Is an option to drop and rebuild the existing clustered or nonclustered index with modified column
specifications, and keep the same name for the index. The default is OFF.
ON
Specifies to drop and rebuild the existing index, which must have the same name as the parameter
index_name.
OFF
Specifies not to drop and rebuild the existing index. SQL Server displays an error if the specified index name
already exists.
With DROP_EXISTING, you can change:
A nonclustered rowstore index to a clustered rowstore index.
With DROP_EXISTING, you cannot change:
A clustered rowstore index to a nonclustered rowstore index.
A clustered columnstore index to any type of rowstore index.
In backward compatible syntax, WITH DROP_EXISTING is equivalent to WITH DROP_EXISTING = ON.
ONLINE = { ON | OFF }
Specifies whether underlying tables and associated indexes are available for queries and data modification
during the index operation. The default is OFF.

NOTE
Online index operations are not available in every edition of Microsoft SQL Server. For a list of features that are
supported by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016.

ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS) lock is held on the source table. This enables queries or updates to the
underlying table and indexes to proceed. At the start of the operation, a Shared (S) lock is held on the source
object for a very short period of time. At the end of the operation, for a short period of time, an S (Shared) lock
is acquired on the source if a nonclustered index is being created; or an SCH-M (Schema Modification) lock is
acquired when a clustered index is created or dropped online and when a clustered or nonclustered index is
being rebuilt. ONLINE cannot be set to ON when an index is being created on a local temporary table.
OFF
Table locks are applied for the duration of the index operation. An offline index operation that creates, rebuilds,
or drops a clustered index, or rebuilds or drops a nonclustered index, acquires a Schema modification (Sch-M)
lock on the table. This prevents all user access to the underlying table for the duration of the operation. An
offline index operation that creates a nonclustered index acquires a Shared (S ) lock on the table. This prevents
updates to the underlying table but allows read operations, such as SELECT statements.
For more information, see How Online Index Operations Work.
Indexes, including indexes on global temp tables, can be created online with the following exceptions:
XML index
Index on a local temp table.
Initial unique clustered index on a view.
Disabled clustered indexes.
Clustered index if the underlying table contains LOB data types: image, ntext, text, and spatial types.
varchar(max) and varbinary(max) columns cannot be part of an index key. In SQL Server (beginning with
SQL Server 2012 (11.x)) and in SQL Database, when a table contains varchar(max) or varbinary(max)
columns, a clustered index containing other columns can be built or rebuilt using the ONLINE option. SQL
Database does not permit the ONLINE option when the base table contains varchar(max) or
varbinary(max) columns.
For more information, see Perform Index Operations Online.
ALLOW_ROW_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when accessing the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Overrides the max degree of parallelism configuration option for the duration of the index operation. For
more information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP
to limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism can be:
1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel index operation to the specified number or
fewer based on the current system workload.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.
NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are
supported by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016 and Editions and
Supported Features for SQL Server 2017.
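A sketch combining ONLINE with the related build options discussed above (object names assumed; ONLINE = ON requires an edition that supports online index operations):

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (ONLINE = ON, MAXDOP = 4, SORT_IN_TEMPDB = ON);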

DATA_COMPRESSION
Specifies the data compression option for the specified index, partition number, or range of partitions. The
options are as follows:
NONE
Index or specified partitions are not compressed.
ROW
Index or specified partitions are compressed by using row compression.
PAGE
Index or specified partitions are compressed by using page compression.
For more information about compression, see Data Compression.
ON PARTITIONS ( { <partition_number_expression> | <range> } [ ,...n ] )
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies the partitions to which the DATA_COMPRESSION setting applies. If the index is not partitioned, the
ON PARTITIONS argument will generate an error. If the ON PARTITIONS clause is not provided, the
DATA_COMPRESSION option applies to all partitions of a partitioned index.
<partition_number_expression> can be specified in the following ways:
Provide the number for a partition, for example: ON PARTITIONS (2).
Provide the partition numbers for several individual partitions separated by commas, for example: ON
PARTITIONS (1, 5).
Provide both ranges and individual partitions, for example: ON PARTITIONS (2, 4, 6 TO 8).
<range> can be specified as partition numbers separated by the word TO, for example: ON
PARTITIONS (6 TO 8).
To set different types of data compression for different partitions, specify the DATA_COMPRESSION
option more than once, for example:

REBUILD WITH
(
DATA_COMPRESSION = NONE ON PARTITIONS (1),
DATA_COMPRESSION = ROW ON PARTITIONS (2, 4, 6 TO 8),
DATA_COMPRESSION = PAGE ON PARTITIONS (3, 5)
);

Remarks
The CREATE INDEX statement is optimized like any other query. To save on I/O operations, the query
processor may choose to scan another index instead of performing a table scan. The sort operation may be
eliminated in some situations. On multiprocessor computers CREATE INDEX can use more processors to
perform the scan and sort operations associated with creating the index, in the same way as other queries do.
For more information, see Configure Parallel Index Operations.
The create index operation can be minimally logged if the database recovery model is set to either bulk-logged
or simple.
Indexes can be created on a temporary table. When the table is dropped or the session ends, the indexes are
dropped.
Indexes support extended properties.

Clustered Indexes
Creating a clustered index on a table (heap) or dropping and re-creating an existing clustered index requires
additional workspace to be available in the database to accommodate data sorting and a temporary copy of
the original table or existing clustered index data. For more information about clustered indexes, see Create
Clustered Indexes.

Nonclustered Indexes
Beginning with SQL Server 2016 (13.x) and in Azure SQL Database, you can create a nonclustered index on a
table stored as a clustered columnstore index. If you first create a nonclustered index on a table stored as a
heap or clustered index, the index will persist if you later convert the table to a clustered columnstore index. It
is also not necessary to drop the nonclustered index when you rebuild the clustered columnstore index.
Limitations and Restrictions:
The FILESTREAM_ON option is not valid when you create a nonclustered index on a table stored as a
clustered columnstore index.

Unique Indexes
When a unique index exists, the Database Engine checks for duplicate values each time data is added by insert
operations. Insert operations that would generate duplicate key values are rolled back, and the Database
Engine displays an error message. This is true even if the insert operation changes many rows but causes only
one duplicate. If an attempt is made to enter data for which there is a unique index and the
IGNORE_DUP_KEY clause is set to ON, only the rows violating the UNIQUE index fail.

Partitioned Indexes
Partitioned indexes are created and maintained in a similar manner to partitioned tables, but like ordinary
indexes, they are handled as separate database objects. You can have a partitioned index on a table that is not
partitioned, and you can have a nonpartitioned index on a table that is partitioned.
If you are creating an index on a partitioned table, and do not specify a filegroup on which to place the index,
the index is partitioned in the same manner as the underlying table. This is because indexes, by default, are
placed on the same filegroups as their underlying tables, and for a partitioned table in the same partition
scheme that uses the same partitioning columns. When the index uses the same partition scheme and
partitioning column as the table, the index is aligned with the table.

WARNING
Creating and rebuilding nonaligned indexes on a table with more than 1,000 partitions is possible, but is not supported.
Doing so may cause degraded performance or excessive memory consumption during these operations. We recommend
using only aligned indexes when the number of partitions exceed 1,000.

When partitioning a non-unique, clustered index, the Database Engine by default adds any partitioning
columns to the list of clustered index keys, if not already specified.
Indexed views can be created on partitioned tables in the same manner as indexes on tables. For more
information about partitioned indexes, see Partitioned Tables and Indexes.
In SQL Server 2017, statistics are not created by scanning all the rows in the table when a partitioned index is
created or rebuilt. Instead, the query optimizer uses the default sampling algorithm to generate statistics. To
obtain statistics on partitioned indexes by scanning all the rows in the table, use CREATE STATISTICS or
UPDATE STATISTICS with the FULLSCAN clause.

Filtered Indexes
A filtered index is an optimized nonclustered index, suited for queries that select a small percentage of rows
from a table. It uses a filter predicate to index a portion of the data in the table. A well-designed filtered index
can improve query performance, reduce storage costs, and reduce maintenance costs.
Required SET Options for Filtered Indexes
The SET options in the Required Value column are required whenever any of the following conditions occur:
Create a filtered index.
INSERT, UPDATE, DELETE, or MERGE operation modifies the data in a filtered index.
The filtered index is used by the query optimizer to produce the query plan.

SET OPTIONS               REQUIRED VALUE   DEFAULT SERVER VALUE   DEFAULT OLE DB AND ODBC VALUE   DEFAULT DB-LIBRARY VALUE

ANSI_NULLS                ON               ON                     ON                              OFF
ANSI_PADDING              ON               ON                     ON                              OFF
ANSI_WARNINGS*            ON               ON                     ON                              OFF
ARITHABORT                ON               ON                     OFF                             OFF
CONCAT_NULL_YIELDS_NULL   ON               ON                     ON                              OFF
NUMERIC_ROUNDABORT        OFF              OFF                    OFF                             OFF
QUOTED_IDENTIFIER         ON               ON                     ON                              OFF

*Setting ANSI_WARNINGS to ON implicitly sets ARITHABORT to ON when the database compatibility level is
set to 90 or higher. If the database compatibility level is set to 80 or earlier, the ARITHABORT option must
explicitly be set to ON.
If the SET options are incorrect, the following conditions can occur:
The filtered index is not created.
The Database Engine generates an error and rolls back INSERT, UPDATE, DELETE, or MERGE statements
that change data in the index.
The query optimizer does not consider the index in the execution plan for any Transact-SQL statements.
For more information about Filtered Indexes, see Create Filtered Indexes.
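As a sketch, the required settings can be applied in the session before creating a filtered index (the table and column assume the AdventureWorks sample database):

SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;
GO
CREATE NONCLUSTERED INDEX FIX_WorkOrder_ScrappedQty
ON Production.WorkOrder (ScrappedQty)
WHERE ScrappedQty > 0;
GO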
Spatial Indexes
For information about spatial indexes, see CREATE SPATIAL INDEX (Transact-SQL) and Spatial Indexes
Overview.

XML Indexes
For information about XML indexes, see CREATE XML INDEX (Transact-SQL) and XML Indexes (SQL Server).

Index Key Size


The maximum size for an index key is 900 bytes for a clustered index and 1,700 bytes for a nonclustered index.
(Before SQL Database V12 and SQL Server 2016 (13.x) the limit was always 900 bytes.) Indexes on varchar
columns that exceed the byte limit can be created if the existing data in the columns does not exceed the limit at
the time the index is created; however, subsequent insert or update actions on the columns that cause the total
size to be greater than the limit will fail. The index key of a clustered index cannot contain varchar columns
that have existing data in the ROW_OVERFLOW_DATA allocation unit. If a clustered index is created on a
varchar column and the existing data is in the IN_ROW_DATA allocation unit, subsequent insert or update
actions on the column that would push the data off-row will fail.
Nonclustered indexes can include non-key columns in the leaf level of the index. These columns are not
considered by the Database Engine when calculating the index key size. For more information, see Create
Indexes with Included Columns.

NOTE
When tables are partitioned, if the partitioning key columns are not already present in a non-unique clustered index,
they are added to the index by the Database Engine. The combined size of the indexed columns (not counting included
columns), plus any added partitioning columns cannot exceed 1800 bytes in a non-unique clustered index.

Computed Columns
Indexes can be created on computed columns. In addition, computed columns can have the property
PERSISTED. This means that the Database Engine stores the computed values in the table, and updates them
when any other columns on which the computed column depends are updated. The Database Engine uses
these persisted values when it creates an index on the column, and when the index is referenced in a query.
To index a computed column, the computed column must be deterministic and precise. However, using the
PERSISTED property expands the type of indexable computed columns to include:
Computed columns based on Transact-SQL and CLR functions and CLR user-defined type methods that
are marked deterministic by the user.
Computed columns based on expressions that are deterministic as defined by the Database Engine but
imprecise.
Persisted computed columns require the following SET options to be set as shown in the previous
section "Required SET Options for Filtered Indexes".
The UNIQUE or PRIMARY KEY constraint can contain a computed column as long as it satisfies all
conditions for indexing. Specifically, the computed column must be deterministic and precise or
deterministic and persisted. For more information about determinism, see Deterministic and
Nondeterministic Functions.
Computed columns derived from image, ntext, text, varchar(max), nvarchar(max),
varbinary(max), and xml data types can be indexed either as a key or included non-key column as
long as the computed column data type is allowable as an index key column or non-key column. For
example, you cannot create a primary XML index on a computed xml column. If the index key size
exceeds 900 bytes, a warning message is displayed.
Creating an index on a computed column may cause the failure of an insert or update operation that
previously worked. Such a failure may take place when the computed column results in arithmetic error.
For example, in the following table, although computed column c results in an arithmetic error, the
INSERT statement works.

CREATE TABLE t1 (a int, b int, c AS a/b);
INSERT INTO t1 VALUES (1, 0);

If, instead, after creating the table, you create an index on computed column c , the same INSERT statement
will now fail.

CREATE TABLE t1 (a int, b int, c AS a/b);
CREATE UNIQUE CLUSTERED INDEX Idx1 ON t1(c);
INSERT INTO t1 VALUES (1, 0);

For more information, see Indexes on Computed Columns.

Included Columns in Indexes


Non-key columns, called included columns, can be added to the leaf level of a nonclustered index to improve
query performance by covering the query. That is, all columns referenced in the query are included in the index
as either key or non-key columns. This allows the query optimizer to locate all the required information from
an index scan; the table or clustered index data is not accessed. For more information, see Create Indexes with
Included Columns.

Specifying Index Options


SQL Server 2005 introduced new index options and also modified the way in which options are specified. In
backward compatible syntax, WITH option_name is equivalent to WITH ( <option_name> = ON ). When you
set index options, the following rules apply:
New index options can only be specified by using WITH (option_name = ON | OFF).
Options cannot be specified by using both the backward compatible and new syntax in the same statement.
For example, specifying WITH (DROP_EXISTING, ONLINE = ON) causes the statement to fail.
When you create an XML index, the options must be specified by using WITH (option_name= ON | OFF).
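For illustration, the two statements below express the same options, first in the backward compatible syntax and then in the new syntax (dbo.T1 and its columns are hypothetical):

-- Backward compatible syntax
CREATE INDEX IX_C1 ON dbo.T1 (C1) WITH PAD_INDEX, FILLFACTOR = 80;
GO
-- Equivalent new syntax
CREATE INDEX IX_C2 ON dbo.T1 (C2) WITH (PAD_INDEX = ON, FILLFACTOR = 80);
GO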

DROP_EXISTING Clause
You can use the DROP_EXISTING clause to rebuild the index, add or drop columns, modify options, modify
column sort order, or change the partition scheme or filegroup.
If the index enforces a PRIMARY KEY or UNIQUE constraint and the index definition is not altered in any way,
the index is dropped and re-created preserving the existing constraint. However, if the index definition is
altered the statement fails. To change the definition of a PRIMARY KEY or UNIQUE constraint, drop the
constraint and add a constraint with the new definition.
DROP_EXISTING enhances performance when you re-create a clustered index, with either the same or
different set of keys, on a table that also has nonclustered indexes. DROP_EXISTING replaces the execution of
a DROP INDEX statement on the old clustered index followed by the execution of a CREATE INDEX statement
for the new clustered index. The nonclustered indexes are rebuilt once, and then only if the index definition has
changed. The DROP_EXISTING clause does not rebuild the nonclustered indexes when the index definition
has the same index name, key and partition columns, uniqueness attribute, and sort order as the original index.
Whether the nonclustered indexes are rebuilt or not, they always remain in their original filegroups or partition
schemes and use the original partition functions. If a clustered index is rebuilt to a different filegroup or
partition scheme, the nonclustered indexes are not moved to coincide with the new location of the clustered
index. Therefore, even if the nonclustered indexes were previously aligned with the clustered index, they may no
longer be aligned with it. For more information about partitioned index alignment, see Partitioned Tables and Indexes.
The DROP_EXISTING clause will not sort the data again if the same index key columns are used in the same
order and with the same ascending or descending order, unless the index statement specifies a nonclustered
index and the ONLINE option is set to OFF. If the clustered index is disabled, the CREATE INDEX WITH
DROP_EXISTING operation must be performed with ONLINE set to OFF. If a nonclustered index is disabled
and is not associated with a disabled clustered index, the CREATE INDEX WITH DROP_EXISTING operation
can be performed with ONLINE set to OFF or ON.
When indexes with 128 extents or more are dropped or rebuilt, the Database Engine defers the actual page
deallocations, and their associated locks, until after the transaction commits.

ONLINE Option
The following guidelines apply for performing index operations online:
The underlying table cannot be altered, truncated, or dropped while an online index operation is in process.
Additional temporary disk space is required during the index operation.
Online operations can be performed on partitioned indexes and indexes that contain persisted
computed columns, or included columns.
For more information, see Perform Index Operations Online.
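As a sketch (an edition that supports online index operations is assumed; the table is from the AdventureWorks sample):

CREATE NONCLUSTERED INDEX IX_WorkOrder_DueDate
ON Production.WorkOrder (DueDate)
WITH (ONLINE = ON);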

Row and Page Locks Options


When ALLOW_ROW_LOCKS = ON and ALLOW_PAGE_LOCKS = ON, row-, page-, and table-level locks are
allowed when accessing the index. The Database Engine chooses the appropriate lock and can escalate the lock
from a row or page lock to a table lock.
When ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF, only a table-level lock is allowed when
accessing the index.
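For example, to restrict an index to table-level locks only (the table and index names are illustrative):

CREATE NONCLUSTERED INDEX IX_T1_C2
ON dbo.T1 (C2)
WITH (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF);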

Viewing Index Information


To return information about indexes, you can use catalog views, system functions, and system stored
procedures.
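For example, the sys.indexes catalog view lists the indexes on a table (a sketch, using the AdventureWorks Production.WorkOrder table):

SELECT name, type_desc, is_unique
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'Production.WorkOrder');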

Data Compression
Data compression is described in the topic Data Compression. The following are key points to consider:
Compression can allow more rows to be stored on a page, but does not change the maximum row size.
Non-leaf pages of an index are not page compressed but can be row compressed.
Each nonclustered index has an individual compression setting, and does not inherit the compression
setting of the underlying table.
When a clustered index is created on a heap, the clustered index inherits the compression state of the
heap unless an alternative compression state is specified.
The following restrictions apply to partitioned indexes:
You cannot change the compression setting of a single partition if the table has nonaligned indexes.
The ALTER INDEX <index> ... REBUILD PARTITION ... syntax rebuilds the specified partition of the index.
The ALTER INDEX <index> ... REBUILD WITH ... syntax rebuilds all partitions of the index.
To evaluate how changing the compression state will affect a table, an index, or a partition, use the
sp_estimate_data_compression_savings stored procedure.
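For example (a sketch against the AdventureWorks Production.WorkOrder table):

EXEC sp_estimate_data_compression_savings
    @schema_name = 'Production',
    @object_name = 'WorkOrder',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'ROW';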

Permissions
Requires ALTER permission on the table or view. User must be a member of the sysadmin fixed server role or
the db_ddladmin and db_owner fixed database roles.

Limitations and Restrictions


In SQL Data Warehouse and Parallel Data Warehouse, you cannot create:
A clustered or nonclustered rowstore index on a data warehouse table when a columnstore index already
exists. This behavior is different from SMP SQL Server, which allows both rowstore and columnstore
indexes to co-exist on the same table.
An index on a view.

Metadata

To view information on existing indexes, you can query the sys.indexes (Transact-SQL) catalog view.

Version Notes
SQL Database does not support filegroup and filestream options.

Examples: All versions. Uses the AdventureWorks database.


A. Create a simple nonclustered rowstore index
The following examples create a nonclustered index on the VendorID column of the Purchasing.ProductVendor
table.

CREATE INDEX IX_VendorID ON ProductVendor (VendorID);
CREATE INDEX IX_VendorID ON dbo.ProductVendor (VendorID DESC, Name ASC, Address DESC);
CREATE INDEX IX_VendorID ON Purchasing..ProductVendor (VendorID);

B. Create a simple nonclustered rowstore composite index


The following example creates a nonclustered composite index on the SalesQuota and SalesYTD columns of
the Sales.SalesPerson table.

CREATE NONCLUSTERED INDEX IX_SalesPerson_SalesQuota_SalesYTD ON Sales.SalesPerson (SalesQuota, SalesYTD);

C. Create an index on a table in another database


The following example creates a clustered index on the VendorID column of the ProductVendor table in
the Purchasing database.
CREATE CLUSTERED INDEX IX_ProductVendor_VendorID ON Purchasing..ProductVendor (VendorID);

D. Add a column to an index


The following example creates index IX_FF with two columns from the dbo.FactFinance table. The next
statement rebuilds the index with one more column and keeps the existing name.

CREATE INDEX IX_FF ON dbo.FactFinance ( FinanceKey ASC, DateKey ASC );

--Rebuild and add the OrganizationKey
CREATE INDEX IX_FF ON dbo.FactFinance ( FinanceKey, DateKey, OrganizationKey DESC)
WITH ( DROP_EXISTING = ON );

Examples: SQL Server, Azure SQL Database


E. Create a unique nonclustered index
The following example creates a unique nonclustered index on the Name column of the
Production.UnitMeasure table in the AdventureWorks2012 database. The index will enforce uniqueness on the
data inserted into the Name column.

CREATE UNIQUE INDEX AK_UnitMeasure_Name
ON Production.UnitMeasure(Name);

The following query tests the uniqueness constraint by attempting to insert a row with the same value as that
in an existing row.

--Verify the existing value.
SELECT Name FROM Production.UnitMeasure WHERE Name = N'Ounces';
GO
INSERT INTO Production.UnitMeasure (UnitMeasureCode, Name, ModifiedDate)
VALUES ('OC', 'Ounces', GetDate());

The resulting error message is:

Server: Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object 'UnitMeasure' with unique index 'AK_UnitMeasure_Name'. The
statement has been terminated.

F. Use the IGNORE_DUP_KEY option


The following example demonstrates the effect of the IGNORE_DUP_KEY option by inserting multiple rows into a
temporary table first with the option set to ON and again with the option set to OFF . A single row is inserted
into the #Test table that will intentionally cause a duplicate value when the second multiple-row INSERT
statement is executed. A count of rows in the table returns the number of rows inserted.
CREATE TABLE #Test (C1 nvarchar(10), C2 nvarchar(50), C3 datetime);
GO
CREATE UNIQUE INDEX AK_Index ON #Test (C2)
WITH (IGNORE_DUP_KEY = ON);
GO
INSERT INTO #Test VALUES (N'OC', N'Ounces', GETDATE());
INSERT INTO #Test SELECT * FROM Production.UnitMeasure;
GO
SELECT COUNT(*)AS [Number of rows] FROM #Test;
GO
DROP TABLE #Test;
GO

Here are the results of the second INSERT statement.

Server: Msg 3604, Level 16, State 1, Line 5 Duplicate key was ignored.

Number of rows
--------------
38

Notice that the rows inserted from the Production.UnitMeasure table that did not violate the uniqueness
constraint were successfully inserted. A warning was issued and the duplicate row ignored, but the entire
transaction was not rolled back.
The same statements are executed again, but with IGNORE_DUP_KEY set to OFF .

CREATE TABLE #Test (C1 nvarchar(10), C2 nvarchar(50), C3 datetime);
GO
CREATE UNIQUE INDEX AK_Index ON #Test (C2)
WITH (IGNORE_DUP_KEY = OFF);
GO
INSERT INTO #Test VALUES (N'OC', N'Ounces', GETDATE());
INSERT INTO #Test SELECT * FROM Production.UnitMeasure;
GO
SELECT COUNT(*)AS [Number of rows] FROM #Test;
GO
DROP TABLE #Test;
GO

Here are the results of the second INSERT statement.

Server: Msg 2601, Level 14, State 1, Line 5
Cannot insert duplicate key row in object '#Test' with unique index
'AK_Index'. The statement has been terminated.

Number of rows
--------------
1

Notice that none of the rows from the Production.UnitMeasure table were inserted into the table even though
only one row in the table violated the UNIQUE index constraint.
G. Using DROP_EXISTING to drop and re-create an index
The following example drops and re-creates an existing index on the ProductID column of the
Production.WorkOrder table in the AdventureWorks2012 database by using the DROP_EXISTING option. The
options FILLFACTOR and PAD_INDEX are also set.
CREATE NONCLUSTERED INDEX IX_WorkOrder_ProductID
ON Production.WorkOrder(ProductID)
WITH (FILLFACTOR = 80,
PAD_INDEX = ON,
DROP_EXISTING = ON);
GO

H. Create an index on a view


The following example creates a view and an index on that view. Two queries are included that use the indexed
view.

--Set the options to support indexed views.
SET NUMERIC_ROUNDABORT OFF;
SET ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT,
QUOTED_IDENTIFIER, ANSI_NULLS ON;
GO
--Create view with schemabinding.
IF OBJECT_ID ('Sales.vOrders', 'view') IS NOT NULL
DROP VIEW Sales.vOrders ;
GO
CREATE VIEW Sales.vOrders
WITH SCHEMABINDING
AS
SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Revenue,
OrderDate, ProductID, COUNT_BIG(*) AS COUNT
FROM Sales.SalesOrderDetail AS od, Sales.SalesOrderHeader AS o
WHERE od.SalesOrderID = o.SalesOrderID
GROUP BY OrderDate, ProductID;
GO
--Create an index on the view.
CREATE UNIQUE CLUSTERED INDEX IDX_V1
ON Sales.vOrders (OrderDate, ProductID);
GO
--This query can use the indexed view even though the view is
--not specified in the FROM clause.
SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Rev,
OrderDate, ProductID
FROM Sales.SalesOrderDetail AS od
JOIN Sales.SalesOrderHeader AS o ON od.SalesOrderID=o.SalesOrderID
AND ProductID BETWEEN 700 and 800
AND OrderDate >= CONVERT(datetime,'05/01/2002',101)
GROUP BY OrderDate, ProductID
ORDER BY Rev DESC;
GO
--This query can use the above indexed view.
SELECT OrderDate, SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Rev
FROM Sales.SalesOrderDetail AS od
JOIN Sales.SalesOrderHeader AS o ON od.SalesOrderID=o.SalesOrderID
AND DATEPART(mm,OrderDate)= 3
AND DATEPART(yy,OrderDate) = 2002
GROUP BY OrderDate
ORDER BY OrderDate ASC;
GO

I. Create an index with included (non-key) columns


The following example creates a nonclustered index with one key column (PostalCode) and four non-key
columns (AddressLine1, AddressLine2, City, StateProvinceID). A query that is covered by the index follows.
To display the index that is selected by the query optimizer, on the Query menu in SQL Server Management
Studio, select Display Actual Execution Plan before executing the query.
CREATE NONCLUSTERED INDEX IX_Address_PostalCode
ON Person.Address (PostalCode)
INCLUDE (AddressLine1, AddressLine2, City, StateProvinceID);
GO
SELECT AddressLine1, AddressLine2, City, StateProvinceID, PostalCode
FROM Person.Address
WHERE PostalCode BETWEEN N'98000' and N'99999';
GO

J. Create a partitioned index


The following example creates a nonclustered partitioned index on TransactionsPS1, an existing partition
scheme in the AdventureWorks2012 database. This example assumes the partitioned index sample has been
installed.
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.

CREATE NONCLUSTERED INDEX IX_TransactionHistory_ReferenceOrderID
ON Production.TransactionHistory (ReferenceOrderID)
ON TransactionsPS1 (TransactionDate);
GO

K. Creating a filtered index


The following example creates a filtered index on the Production.BillOfMaterials table in the
AdventureWorks2012 database. The filter predicate can include columns that are not key columns in the
filtered index. The predicate in this example selects only the rows where EndDate is non-NULL.

CREATE NONCLUSTERED INDEX "FIBillOfMaterialsWithEndDate"


ON Production.BillOfMaterials (ComponentID, StartDate)
WHERE EndDate IS NOT NULL;

L. Create a compressed index


The following example creates an index on a nonpartitioned table by using row compression.

CREATE NONCLUSTERED INDEX IX_INDEX_1
ON T1 (C2)
WITH ( DATA_COMPRESSION = ROW ) ;
GO

The following example creates an index on a partitioned table by using row compression on all partitions of
the index.

CREATE CLUSTERED INDEX IX_PartTab2Col1
ON PartitionTable1 (Col1)
WITH ( DATA_COMPRESSION = ROW ) ;
GO

The following example creates an index on a partitioned table by using page compression on partition 1 of
the index and row compression on partitions 2 through 4 of the index.
CREATE CLUSTERED INDEX IX_PartTab2Col1
ON PartitionTable1 (Col1)
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS(1),
DATA_COMPRESSION = ROW ON PARTITIONS (2 TO 4 ) ) ;
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


M. Basic syntax

CREATE INDEX IX_VendorID
ON ProductVendor (VendorID);
CREATE INDEX IX_VendorID
ON dbo.ProductVendor (VendorID DESC, Name ASC, Address DESC);
CREATE INDEX IX_VendorID
ON Purchasing..ProductVendor (VendorID);

N. Create a non-clustered index on a table in the current database


The following example creates a non-clustered index on the VendorID column of the ProductVendor table.

CREATE INDEX IX_ProductVendor_VendorID
ON ProductVendor (VendorID);

O. Create a clustered index on a table in another database


The following example creates a clustered index on the VendorID column of the ProductVendor table in
the Purchasing database.

CREATE CLUSTERED INDEX IX_ProductVendor_VendorID
ON Purchasing..ProductVendor (VendorID);

See Also
SQL Server Index Design Guide
Indexes and ALTER TABLE
ALTER INDEX (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
CREATE SPATIAL INDEX (Transact-SQL)
CREATE STATISTICS (Transact-SQL)
CREATE TABLE (Transact-SQL)
CREATE XML INDEX (Transact-SQL)
Data Types (Transact-SQL)
DBCC SHOW_STATISTICS (Transact-SQL)
DROP INDEX (Transact-SQL)
XML Indexes (SQL Server)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
sys.xml_indexes (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE LOGIN (Transact-SQL)

Creates a login for SQL Server, SQL Database, SQL Data Warehouse, or Parallel Data Warehouse databases.
Click one of the following tabs for the syntax, arguments, remarks, permissions, and examples for a particular
version.
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.
SQL Server
SQL Database
SQL Data Warehouse
SQL Parallel Data Warehouse

Syntax
-- Syntax for SQL Server
CREATE LOGIN login_name { WITH <option_list1> | FROM <sources> }

<option_list1> ::=
PASSWORD = { 'password' | hashed_password HASHED } [ MUST_CHANGE ]
[ , <option_list2> [ ,... ] ]

<option_list2> ::=
SID = sid
| DEFAULT_DATABASE = database
| DEFAULT_LANGUAGE = language
| CHECK_EXPIRATION = { ON | OFF}
| CHECK_POLICY = { ON | OFF}
| CREDENTIAL = credential_name

<sources> ::=
WINDOWS [ WITH <windows_options>[ ,... ] ]
| CERTIFICATE certname
| ASYMMETRIC KEY asym_key_name

<windows_options> ::=
DEFAULT_DATABASE = database
| DEFAULT_LANGUAGE = language

Arguments
login_name
Specifies the name of the login that is created. There are four types of logins: SQL Server logins, Windows logins,
certificate-mapped logins, and asymmetric key-mapped logins. When you are creating logins that are mapped
from a Windows domain account, you must use the pre-Windows 2000 user logon name in the format
[<domainName>\<login_name>]. You cannot use a UPN in the format login_name@DomainName. For an
example, see example D later in this article. Authentication logins are type sysname and must conform to the
rules for Identifiers and cannot contain a '\'. Windows logins can contain a '\'. Logins based on Active Directory
users are limited to names of less than 21 characters.
PASSWORD ='password' Applies to SQL Server logins only. Specifies the password for the login that is being
created. You should use a strong password. For more information, see Strong Passwords and Password Policy.
Beginning with SQL Server 2012 (11.x), stored password information is calculated using SHA-512 of the salted
password.
Passwords are case-sensitive. Passwords should always be at least 8 characters long, and cannot exceed 128
characters. Passwords can include a-z, A-Z, 0-9, and most non-alphanumeric characters. Passwords cannot
contain single quotes, or the login_name.
PASSWORD =hashed_password
Applies to the HASHED keyword only. Specifies the hashed value of the password for the login that is being
created.
HASHED Applies to SQL Server logins only. Specifies that the password entered after the PASSWORD
argument is already hashed. If this option is not selected, the string entered as password is hashed before it is
stored in the database. This option should only be used for migrating databases from one server to another. Do
not use the HASHED option to create new logins. The HASHED option cannot be used with hashes created by
SQL 7 or earlier.
MUST_CHANGE Applies to SQL Server logins only. If this option is included, SQL Server prompts the user for a
new password the first time the new login is used.
CREDENTIAL =credential_name
The name of a credential to be mapped to the new SQL Server login. The credential must already exist in the
server. Currently this option only links the credential to a login. A credential cannot be mapped to the System
Administrator (sa) login.
SID = sid
Used to recreate a login. Applies to SQL Server authentication logins only, not Windows authentication logins.
Specifies the SID of the new SQL Server authentication login. If this option is not used, SQL Server
automatically assigns a SID. The SID structure depends on the SQL Server version. SQL Server login SID: a 16
byte (binary(16)) literal value based on a GUID. For example, SID = 0x14585E90117152449347750164BA00A7 .
DEFAULT_DATABASE =database
Specifies the default database to be assigned to the login. If this option is not included, the default database is set
to master.
DEFAULT_LANGUAGE =language
Specifies the default language to be assigned to the login. If this option is not included, the default language is set
to the current default language of the server. If the default language of the server is later changed, the default
language of the login remains unchanged.
CHECK_EXPIRATION = { ON | OFF }
Applies to SQL Server logins only. Specifies whether password expiration policy should be enforced on this
login. The default value is OFF.
CHECK_POLICY = { ON | OFF }
Applies to SQL Server logins only. Specifies that the Windows password policies of the computer on which SQL
Server is running should be enforced on this login. The default value is ON.
If the Windows policy requires strong passwords, passwords must contain at least three of the following four
characteristics:
An uppercase character (A-Z).
A lowercase character (a-z).
A digit (0-9).
One of the non-alphanumeric characters, such as a space, _, @, *, ^, %, !, $, #, or &.
WINDOWS
Specifies that the login be mapped to a Windows login.
CERTIFICATE certname
Specifies the name of a certificate to be associated with this login. This certificate must already occur in the
master database.
ASYMMETRIC KEY asym_key_name
Specifies the name of an asymmetric key to be associated with this login. This key must already occur in the
master database.

Remarks
Passwords are case-sensitive.
Prehashing of passwords is supported only when you are creating SQL Server logins.
If MUST_CHANGE is specified, CHECK_EXPIRATION and CHECK_POLICY must be set to ON. Otherwise,
the statement will fail.
A combination of CHECK_POLICY = OFF and CHECK_EXPIRATION = ON is not supported.
When CHECK_POLICY is set to OFF, lockout_time is reset and CHECK_EXPIRATION is set to OFF.

IMPORTANT
CHECK_EXPIRATION and CHECK_POLICY are only enforced on Windows Server 2003 and later. For more information, see
Password Policy.

Logins created from certificates or asymmetric keys are used only for code signing. They cannot be used to
connect to SQL Server. You can create a login from a certificate or asymmetric key only when the certificate or
asymmetric key already exists in master.
For a script to transfer logins, see How to transfer the logins and the passwords between instances of SQL
Server 2005 and SQL Server 2008.
Creating a login automatically enables the new login and grants the login the server level CONNECT SQL
permission.
The server's authentication mode must match the login type to permit access.
For information about designing a permissions system, see Getting Started with Database Engine
Permissions.

Permissions
Only users with ALTER ANY LOGIN permission on the server or membership in the securityadmin fixed
server role can create logins. For more information, see Server-Level Roles, ALTER SERVER ROLE, and
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-manage-logins#additional-server-level-administrative-roles.
If the CREDENTIAL option is used, also requires ALTER ANY CREDENTIAL permission on the server.

After creating a login


After creating a login, the login can connect to SQL Server, but only has the permissions granted to the public
role. Consider performing some of the following activities.
To connect to a database, create a database user for the login. For more information, see CREATE USER.
Create a user-defined server role by using CREATE SERVER ROLE. Use ALTER SERVER ROLE … ADD
MEMBER to add the new login to the user-defined server role. For more information, see CREATE
SERVER ROLE and ALTER SERVER ROLE.
Use sp_addsrvrolemember to add the login to a fixed server role. For more information, see Server-
Level Roles and sp_addsrvrolemember.
Use the GRANT statement, to grant server-level permissions to the new login or to a role containing the
login. For more information, see GRANT.
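As a sketch, these steps can be combined as follows (the login name, database name, and granted permission are illustrative):

USE master;
GO
CREATE LOGIN ReportWriter WITH PASSWORD = '<enterStrongPasswordHere>';
GRANT VIEW SERVER STATE TO ReportWriter; -- server-level permission, granted from master
GO
USE AdventureWorks2012; -- any user database
GO
CREATE USER ReportWriter FOR LOGIN ReportWriter; -- database user mapped to the login
GO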

Examples
A. Creating a login with a password
The following example creates a login for a particular user and assigns a password.

CREATE LOGIN <login_name> WITH PASSWORD = '<enterStrongPasswordHere>';
GO

B. Creating a login with a password that must be changed


The following example creates a login for a particular user and assigns a password. The MUST_CHANGE option
requires users to change this password the first time they connect to the server.
Applies to: SQL Server 2008 through SQL Server 2017.

CREATE LOGIN <login_name> WITH PASSWORD = '<enterStrongPasswordHere>'
MUST_CHANGE, CHECK_EXPIRATION = ON;
GO

NOTE
The MUST_CHANGE option cannot be used when CHECK_EXPIRATION is OFF.

C. Creating a login mapped to a credential


The following example creates a login for a particular user and maps the login to an existing credential.
Applies to: SQL Server 2008 through SQL Server 2017.

CREATE LOGIN <login_name> WITH PASSWORD = '<enterStrongPasswordHere>',
CREDENTIAL = <credentialName>;
GO

D. Creating a login from a certificate


The following example creates login for a particular user from a certificate in master.
Applies to: SQL Server 2008 through SQL Server 2017.

USE MASTER;
CREATE CERTIFICATE <certificateName>
WITH SUBJECT = '<login_name> certificate in master database',
EXPIRY_DATE = '12/05/2025';
GO
CREATE LOGIN <login_name> FROM CERTIFICATE <certificateName>;
GO

E. Creating a login from a Windows domain account


The following example creates a login from a Windows domain account.
Applies to: SQL Server 2008 through SQL Server 2017.
CREATE LOGIN [<domainName>\<login_name>] FROM WINDOWS;
GO

F. Creating a login from a SID


The following example first creates a SQL Server authentication login and determines the SID of the login.

CREATE LOGIN TestLogin WITH PASSWORD = 'SuperSecret52&&';
SELECT name, sid FROM sys.sql_logins WHERE name = 'TestLogin';
GO

My query returns 0x241C11948AEEB749B0D22646DB1A19F2 as the SID. Your query will return a different
value. The following statements delete the login, and then recreate the login. Use the SID from your previous
query.

DROP LOGIN TestLogin;
GO

CREATE LOGIN TestLogin
WITH PASSWORD = 'SuperSecret52&&', SID = 0x241C11948AEEB749B0D22646DB1A19F2;

SELECT * FROM sys.sql_logins WHERE name = 'TestLogin';
GO

See Also
Getting Started with Database Engine Permissions
Principals (Database Engine)
Password Policy
ALTER LOGIN
DROP LOGIN
EVENTDATA
Create a Login
CREATE MASTER KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a database master key.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Parallel Data Warehouse
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password'
[ ; ]

-- Syntax for Azure SQL Database and Azure SQL Data Warehouse
CREATE MASTER KEY [ ENCRYPTION BY PASSWORD = 'password' ]
[ ; ]

Arguments
PASSWORD ='password'
Is the password that is used to encrypt the master key in the database. password must meet the Windows
password policy requirements of the computer that is running the instance of SQL Server. password is optional
in SQL Database and SQL Data Warehouse.

Remarks
The database master key is a symmetric key used to protect the private keys of certificates and asymmetric keys
that are present in the database. When it is created, the master key is encrypted by using the AES_256 algorithm
and a user-supplied password. In SQL Server 2008 and SQL Server 2008 R2, the Triple DES algorithm is used.
To enable the automatic decryption of the master key, a copy of the key is encrypted by using the service master
key and stored in both the database and in master. Typically, the copy stored in master is silently updated
whenever the master key is changed. This default can be changed by using the DROP ENCRYPTION BY
SERVICE MASTER KEY option of ALTER MASTER KEY. A master key that is not encrypted by the service
master key must be opened by using the OPEN MASTER KEY statement and a password.
The is_master_key_encrypted_by_server column of the sys.databases catalog view in master indicates whether
the database master key is encrypted by the service master key.
Information about the database master key is visible in the sys.symmetric_keys catalog view.
For SQL Server and Parallel Data Warehouse, the Master Key is typically protected by the Service Master Key
and at least one password. In case the database is physically moved to a different server (log shipping,
restoring a backup, etc.), the database will contain a copy of the Master Key encrypted by the original server's
Service Master Key (unless this encryption was explicitly removed using ALTER MASTER KEY DDL), and a copy
of it encrypted by each password specified during either CREATE MASTER KEY or subsequent ALTER MASTER
KEY DDL operations. To recover the Master Key, and all the data encrypted using the Master Key as the
root in the key hierarchy after the database has been moved, the user will have to either use the OPEN MASTER KEY
statement with one of the passwords used to protect the Master Key, restore a backup of the Master Key, or
restore a backup of the original Service Master Key on the new server.
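As a sketch, after such a move the Master Key can be re-encrypted by the new server's Service Master Key by opening it with one of its passwords (the password shown is a placeholder):

OPEN MASTER KEY DECRYPTION BY PASSWORD = '<masterKeyPassword>';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;
GO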
For SQL Database and SQL Data Warehouse, the password protection is not considered to be a safety
mechanism to prevent a data loss scenario in situations where the database may be moved from one server to
another, as the Service Master Key protection on the Master Key is managed by the Microsoft Azure platform.
Therefore, the Master Key password is optional in SQL Database and SQL Data Warehouse.

IMPORTANT
You should back up the master key by using BACKUP MASTER KEY and store the backup in a secure, off-site location.

The service master key and database master keys are protected by using the AES-256 algorithm.

Permissions
Requires CONTROL permission on the database.

Examples
The following example creates a database master key for the current database. The key is encrypted using the
password 23987hxJ#KL95234nl0zBe .

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '23987hxJ#KL95234nl0zBe';
GO

See Also
sys.symmetric_keys (Transact-SQL)
sys.databases (Transact-SQL)
OPEN MASTER KEY (Transact-SQL)
ALTER MASTER KEY (Transact-SQL)
DROP MASTER KEY (Transact-SQL)
CLOSE MASTER KEY (Transact-SQL)
Encryption Hierarchy
CREATE MESSAGE TYPE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new message type. A message type defines the name of a message and the validation that Service
Broker performs on messages that have that name. Both sides of a conversation must define the same message
types.
Transact-SQL Syntax Conventions

Syntax
CREATE MESSAGE TYPE message_type_name
[ AUTHORIZATION owner_name ]
[ VALIDATION = { NONE
| EMPTY
| WELL_FORMED_XML
| VALID_XML WITH SCHEMA COLLECTION schema_collection_name
} ]
[ ; ]

Arguments
message_type_name
Is the name of the message type to create. A new message type is created in the current database and owned by
the principal specified in the AUTHORIZATION clause. Server, database, and schema names cannot be specified.
The message_type_name can be up to 128 characters.
AUTHORIZATION owner_name
Sets the owner of the message type to the specified database user or role. When the current user is dbo or sa,
owner_name can be the name of any valid user or role. Otherwise, owner_name must be the name of the current
user, the name of a user who the current user has IMPERSONATE permission for, or the name of a role to which
the current user belongs. When this clause is omitted, the message type belongs to the current user.
VALIDATION
Specifies how Service Broker validates the message body for messages of this type. When this clause is not
specified, validation defaults to NONE.
NONE
Specifies that no validation is performed. The message body can contain data, or it can be NULL.
EMPTY
Specifies that the message body must be NULL.
WELL_FORMED_XML
Specifies that the message body must contain well-formed XML.
VALID_XML WITH SCHEMA COLLECTION schema_collection_name
Specifies that the message body must contain XML that complies with a schema in the specified schema collection.
The schema_collection_name must be the name of an existing XML schema collection.
Remarks
Service Broker validates incoming messages. When a message contains a message body that does not comply
with the validation type specified, Service Broker discards the invalid message and returns an error message to the
service that sent the message.
Both sides of a conversation must define the same name for a message type. To help troubleshooting, both sides
of a conversation typically specify the same validation for the message type, although Service Broker does not
require that both sides of the conversation use the same validation.
A message type cannot be a temporary object. Message type names starting with # are allowed, but are
permanent objects.

Permissions
Permission for creating a message type defaults to members of the db_ddladmin or db_owner fixed database
roles and the sysadmin fixed server role.
REFERENCES permission for a message type defaults to the owner of the message type, members of the
db_owner fixed database role, and members of the sysadmin fixed server role.
When the CREATE MESSAGE TYPE statement specifies a schema collection, the user executing the statement
must have REFERENCES permission on the schema collection specified.

Examples
A. Creating a message type containing well-formed XML
The following example creates a new message type that contains well-formed XML.

CREATE MESSAGE TYPE
[//Adventure-Works.com/Expenses/SubmitExpense]
VALIDATION = WELL_FORMED_XML;

B. Creating a message type containing typed XML


The following example creates a message type for an expense report encoded in XML. The example creates an
XML schema collection that holds the schema for a simple expense report. The example then creates a new
message type that validates messages against the schema.
CREATE XML SCHEMA COLLECTION ExpenseReportSchema AS
N'<?xml version="1.0" encoding="UTF-16" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://Adventure-Works.com/schemas/expenseReport"
xmlns:expense="http://Adventure-Works.com/schemas/expenseReport"
elementFormDefault="qualified"
>
<xsd:complexType name="expenseReportType">
<xsd:sequence>
<xsd:element name="EmployeeName" type="xsd:string"/>
<xsd:element name="EmployeeID" type="xsd:string"/>
<xsd:element name="ItemDetail"
type="expense:ItemDetailType" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>

<xsd:complexType name="ItemDetailType">
<xsd:sequence>
<xsd:element name="Date" type="xsd:date"/>
<xsd:element name="CostCenter" type="xsd:string"/>
<xsd:element name="Total" type="xsd:decimal"/>
<xsd:element name="Currency" type="xsd:string"/>
<xsd:element name="Description" type="xsd:string"/>
</xsd:sequence>
</xsd:complexType>

<xsd:element name="ExpenseReport" type="expense:expenseReportType"/>

</xsd:schema>' ;

CREATE MESSAGE TYPE
[//Adventure-Works.com/Expenses/SubmitExpense]
VALIDATION = VALID_XML WITH SCHEMA COLLECTION ExpenseReportSchema;

C. Creating a message type for an empty message


The following example creates a new message type with empty encoding.

CREATE MESSAGE TYPE
[//Adventure-Works.com/Expenses/SubmitExpense]
VALIDATION = EMPTY;

D. Creating a message type containing binary data


The following example creates a new message type to hold binary data. Because the message will contain data that
is not XML, the message type specifies a validation type of NONE . Notice that, in this case, the application that
receives a message of this type must verify that the message contains data, and that the data is of the type
expected.

CREATE MESSAGE TYPE
[//Adventure-Works.com/Expenses/ReceiptImage]
VALIDATION = NONE;

See Also
ALTER MESSAGE TYPE (Transact-SQL)
DROP MESSAGE TYPE (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a function in the current database that maps the rows of a table or index into partitions based on the
values of a specified column. Using CREATE PARTITION FUNCTION is the first step in creating a partitioned
table or index. In SQL Server 2017, a table or index can have a maximum of 15,000 partitions.
Transact-SQL Syntax Conventions

Syntax
CREATE PARTITION FUNCTION partition_function_name ( input_parameter_type )
AS RANGE [ LEFT | RIGHT ]
FOR VALUES ( [ boundary_value [ ,...n ] ] )
[ ; ]

Arguments
partition_function_name
Is the name of the partition function. Partition function names must be unique within the database and comply
with the rules for identifiers.
input_parameter_type
Is the data type of the column used for partitioning. All data types are valid for use as partitioning columns,
except text, ntext, image, xml, timestamp, varchar(max), nvarchar(max), varbinary(max), alias data types,
or CLR user-defined data types.
The actual column, known as a partitioning column, is specified in the CREATE TABLE or CREATE INDEX
statement.
boundary_value
Specifies the boundary values for each partition of a partitioned table or index that uses partition_function_name.
If boundary_value is empty, the partition function maps the whole table or index using partition_function_name
into a single partition. Only one partitioning column, specified in a CREATE TABLE or CREATE INDEX statement,
can be used.
boundary_value is a constant expression that can reference variables. This includes user-defined type variables, or
functions and user-defined functions. It cannot reference Transact-SQL expressions. boundary_value must either
match or be implicitly convertible to the data type supplied in input_parameter_type, and cannot be truncated
during implicit conversion in a way that the size and scale of the value does not match that of its corresponding
input_parameter_type.
NOTE
If boundary_value consists of datetime or smalldatetime literals, these literals are evaluated assuming that us_english is
the session language. This behavior is deprecated. To make sure the partition function definition behaves as expected for all
session languages, we recommend that you use constants that are interpreted the same way for all language settings, such
as the yyyymmdd format; or explicitly convert literals to a specific style. To determine the language session of your server,
run SELECT @@LANGUAGE .

...n
Specifies the number of values supplied by boundary_value, not to exceed 14,999. The number of partitions
created is equal to n + 1. The values do not have to be listed in order. If the values are not in order, the Database
Engine sorts them, creates the function, and returns a warning that the values are not provided in order. The
Database Engine returns an error if n includes any duplicate values.
LEFT | RIGHT
Specifies to which side of each boundary value interval, left or right, the boundary_value [ ,...n ] belongs, when
interval values are sorted by the Database Engine in ascending order from left to right. If not specified, LEFT is
the default.

Remarks
The scope of a partition function is limited to the database that it is created in. Within the database, partition
functions reside in a separate namespace from the other functions.
Any rows whose partitioning column has null values are placed in the left-most partition, unless NULL is
specified as a boundary value and RIGHT is indicated. In this case, the left-most partition is an empty partition,
and NULL values are placed in the following partition.
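As a sketch of that RANGE RIGHT case, NULL can itself be specified as a boundary value; with the hypothetical function below, NULL values land in the second partition and the first partition stays empty:

CREATE PARTITION FUNCTION pfNullDemo (int)
AS RANGE RIGHT FOR VALUES (NULL, 10);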

Permissions
Any one of the following permissions can be used to execute CREATE PARTITION FUNCTION:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition function is being created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition function is being created.

Examples
A. Creating a RANGE LEFT partition function on an int column
The following partition function will partition a table or index into four partitions.

CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);

The following table shows how a table that uses this partition function on partitioning column col1 would be
partitioned.

PARTITION   1           2                          3                             4
Values      col1 <= 1   col1 > 1 AND col1 <= 100   col1 > 100 AND col1 <= 1000   col1 > 1000

B. Creating a RANGE RIGHT partition function on an int column


The following partition function uses the same values for boundary_value [ ,...n ] as the previous example, except
it specifies RANGE RIGHT.

CREATE PARTITION FUNCTION myRangePF2 (int)
AS RANGE RIGHT FOR VALUES (1, 100, 1000);

The following table shows how a table that uses this partition function on partitioning column col1 would be
partitioned.

PARTITION   1          2                          3                             4
Values      col1 < 1   col1 >= 1 AND col1 < 100   col1 >= 100 AND col1 < 1000   col1 >= 1000

C. Creating a RANGE RIGHT partition function on a datetime column


The following partition function partitions a table or index into 12 partitions, one for each month of a year's
worth of values in a datetime column.

CREATE PARTITION FUNCTION [myDateRangePF1] (datetime)
AS RANGE RIGHT FOR VALUES ('20030201', '20030301', '20030401',
'20030501', '20030601', '20030701', '20030801',
'20030901', '20031001', '20031101', '20031201');

The following table shows how a table or index that uses this partition function on partitioning column datecol
would be partitioned.

PARTITION   1                            2                                                          ...   11                                                            12
Values      datecol < February 1, 2003   datecol >= February 1, 2003 AND datecol < March 1, 2003    ...   datecol >= November 1, 2003 AND datecol < December 1, 2003   datecol >= December 1, 2003

D. Creating a partition function on a char column


The following partition function partitions a table or index into four partitions.

CREATE PARTITION FUNCTION myRangePF3 (char(20))
AS RANGE RIGHT FOR VALUES ('EX', 'RXE', 'XR');

The following table shows how a table that uses this partition function on partitioning column col1 would be
partitioned.
PARTITION   1              2                              3                               4
Values      col1 < EX...   col1 >= EX AND col1 < RXE...   col1 >= RXE AND col1 < XR...    col1 >= XR

E. Creating 15,000 partitions


The following partition function partitions a table or index into 15,000 partitions.

--Create integer partition function for 15,000 partitions.
DECLARE @IntegerPartitionFunction nvarchar(max) =
N'CREATE PARTITION FUNCTION IntegerPartitionFunction (int)
AS RANGE RIGHT FOR VALUES (';
DECLARE @i int = 1;
WHILE @i < 14999
BEGIN
SET @IntegerPartitionFunction += CAST(@i as nvarchar(10)) + N', ';
SET @i += 1;
END
SET @IntegerPartitionFunction += CAST(@i as nvarchar(10)) + N');';
EXEC sp_executesql @IntegerPartitionFunction;
GO

F. Creating partitions for multiple years


The following partition function partitions a table or index into 50 partitions on a datetime2 column. There is
one partition for each month between January 2007 and January 2011.

--Create date partition function with increment by month.
DECLARE @DatePartitionFunction nvarchar(max) =
N'CREATE PARTITION FUNCTION DatePartitionFunction (datetime2)
AS RANGE RIGHT FOR VALUES (';
DECLARE @i datetime2 = '20070101';
WHILE @i < '20110101'
BEGIN
SET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10)) + '''' + N', ';
SET @i = DATEADD(MM, 1, @i);
END
SET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10))+ '''' + N');';
EXEC sp_executesql @DatePartitionFunction;
GO

See Also
Partitioned Tables and Indexes
$PARTITION (Transact-SQL)
ALTER PARTITION FUNCTION (Transact-SQL)
DROP PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
CREATE TABLE (Transact-SQL)
CREATE INDEX (Transact-SQL)
ALTER INDEX (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.partition_functions (Transact-SQL)
sys.partition_parameters (Transact-SQL)
sys.partition_range_values (Transact-SQL)
sys.partitions (Transact-SQL)
sys.tables (Transact-SQL)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a scheme in the current database that maps the partitions of a partitioned table or index to filegroups.
The number and domain of the partitions of a partitioned table or index are determined in a partition function.
A partition function must first be created in a CREATE PARTITION FUNCTION statement before creating a
partition scheme.

NOTE
In Azure SQL Database only primary filegroups are supported.

Transact-SQL Syntax Conventions

Syntax
CREATE PARTITION SCHEME partition_scheme_name
AS PARTITION partition_function_name
[ ALL ] TO ( { file_group_name | [ PRIMARY ] } [ ,...n ] )
[ ; ]

Arguments
partition_scheme_name
Is the name of the partition scheme. Partition scheme names must be unique within the database and comply
with the rules for identifiers.
partition_function_name
Is the name of the partition function using the partition scheme. Partitions created by the partition function are
mapped to the filegroups specified in the partition scheme. partition_function_name must already exist in the
database. A single partition cannot contain both FILESTREAM and non-FILESTREAM filegroups.
ALL
Specifies that all partitions map to the filegroup provided in file_group_name, or to the primary filegroup if
[PRIMARY ] is specified. If ALL is specified, only one file_group_name can be specified.
file_group_name | [ PRIMARY ] [ ,...n]
Specifies the names of the filegroups to hold the partitions specified by partition_function_name.
file_group_name must already exist in the database.
If [PRIMARY ] is specified, the partition is stored on the primary filegroup. If ALL is specified, only one
file_group_name can be specified. Partitions are assigned to filegroups, starting with partition 1, in the order in
which the filegroups are listed in [,...n]. The same file_group_name can be specified more than one time in [,...n].
If n is not sufficient to hold the number of partitions specified in partition_function_name, CREATE PARTITION
SCHEME fails with an error.
If partition_function_name generates fewer partitions than filegroups, the first unassigned filegroup is marked
NEXT USED, and an information message displays naming the NEXT USED filegroup. If ALL is specified, the
sole file_group_name maintains its NEXT USED property for this partition_function_name. The NEXT USED
filegroup will receive an additional partition if one is created in an ALTER PARTITION FUNCTION statement.
To create additional unassigned filegroups to hold new partitions, use ALTER PARTITION SCHEME.
When you specify the primary filegroup in file_group_name [ 1,...n], PRIMARY must be delimited, as in
[PRIMARY ], because it is a keyword.
Only PRIMARY is supported for SQL Database. See example E below.

Permissions
The following permissions can be used to execute CREATE PARTITION SCHEME:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed
server role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition scheme is being created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition scheme is being created.

Examples
A. Creating a partition scheme that maps each partition to a different filegroup
The following example creates a partition function to partition a table or index into four partitions. A partition
scheme is then created that specifies the filegroups to hold each one of the four partitions. This example
assumes the filegroups already exist in the database.

CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS1
AS PARTITION myRangePF1
TO (test1fg, test2fg, test3fg, test4fg);

The partitions of a table that uses partition function myRangePF1 on partitioning column col1 would be assigned
as shown in the following table.

Filegroup   test1fg     test2fg                    test3fg                       test4fg
Partition   1           2                          3                             4
Values      col1 <= 1   col1 > 1 AND col1 <= 100   col1 > 100 AND col1 <= 1000   col1 > 1000

B. Creating a partition scheme that maps multiple partitions to the same filegroup
If all the partitions map to the same filegroup, use the ALL keyword. But if multiple, but not all, partitions are
mapped to the same filegroup, the filegroup name must be repeated, as shown in the following example.
CREATE PARTITION FUNCTION myRangePF2 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS2
AS PARTITION myRangePF2
TO ( test1fg, test1fg, test1fg, test2fg );

The partitions of a table that uses partition function myRangePF2 on partitioning column col1 would be assigned
as shown in the following table.

Filegroup   test1fg     test1fg                    test1fg                       test2fg
Partition   1           2                          3                             4
Values      col1 <= 1   col1 > 1 AND col1 <= 100   col1 > 100 AND col1 <= 1000   col1 > 1000

C. Creating a partition scheme that maps all partitions to the same filegroup
The following example creates the same partition function as in the previous examples, and a partition scheme
is created that maps all partitions to the same filegroup.

CREATE PARTITION FUNCTION myRangePF3 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS3
AS PARTITION myRangePF3
ALL TO ( test1fg );

D. Creating a partition scheme that specifies a 'NEXT USED' filegroup


The following example creates the same partition function as in the previous examples, and a partition scheme
is created that lists more filegroups than there are partitions created by the associated partition function.

CREATE PARTITION FUNCTION myRangePF4 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS4
AS PARTITION myRangePF4
TO (test1fg, test2fg, test3fg, test4fg, test5fg);

Executing the statement returns the following message.

Partition scheme 'myRangePS4' has been created successfully. 'test5fg' is marked as the next used filegroup in
partition scheme 'myRangePS4'.
If partition function myRangePF4 is changed to add a partition, filegroup test5fg receives the newly created
partition.
E. Creating a partition schema only on PRIMARY - only PRIMARY is supported for SQL Database
The following example creates a partition function to partition a table or index into four partitions. A partition
scheme is then created that specifies that all partitions are created in the PRIMARY filegroup.
CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS1
AS PARTITION myRangePF1
ALL TO ( [PRIMARY] );

See Also
CREATE PARTITION FUNCTION (Transact-SQL)
ALTER PARTITION SCHEME (Transact-SQL)
DROP PARTITION SCHEME (Transact-SQL)
EVENTDATA (Transact-SQL)
Create Partitioned Tables and Indexes
sys.partition_schemes (Transact-SQL)
sys.data_spaces (Transact-SQL)
sys.destination_data_spaces (Transact-SQL)
sys.partitions (Transact-SQL)
sys.tables (Transact-SQL)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
CREATE PROCEDURE (Transact-SQL)
5/3/2018 • 33 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a Transact-SQL or common language runtime (CLR ) stored procedure in SQL Server, Azure SQL
Database, Azure SQL Data Warehouse and Parallel Data Warehouse. Stored procedures are similar to procedures
in other programming languages in that they can:
Accept input parameters and return multiple values in the form of output parameters to the calling
procedure or batch.
Contain programming statements that perform operations in the database, including calling other
procedures.
Return a status value to a calling procedure or batch to indicate success or failure (and the reason for
failure).
Use this statement to create a permanent procedure in the current database or a temporary procedure in
the tempdb database.

NOTE
The integration of .NET Framework CLR into SQL Server is discussed in this topic. CLR integration does not apply to Azure
SQL Database.

Jump to Simple Examples to skip the details of the syntax and get to a quick example of a basic stored procedure.
Transact-SQL Syntax Conventions

Syntax
-- Transact-SQL Syntax for Stored Procedures in SQL Server and Azure SQL Database

CREATE [ OR ALTER ] { PROC | PROCEDURE }
    [schema_name.] procedure_name [ ; number ]
    [ { @parameter [ type_schema_name. ] data_type }
        [ VARYING ] [ = default ] [ OUT | OUTPUT ] [ READONLY ]
    ] [ ,...n ]
[ WITH <procedure_option> [ ,...n ] ]
[ FOR REPLICATION ]
AS { [ BEGIN ] sql_statement [;] [ ...n ] [ END ] }
[;]

<procedure_option> ::=
[ ENCRYPTION ]
[ RECOMPILE ]
[ EXECUTE AS Clause ]
-- Transact-SQL Syntax for CLR Stored Procedures

CREATE [ OR ALTER ] { PROC | PROCEDURE }
    [schema_name.] procedure_name [ ; number ]
    [ { @parameter [ type_schema_name. ] data_type }
        [ = default ] [ OUT | OUTPUT ] [ READONLY ]
    ] [ ,...n ]
[ WITH EXECUTE AS Clause ]
AS { EXTERNAL NAME assembly_name.class_name.method_name }
[;]

-- Transact-SQL Syntax for Natively Compiled Stored Procedures

CREATE [ OR ALTER ] { PROC | PROCEDURE } [schema_name.] procedure_name
    [ { @parameter data_type } [ NULL | NOT NULL ] [ = default ]
        [ OUT | OUTPUT ] [ READONLY ]
    ] [ ,... n ]
WITH NATIVE_COMPILATION, SCHEMABINDING [ , EXECUTE AS clause ]
AS
{
BEGIN ATOMIC WITH (set_option [ ,... n ] )
sql_statement [;] [ ... n ]
[ END ]
}
[;]

<set_option> ::=
LANGUAGE = [ N ] 'language'
| TRANSACTION ISOLATION LEVEL = { SNAPSHOT | REPEATABLE READ | SERIALIZABLE }
| [ DATEFIRST = number ]
| [ DATEFORMAT = format ]
| [ DELAYED_DURABILITY = { OFF | ON } ]

-- Transact-SQL Syntax for Stored Procedures in Azure SQL Data Warehouse
-- and Parallel Data Warehouse

-- Create a stored procedure
CREATE { PROC | PROCEDURE } [ schema_name.] procedure_name
    [ { @parameter data_type } [ OUT | OUTPUT ] ] [ ,...n ]
AS { [ BEGIN ] sql_statement [;][ ,...n ] [ END ] }
[;]

Arguments
OR ALTER
Applies to: Azure SQL Database, SQL Server (starting with SQL Server 2016 (13.x) SP1).
Alters the procedure if it already exists.
schema_name
The name of the schema to which the procedure belongs. Procedures are schema-bound. If a schema name is not
specified when the procedure is created, the default schema of the user who is creating the procedure is
automatically assigned.
procedure_name
The name of the procedure. Procedure names must comply with the rules for identifiers and must be unique
within the schema.
Avoid the use of the sp_ prefix when naming procedures. This prefix is used by SQL Server to designate system
procedures. Using the prefix can cause application code to break if there is a system procedure with the same
name.
Local or global temporary procedures can be created by using one number sign (#) before procedure_name
(#procedure_name) for local temporary procedures, and two number signs for global temporary procedures
(##procedure_name). A local temporary procedure is visible only to the connection that created it and is dropped
when that connection is closed. A global temporary procedure is available to all connections and is dropped at the
end of the last session using the procedure. Temporary names cannot be specified for CLR procedures.
The complete name for a procedure or a global temporary procedure, including ##, cannot exceed 128 characters.
The complete name for a local temporary procedure, including #, cannot exceed 116 characters.
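For illustration, a minimal sketch of a local temporary procedure (the procedure name is hypothetical); it is visible only to the connection that creates it:

CREATE PROCEDURE #usp_WhoAmI
AS
SET NOCOUNT ON;
SELECT SUSER_SNAME() AS LoginName;  -- any simple body works here
GO

EXEC #usp_WhoAmI;  -- dropped automatically when this connection closes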
; number
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
An optional integer that is used to group procedures of the same name. These grouped procedures can be
dropped together by using one DROP PROCEDURE statement.

NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature.

Numbered procedures cannot use the xml or CLR user-defined types and cannot be used in a plan guide.
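Because this deprecated feature still appears in older code, here is a hedged sketch of a numbered procedure group (names hypothetical):

CREATE PROCEDURE dbo.usp_Report;1
AS SELECT 'Summary' AS ReportType;
GO
CREATE PROCEDURE dbo.usp_Report;2
AS SELECT 'Detail' AS ReportType;
GO
-- One DROP removes the whole group.
DROP PROCEDURE dbo.usp_Report;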
@parameter
A parameter declared in the procedure. Specify a parameter name by using the at sign (@) as the first character.
The parameter name must comply with the rules for identifiers. Parameters are local to the procedure; the same
parameter names can be used in other procedures.
One or more parameters can be declared; the maximum is 2,100. The value of each declared parameter must be
supplied by the user when the procedure is called unless a default value for the parameter is defined or the value
is set to equal another parameter. If a procedure contains table-valued parameters, and the parameter is missing
in the call, an empty table is passed in. Parameters can take the place only of constant expressions; they cannot be
used instead of table names, column names, or the names of other database objects. For more information, see
EXECUTE (Transact-SQL ).
Parameters cannot be declared if FOR REPLICATION is specified.
[ type_schema_name. ] data_type
The data type of the parameter and the schema to which the data type belongs.
Guidelines for Transact-SQL procedures:
All Transact-SQL data types can be used as parameters.
You can use the user-defined table type to create table-valued parameters. Table-valued parameters can
only be INPUT parameters and must be accompanied by the READONLY keyword. For more information,
see Use Table-Valued Parameters (Database Engine)
cursor data types can only be OUTPUT parameters and must be accompanied by the VARYING keyword.
Guidelines for CLR procedures:
All of the native SQL Server data types that have an equivalent in managed code can be used as
parameters. For more information about the correspondence between CLR types and SQL Server system
data types, see Mapping CLR Parameter Data. For more information about SQL Server system data types
and their syntax, see Data Types (Transact-SQL ).
Table-valued or cursor data types cannot be used as parameters.
If the data type of the parameter is a CLR user-defined type, you must have EXECUTE permission on the
type.
VARYING
Specifies the result set supported as an output parameter. This parameter is dynamically constructed by the
procedure and its contents may vary. Applies only to cursor parameters. This option is not valid for CLR
procedures.
default
A default value for a parameter. If a default value is defined for a parameter, the procedure can be executed
without specifying a value for that parameter. The default value must be a constant or it can be NULL. The
constant value can be in the form of a wildcard, making it possible to use the LIKE keyword when passing the
parameter into the procedure.
Default values are recorded in the sys.parameters.default column only for CLR procedures. That column is
NULL for Transact-SQL procedure parameters.
OUT | OUTPUT
Indicates that the parameter is an output parameter. Use OUTPUT parameters to return values to the caller of the
procedure. text, ntext, and image parameters cannot be used as OUTPUT parameters, unless the procedure is a
CLR procedure. An output parameter can be a cursor placeholder, unless the procedure is a CLR procedure. A
table-value data type cannot be specified as an OUTPUT parameter of a procedure.
READONLY
Indicates that the parameter cannot be updated or modified within the body of the procedure. If the parameter
type is a table-value type, READONLY must be specified.
RECOMPILE
Indicates that the Database Engine does not cache a query plan for this procedure, forcing it to be compiled each
time it is executed. For more information regarding the reasons for forcing a recompile, see Recompile a Stored
Procedure. This option cannot be used when FOR REPLICATION is specified or for CLR procedures.
To instruct the Database Engine to discard query plans for individual queries inside a procedure, use the
RECOMPILE query hint in the definition of the query. For more information, see Query Hints (Transact-SQL ).
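As a sketch of that alternative (the table and column names are hypothetical), the hint recompiles only the query it is attached to, not the whole procedure:

CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID int
AS
SET NOCOUNT ON;
SELECT OrderID, OrderDate
FROM dbo.Orders                -- hypothetical table
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);            -- discard the plan for this statement only
GO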
ENCRYPTION
Applies to: SQL Server ( SQL Server 2008 through SQL Server 2017), Azure SQL Database.
Indicates that SQL Server converts the original text of the CREATE PROCEDURE statement to an obfuscated
format. The output of the obfuscation is not directly visible in any of the catalog views in SQL Server. Users who
have no access to system tables or database files cannot retrieve the obfuscated text. However, the text is available
to privileged users who can either access system tables over the DAC port or directly access database files. Also,
users who can attach a debugger to the server process can retrieve the decrypted procedure from memory at
runtime. For more information about accessing system metadata, see Metadata Visibility Configuration.
This option is not valid for CLR procedures.
Procedures created with this option cannot be published as part of SQL Server replication.
EXECUTE AS clause
Specifies the security context under which to execute the procedure.
For natively compiled stored procedures, starting with SQL Server 2016 (13.x) and in Azure SQL Database, there are no limitations on the EXECUTE AS clause. In SQL Server 2014 (12.x) the SELF, OWNER, and 'user_name' clauses are supported with natively compiled stored procedures.
For more information, see EXECUTE AS Clause (Transact-SQL ).
FOR REPLICATION
Applies to: SQL Server ( SQL Server 2008 through SQL Server 2017), Azure SQL Database.
Specifies that the procedure is created for replication. Consequently, it cannot be executed on the Subscriber. A
procedure created with the FOR REPLICATION option is used as a procedure filter and is executed only during
replication. Parameters cannot be declared if FOR REPLICATION is specified. FOR REPLICATION cannot be
specified for CLR procedures. The RECOMPILE option is ignored for procedures created with FOR
REPLICATION.
A FOR REPLICATION procedure has an object type RF in sys.objects and sys.procedures.
{ [ BEGIN ] sql_statement [;] [ ...n ] [ END ] }
One or more Transact-SQL statements comprising the body of the procedure. You can use the optional BEGIN
and END keywords to enclose the statements. For information, see the Best Practices, General Remarks, and
Limitations and Restrictions sections that follow.
EXTERNAL NAME assembly_name.class_name.method_name
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies the method of a .NET Framework assembly for a CLR procedure to reference. class_name must be a
valid SQL Server identifier and must exist as a class in the assembly. If the class has a namespace-qualified name
that uses a period (.) to separate namespace parts, the class name must be delimited by using brackets ([]) or
quotation marks (""). The specified method must be a static method of the class.
By default, SQL Server cannot execute CLR code. You can create, modify, and drop database objects that
reference common language runtime modules; however, you cannot execute these references in SQL Server until
you enable the clr enabled option. To enable the option, use sp_configure.
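For example, enabling the option at the server level:

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO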

NOTE
CLR procedures are not supported in a contained database.

ATOMIC WITH
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates atomic stored procedure execution. Changes are either committed or all of the changes rolled back by
throwing an exception. The ATOMIC WITH block is required for natively compiled stored procedures.
If the procedure RETURNs (explicitly through the RETURN statement, or implicitly by completing execution), the
work performed by the procedure is committed. If the procedure THROWs, the work performed by the
procedure is rolled back.
XACT_ABORT is ON by default inside an atomic block and cannot be changed. XACT_ABORT specifies whether
SQL Server automatically rolls back the current transaction when a Transact-SQL statement raises a run-time
error.
The following SET options are always ON in the ATOMIC block; the options cannot be changed.
CONCAT_NULL_YIELDS_NULL
QUOTED_IDENTIFIER, ARITHABORT
NOCOUNT
ANSI_NULLS
ANSI_WARNINGS
SET options cannot be changed inside ATOMIC blocks. The SET options in the user session are not used in the
scope of natively compiled stored procedures. These options are fixed at compile time.
BEGIN, ROLLBACK, and COMMIT operations cannot be used inside an atomic block.
There is one ATOMIC block per natively compiled stored procedure, at the outer scope of the procedure. The
blocks cannot be nested. For more information about atomic blocks, see Natively Compiled Stored Procedures.
NULL | NOT NULL
Determines whether null values are allowed in a parameter. NULL is the default.
NATIVE_COMPILATION
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates that the procedure is natively compiled. NATIVE_COMPILATION, SCHEMABINDING, and EXECUTE
AS can be specified in any order. For more information, see Natively Compiled Stored Procedures.
SCHEMABINDING
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Ensures that tables that are referenced by a procedure cannot be dropped or altered. SCHEMABINDING is
required in natively compiled stored procedures. (For more information, see Natively Compiled Stored
Procedures.) The SCHEMABINDING restrictions are the same as they are for user-defined functions. For more
information, see the SCHEMABINDING section in CREATE FUNCTION (Transact-SQL ).
LANGUAGE = [N] 'language'
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Equivalent to the SET LANGUAGE (Transact-SQL) session option. LANGUAGE = [N] 'language' is required.
TRANSACTION ISOLATION LEVEL
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Required for natively compiled stored procedures. Specifies the transaction isolation level for the stored
procedure. The options are as follows:
For more information about these options, see SET TRANSACTION ISOLATION LEVEL (Transact-SQL).
REPEATABLE READ
Specifies that statements cannot read data that has been modified but not yet committed by other transactions. If
another transaction modifies data that has been read by the current transaction, the current transaction fails.
SERIALIZABLE
Specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
If another transaction modifies data that has been read by the current transaction, the current transaction
fails.
If another transaction inserts new rows with key values that would fall in the range of keys read by any
statements in the current transaction, the current transaction fails.
SNAPSHOT
Specifies that data read by any statement in a transaction is the transactionally consistent version of the data that
existed at the start of the transaction.
DATEFIRST = number
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies the first day of the week to a number from 1 through 7. DATEFIRST is optional. If it is not specified, the
setting is inferred from the specified language.
For more information, see SET DATEFIRST (Transact-SQL ).
DATEFORMAT = format
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies the order of the month, day, and year date parts for interpreting date, smalldatetime, datetime,
datetime2 and datetimeoffset character strings. DATEFORMAT is optional. If it is not specified, the setting is
inferred from the specified language.
For more information, see SET DATEFORMAT (Transact-SQL ).
DELAYED_DURABILITY = { OFF | ON }
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
SQL Server transaction commits can be either fully durable, the default, or delayed durable.
For more information, see Control Transaction Durability.

Simple Examples
To help you get started, here are two quick examples:
SELECT DB_NAME() AS ThisDB; returns the name of the current database.
You can wrap that statement in a stored procedure, such as:

CREATE PROC What_DB_is_this
AS
SELECT DB_NAME() AS ThisDB;

Call the stored procedure with the statement: EXEC What_DB_is_this;

Slightly more complex is to provide an input parameter to make the procedure more flexible. For example:

CREATE PROC What_DB_is_that @ID int
AS
SELECT DB_NAME(@ID) AS ThatDB;

Provide a database id number when you call the procedure. For example, EXEC What_DB_is_that 2; returns
tempdb .

See Examples towards the end of this topic for many more examples.

Best Practices
Although this is not an exhaustive list of best practices, these suggestions may improve procedure performance; a combined sketch illustrating several of them follows the list.
Use the SET NOCOUNT ON statement as the first statement in the body of the procedure. That is, place it
just after the AS keyword. This turns off messages that SQL Server sends back to the client after any
SELECT, INSERT, UPDATE, MERGE, and DELETE statements are executed. Overall performance of the
database and application is improved by eliminating this unnecessary network overhead. For information,
see SET NOCOUNT (Transact-SQL ).
Use schema names when creating or referencing database objects in the procedure. It takes less
processing time for the Database Engine to resolve object names if it does not have to search multiple
schemas. It also prevents permission and access problems caused by a user’s default schema being
assigned when objects are created without specifying the schema.
Avoid wrapping functions around columns specified in the WHERE and JOIN clauses. Doing so makes the
columns non-deterministic and prevents the query processor from using indexes.
Avoid using scalar functions in SELECT statements that return many rows of data. Because the scalar
function must be applied to every row, the resulting behavior is like row-based processing and degrades
performance.
Avoid the use of SELECT * . Instead, specify the required column names. This can prevent some Database
Engine errors that stop procedure execution. For example, a SELECT * statement that returns data from a
12 column table and then inserts that data into a 12 column temporary table succeeds until the number or
order of columns in either table is changed.
Avoid processing or returning too much data. Narrow the results as early as possible in the procedure
code so that any subsequent operations performed by the procedure are done using the smallest data set
possible. Send just the essential data to the client application. It is more efficient than sending extra data
across the network and forcing the client application to work through unnecessarily large result sets.
Use explicit transactions by using BEGIN/COMMIT TRANSACTION and keep transactions as short as
possible. Longer transactions mean longer record locking and a greater potential for deadlocking.
Use the Transact-SQL TRY…CATCH feature for error handling inside a procedure. TRY…CATCH can
encapsulate an entire block of Transact-SQL statements. This not only creates less performance overhead,
it also makes error reporting more accurate with significantly less programming.
Use the DEFAULT keyword on all table columns that are referenced by CREATE TABLE or ALTER TABLE
Transact-SQL statements in the body of the procedure. This prevents passing NULL to columns that do
not allow null values.
Use NULL or NOT NULL for each column in a temporary table. The ANSI_DFLT_ON and
ANSI_DFLT_OFF options control the way the Database Engine assigns the NULL or NOT NULL attributes
to columns when these attributes are not specified in a CREATE TABLE or ALTER TABLE statement. If a
connection executes a procedure with different settings for these options than the connection that created
the procedure, the columns of the table created for the second connection can have different nullability and
exhibit different behavior. If NULL or NOT NULL is explicitly stated for each column, the temporary tables
are created by using the same nullability for all connections that execute the procedure.
Use modification statements that convert nulls and include logic that eliminates rows with null values from
queries. Be aware that in Transact-SQL, NULL is not an empty or "nothing" value. It is a placeholder for an
unknown value and can cause unexpected behavior, especially when querying for result sets or using
AGGREGATE functions.
Use the UNION ALL operator instead of the UNION or OR operators, unless there is a specific need for
distinct values. The UNION ALL operator requires less processing overhead because duplicates are not
filtered out of the result set.
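The following hedged sketch (table and column names are hypothetical) combines several of these suggestions: a schema-qualified name, SET NOCOUNT ON as the first statement, an explicit column list, a filter that leaves the column unwrapped, and TRY…CATCH error handling:

CREATE PROCEDURE dbo.usp_GetRecentOrders   -- schema-qualified name
    @CustomerID int
AS
SET NOCOUNT ON;                            -- first statement in the body
BEGIN TRY
    SELECT OrderID, OrderDate              -- explicit columns, no SELECT *
    FROM dbo.Orders                        -- hypothetical table
    WHERE CustomerID = @CustomerID
      AND OrderDate >= DATEADD(MONTH, -1, GETDATE());  -- no function wrapped around the column
END TRY
BEGIN CATCH
    THROW;                                 -- report the error to the caller
END CATCH;
GO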

General Remarks
There is no predefined maximum size of a procedure.
Variables specified in the procedure can be user-defined or system variables, such as @@SPID.
When a procedure is executed for the first time, it is compiled to determine an optimal access plan to retrieve the
data. Subsequent executions of the procedure may reuse the plan already generated if it still remains in the plan
cache of the Database Engine.
One or more procedures can execute automatically when SQL Server starts. The procedures must be created by
the system administrator in the master database and executed under the sysadmin fixed server role as a
background process. The procedures cannot have any input or output parameters. For more information, see
Execute a Stored Procedure.
Procedures are nested when one procedure calls another or executes managed code by referencing a CLR routine,
type, or aggregate. Procedures and managed code references can be nested up to 32 levels. The nesting level
increases by one when the called procedure or managed code reference begins execution and decreases by one
when the called procedure or managed code reference completes execution. Methods invoked from within the
managed code do not count against the nesting level limit. However, when a CLR stored procedure performs data
access operations through the SQL Server managed provider, an additional nesting level is added in the
transition from managed code to SQL.
Attempting to exceed the maximum nesting level causes the entire calling chain to fail. You can use the
@@NESTLEVEL function to return the nesting level of the current stored procedure execution.
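A minimal sketch of the nesting counter (procedure names hypothetical):

CREATE PROCEDURE dbo.usp_Inner
AS
SELECT @@NESTLEVEL AS NestingLevel;   -- returns 2 when called through usp_Outer
GO
CREATE PROCEDURE dbo.usp_Outer
AS
EXEC dbo.usp_Inner;
GO
EXEC dbo.usp_Outer;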

Interoperability
The Database Engine saves the settings of both SET QUOTED_IDENTIFIER and SET ANSI_NULLS when a
Transact-SQL procedure is created or modified. These original settings are used when the procedure is executed.
Therefore, any client session settings for SET QUOTED_IDENTIFIER and SET ANSI_NULLS are ignored when
the procedure is running.
Other SET options, such as SET ARITHABORT, SET ANSI_WARNINGS, or SET ANSI_PADDING are not saved
when a procedure is created or modified. If the logic of the procedure depends on a particular setting, include a
SET statement at the start of the procedure to guarantee the appropriate setting. When a SET statement is
executed from a procedure, the setting remains in effect only until the procedure has finished running. The setting
is then restored to the value it had when the procedure was called. This enables individual clients to set the
options they want without affecting the logic of the procedure.
Any SET statement can be specified inside a procedure, except SET SHOWPLAN_TEXT and SET
SHOWPLAN_ALL. These must be the only statements in the batch. The SET option chosen remains in effect
during the execution of the procedure and then reverts to its former setting.

NOTE
SET ANSI_WARNINGS is not honored when passing parameters in a procedure, user-defined function, or when declaring
and setting variables in a batch statement. For example, if a variable is defined as char(3), and then set to a value larger
than three characters, the data is truncated to the defined size and the INSERT or UPDATE statement succeeds.
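A minimal sketch of the truncation behavior the note describes:

DECLARE @c char(3);
SET @c = 'abcdef';   -- longer than the declared size
SELECT @c;           -- returns 'abc'; the assignment succeeds without an error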

Limitations and Restrictions


The CREATE PROCEDURE statement cannot be combined with other Transact-SQL statements in a single batch.
The following statements cannot be used anywhere in the body of a stored procedure.

CREATE AGGREGATE              CREATE SCHEMA               SET SHOWPLAN_TEXT
CREATE DEFAULT                CREATE or ALTER TRIGGER     SET SHOWPLAN_XML
CREATE or ALTER FUNCTION      CREATE or ALTER VIEW        USE database_name
CREATE or ALTER PROCEDURE     SET PARSEONLY
CREATE RULE                   SET SHOWPLAN_ALL


A procedure can reference tables that do not yet exist. At creation time, only syntax checking is performed. The
procedure is not compiled until it is executed for the first time. Only during compilation are all objects referenced
in the procedure resolved. Therefore, a syntactically correct procedure that references tables that do not exist can
be created successfully; however, the procedure fails at execution time if the referenced tables do not exist.
You cannot specify a function name as a parameter default value or as the value passed to a parameter when
executing a procedure. However, you can pass a function as a variable as shown in the following example.

-- Passing the function value as a variable.
DECLARE @CheckDate datetime = GETDATE();
EXEC dbo.uspGetWhereUsedProductID 819, @CheckDate;
GO

If the procedure makes changes on a remote instance of SQL Server, the changes cannot be rolled back. Remote
procedures do not take part in transactions.
For the Database Engine to reference the correct method when it is overloaded in the .NET Framework, the
method specified in the EXTERNAL NAME clause must have the following characteristics:
Be declared as a static method.
Receive the same number of parameters as the number of parameters of the procedure.
Use parameter types that are compatible with the data types of the corresponding parameters of the SQL
Server procedure. For information about matching SQL Server data types to the .NET Framework data
types, see Mapping CLR Parameter Data.

Metadata

The following table lists the catalog views and dynamic management views that you can use to return
information about stored procedures.

VIEW                                  DESCRIPTION

sys.sql_modules                       Returns the definition of a Transact-SQL procedure. The text of a
                                      procedure created with the ENCRYPTION option cannot be viewed by
                                      using the sys.sql_modules catalog view.

sys.assembly_modules                  Returns information about a CLR procedure.

sys.parameters                        Returns information about the parameters that are defined in a
                                      procedure.

sys.sql_expression_dependencies       Return the objects that are referenced by a procedure.
sys.dm_sql_referenced_entities
sys.dm_sql_referencing_entities

To estimate the size of a compiled procedure, use the following Performance Monitor Counters.

PERFORMANCE MONITOR OBJECT NAME       PERFORMANCE MONITOR COUNTER NAME

SQLServer: Plan Cache                 Cache Hit Ratio

                                      Cache Pages

                                      Cache Object Counts*

*These counters are available for various categories of cache objects including ad hoc Transact-SQL, prepared
Transact-SQL, procedures, triggers, and so on. For more information, see SQL Server, Plan Cache Object.

Security
Permissions
Requires CREATE PROCEDURE permission in the database and ALTER permission on the schema in which the
procedure is being created, or requires membership in the db_ddladmin fixed database role.
For CLR stored procedures, requires ownership of the assembly referenced in the EXTERNAL NAME clause, or
REFERENCES permission on that assembly.

CREATE PROCEDURE and Memory-Optimized Tables


Memory-optimized tables can be accessed through both traditional and natively compiled stored procedures.
Native procedures are in most cases the more efficient way. For more information, see Natively Compiled Stored
Procedures.
The following sample shows how to create a natively compiled stored procedure that accesses a memory-
optimized table dbo.Departments :

CREATE PROCEDURE dbo.usp_add_kitchen @dept_id int, @kitchen_count int NOT NULL
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

UPDATE dbo.Departments
SET kitchen_count = ISNULL(kitchen_count, 0) + @kitchen_count
WHERE id = @dept_id
END;
GO

A procedure created without NATIVE_COMPILATION cannot be altered to a natively compiled stored procedure.
For a discussion of programmability in natively compiled stored procedures, supported query surface area, and
operators see Supported Features for Natively Compiled T-SQL Modules.

Examples
CATEGORY                                      FEATURED SYNTAX ELEMENTS

Basic Syntax                                  CREATE PROCEDURE

Passing parameters                            @parameter
                                              • = default
                                              • OUTPUT
                                              • table-valued parameter type
                                              • CURSOR VARYING

Modifying data by using a stored procedure    UPDATE

Error Handling                                TRY…CATCH

Obfuscating the procedure definition          WITH ENCRYPTION

Forcing the Procedure to Recompile            WITH RECOMPILE

Setting the Security Context                  EXECUTE AS

Basic Syntax
Examples in this section demonstrate the basic functionality of the CREATE PROCEDURE statement using the
minimum required syntax.
A. Creating a simple Transact-SQL procedure
The following example creates a stored procedure that returns all employees (first and last names supplied), their
job titles, and their department names from a view in the AdventureWorks2012 database. This procedure does
not use any parameters. The example then demonstrates three methods of executing the procedure.

CREATE PROCEDURE HumanResources.uspGetAllEmployees
AS
SET NOCOUNT ON;
SELECT LastName, FirstName, JobTitle, Department
FROM HumanResources.vEmployeeDepartment;
GO

SELECT * FROM HumanResources.vEmployeeDepartment;

The uspGetAllEmployees procedure can be executed in the following ways:

EXECUTE HumanResources.uspGetAllEmployees;
GO
-- Or
EXEC HumanResources.uspGetAllEmployees;
GO
-- Or, if this procedure is the first statement within a batch:
HumanResources.uspGetAllEmployees;

B. Returning more than one result set


The following procedure returns two result sets.

CREATE PROCEDURE dbo.uspMultipleResults
AS
SELECT TOP(10) BusinessEntityID, Lastname, FirstName FROM Person.Person;
SELECT TOP(10) CustomerID, AccountNumber FROM Sales.Customer;
GO

C. Creating a CLR stored procedure


The following example creates the GetPhotoFromDB procedure that references the GetPhotoFromDB method of the
LargeObjectBinary class in the HandlingLOBUsingCLR assembly. Before the procedure is created, the
HandlingLOBUsingCLR assembly is registered in the local database.

Applies to: SQL Server 2008 through SQL Server 2017, SQL Database (if using an assembly created from
assembly_bits).
CREATE ASSEMBLY HandlingLOBUsingCLR
FROM '\\MachineName\HandlingLOBUsingCLR\bin\Debug\HandlingLOBUsingCLR.dll';
GO
CREATE PROCEDURE dbo.GetPhotoFromDB
(
@ProductPhotoID int,
@CurrentDirectory nvarchar(1024),
@FileName nvarchar(1024)
)
AS EXTERNAL NAME HandlingLOBUsingCLR.LargeObjectBinary.GetPhotoFromDB;
GO

Passing Parameters
Examples in this section demonstrate how to use input and output parameters to pass values to and from a
stored procedure.
D. Creating a procedure with input parameters
The following example creates a stored procedure that returns information for a specific employee by passing
values for the employee's first name and last name. This procedure accepts only exact matches for the parameters
passed.

IF OBJECT_ID ( 'HumanResources.uspGetEmployees', 'P' ) IS NOT NULL
    DROP PROCEDURE HumanResources.uspGetEmployees;
GO
CREATE PROCEDURE HumanResources.uspGetEmployees
@LastName nvarchar(50),
@FirstName nvarchar(50)
AS
SET NOCOUNT ON;
SELECT FirstName, LastName, JobTitle, Department
FROM HumanResources.vEmployeeDepartment
WHERE FirstName = @FirstName AND LastName = @LastName;
GO

The uspGetEmployees procedure can be executed in the following ways:

EXECUTE HumanResources.uspGetEmployees N'Ackerman', N'Pilar';
-- Or
EXEC HumanResources.uspGetEmployees @LastName = N'Ackerman', @FirstName = N'Pilar';
GO
-- Or
EXECUTE HumanResources.uspGetEmployees @FirstName = N'Pilar', @LastName = N'Ackerman';
GO
-- Or, if this procedure is the first statement within a batch:
HumanResources.uspGetEmployees N'Ackerman', N'Pilar';

E. Using a procedure with wildcard parameters


The following example creates a stored procedure that returns information for employees by passing full or
partial values for the employee's first name and last name. This procedure pattern matches the parameters
passed or, if not supplied, uses the preset default (last names that start with the letter D ).
IF OBJECT_ID ( 'HumanResources.uspGetEmployees2', 'P' ) IS NOT NULL
DROP PROCEDURE HumanResources.uspGetEmployees2;
GO
CREATE PROCEDURE HumanResources.uspGetEmployees2
@LastName nvarchar(50) = N'D%',
@FirstName nvarchar(50) = N'%'
AS
SET NOCOUNT ON;
SELECT FirstName, LastName, JobTitle, Department
FROM HumanResources.vEmployeeDepartment
WHERE FirstName LIKE @FirstName AND LastName LIKE @LastName;

The uspGetEmployees2 procedure can be executed in many combinations. Only a few possible combinations are
shown here.

EXECUTE HumanResources.uspGetEmployees2;
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'Wi%';
-- Or
EXECUTE HumanResources.uspGetEmployees2 @FirstName = N'%';
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'[CK]ars[OE]n';
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'Hesse', N'Stefen';
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'H%', N'S%';

F. Using OUTPUT parameters


The following example creates the uspGetList procedure. This procedure returns a list of products that have
prices that do not exceed a specified amount. The example shows using multiple SELECT statements and multiple
OUTPUT parameters. OUTPUT parameters enable an external procedure, a batch, or more than one Transact-SQL
statement to access a value set during the procedure execution.

IF OBJECT_ID ( 'Production.uspGetList', 'P' ) IS NOT NULL
    DROP PROCEDURE Production.uspGetList;
GO
CREATE PROCEDURE Production.uspGetList @Product varchar(40)
, @MaxPrice money
, @ComparePrice money OUTPUT
, @ListPrice money OUT
AS
SET NOCOUNT ON;
SELECT p.[Name] AS Product, p.ListPrice AS 'List Price'
FROM Production.Product AS p
JOIN Production.ProductSubcategory AS s
ON p.ProductSubcategoryID = s.ProductSubcategoryID
WHERE s.[Name] LIKE @Product AND p.ListPrice < @MaxPrice;
-- Populate the output variable @ListPrice.
SET @ListPrice = (SELECT MAX(p.ListPrice)
FROM Production.Product AS p
JOIN Production.ProductSubcategory AS s
ON p.ProductSubcategoryID = s.ProductSubcategoryID
WHERE s.[Name] LIKE @Product AND p.ListPrice < @MaxPrice);
-- Populate the output variable @ComparePrice.
SET @ComparePrice = @MaxPrice;
GO

Execute uspGetList to return a list of Adventure Works products (Bikes) that cost less than $700. The OUTPUT
parameters @Cost and @ComparePrice are used with control-of-flow language to return a message in the
Messages window.
NOTE
The OUTPUT variable must be defined when the procedure is created and also when the variable is used. The parameter
name and variable name do not have to match; however, the data type and parameter positioning must match, unless
@ListPrice = variable is used.

DECLARE @ComparePrice money, @Cost money;
EXECUTE Production.uspGetList '%Bikes%', 700,
@ComparePrice OUT,
@Cost OUTPUT
IF @Cost <= @ComparePrice
BEGIN
PRINT 'These products can be purchased for less than
$'+RTRIM(CAST(@ComparePrice AS varchar(20)))+'.'
END
ELSE
PRINT 'The prices for all products in this category exceed
$'+ RTRIM(CAST(@ComparePrice AS varchar(20)))+'.';

Here is the partial result set:

Product                    List Price
-------------------------- ----------
Road-750 Black, 58 539.99
Mountain-500 Silver, 40 564.99
Mountain-500 Silver, 42 564.99
...
Road-750 Black, 48 539.99
Road-750 Black, 52 539.99

(14 row(s) affected)

These items can be purchased for less than $700.00.

G. Using a Table-Valued Parameter


The following example uses a table-valued parameter type to insert multiple rows into a table. The example
creates the parameter type, declares a table variable to reference it, fills the parameter list, and then passes the
values to a stored procedure. The stored procedure uses the values to insert multiple rows into a table.
/* Create a table type. */
CREATE TYPE LocationTableType AS TABLE
( LocationName VARCHAR(50)
, CostRate INT );
GO

/* Create a procedure to receive data for the table-valued parameter. */
CREATE PROCEDURE usp_InsertProductionLocation
@TVP LocationTableType READONLY
AS
SET NOCOUNT ON
INSERT INTO [AdventureWorks2012].[Production].[Location]
([Name]
,[CostRate]
,[Availability]
,[ModifiedDate])
SELECT *, 0, GETDATE()
FROM @TVP;
GO

/* Declare a variable that references the type. */
DECLARE @LocationTVP
AS LocationTableType;

/* Add data to the table variable. */
INSERT INTO @LocationTVP (LocationName, CostRate)
SELECT [Name], 0.00
FROM
[AdventureWorks2012].[Person].[StateProvince];

/* Pass the table variable data to a stored procedure. */
EXEC usp_InsertProductionLocation @LocationTVP;
GO

H. Using an OUTPUT cursor parameter

The following example uses the OUTPUT cursor parameter to pass a cursor that is local to a procedure back to
the calling batch, procedure, or trigger.
First, create the procedure that declares and then opens a cursor on the Currency table:

CREATE PROCEDURE dbo.uspCurrencyCursor
@CurrencyCursor CURSOR VARYING OUTPUT
AS
SET NOCOUNT ON;
SET @CurrencyCursor = CURSOR
FORWARD_ONLY STATIC FOR
SELECT CurrencyCode, Name
FROM Sales.Currency;
OPEN @CurrencyCursor;
GO

Next, run a batch that declares a local cursor variable, executes the procedure to assign the cursor to the local
variable, and then fetches the rows from the cursor.
DECLARE @MyCursor CURSOR;
EXEC dbo.uspCurrencyCursor @CurrencyCursor = @MyCursor OUTPUT;
FETCH NEXT FROM @MyCursor;
WHILE (@@FETCH_STATUS = 0)
BEGIN;
    FETCH NEXT FROM @MyCursor;
END;
CLOSE @MyCursor;
DEALLOCATE @MyCursor;
GO

Modifying Data by using a Stored Procedure


Examples in this section demonstrate how to insert or modify data in tables or views by including a Data
Manipulation Language (DML ) statement in the definition of the procedure.
I. Using UPDATE in a stored procedure
The following example uses an UPDATE statement in a stored procedure. The procedure takes one input
parameter, @NewHours, whose value is used in the UPDATE statement to update the column VacationHours in
the table HumanResources.Employee. A CASE expression is used in the SET clause to conditionally determine the
value that is set for VacationHours. When the employee is paid hourly (SalariedFlag = 0), VacationHours is set to
the current number of hours plus the value specified in @NewHours; otherwise, VacationHours is set to the value
specified in @NewHours.

CREATE PROCEDURE HumanResources.Update_VacationHours
@NewHours smallint
AS
SET NOCOUNT ON;
UPDATE HumanResources.Employee
SET VacationHours =
( CASE
WHEN SalariedFlag = 0 THEN VacationHours + @NewHours
ELSE @NewHours
END
)
WHERE CurrentFlag = 1;
GO

EXEC HumanResources.Update_VacationHours 40;

Error Handling
Examples in this section demonstrate methods to handle errors that might occur when the stored procedure is
executed.
J. Using TRY…CATCH
The following example uses the TRY…CATCH construct to return error information caught during the execution
of a stored procedure.
CREATE PROCEDURE Production.uspDeleteWorkOrder ( @WorkOrderID int )
AS
SET NOCOUNT ON;
BEGIN TRY
BEGIN TRANSACTION
-- Delete rows from the child table, WorkOrderRouting, for the specified work order.
DELETE FROM Production.WorkOrderRouting
WHERE WorkOrderID = @WorkOrderID;

-- Delete the rows from the parent table, WorkOrder, for the specified work order.
DELETE FROM Production.WorkOrder
WHERE WorkOrderID = @WorkOrderID;

COMMIT

END TRY
BEGIN CATCH
-- Determine if an error occurred.
IF @@TRANCOUNT > 0
ROLLBACK

-- Return the error information.
DECLARE @ErrorMessage nvarchar(4000), @ErrorSeverity int;
SELECT @ErrorMessage = ERROR_MESSAGE(),@ErrorSeverity = ERROR_SEVERITY();
RAISERROR(@ErrorMessage, @ErrorSeverity, 1);
END CATCH;

GO
EXEC Production.uspDeleteWorkOrder 13;

/* Intentionally generate an error by reversing the order in which rows
are deleted from the parent and child tables. This change does not
cause an error when the procedure definition is altered, but produces
an error when the procedure is executed.
*/
ALTER PROCEDURE Production.uspDeleteWorkOrder ( @WorkOrderID int )
AS

BEGIN TRY
BEGIN TRANSACTION
-- Delete the rows from the parent table, WorkOrder, for the specified work order.
DELETE FROM Production.WorkOrder
WHERE WorkOrderID = @WorkOrderID;

-- Delete rows from the child table, WorkOrderRouting, for the specified work order.
DELETE FROM Production.WorkOrderRouting
WHERE WorkOrderID = @WorkOrderID;

COMMIT TRANSACTION

END TRY
BEGIN CATCH
-- Determine if an error occurred.
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION

-- Return the error information.
DECLARE @ErrorMessage nvarchar(4000), @ErrorSeverity int;
SELECT @ErrorMessage = ERROR_MESSAGE(),@ErrorSeverity = ERROR_SEVERITY();
RAISERROR(@ErrorMessage, @ErrorSeverity, 1);
END CATCH;
GO
-- Execute the altered procedure.
EXEC Production.uspDeleteWorkOrder 15;

DROP PROCEDURE Production.uspDeleteWorkOrder;


Obfuscating the Procedure Definition
Examples in this section show how to obfuscate the definition of the stored procedure.
K. Using the WITH ENCRYPTION option
The following example creates the HumanResources.uspEncryptThis procedure.
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.

CREATE PROCEDURE HumanResources.uspEncryptThis
WITH ENCRYPTION
AS
SET NOCOUNT ON;
SELECT BusinessEntityID, JobTitle, NationalIDNumber,
VacationHours, SickLeaveHours
FROM HumanResources.Employee;
GO

The WITH ENCRYPTION option obfuscates the definition of the procedure when querying the system catalog or
using metadata functions, as shown by the following examples.
Run sp_helptext :

EXEC sp_helptext 'HumanResources.uspEncryptThis';

Here is the result set.

The text for object 'HumanResources.uspEncryptThis' is encrypted.

Directly query the sys.sql_modules catalog view:

SELECT definition FROM sys.sql_modules
WHERE object_id = OBJECT_ID('HumanResources.uspEncryptThis');

Here is the result set.

definition
--------------------------------
NULL

Forcing the Procedure to Recompile


Examples in this section use the WITH RECOMPILE clause to force the procedure to recompile every time it is
executed.
L. Using the WITH RECOMPILE option
The WITH RECOMPILE clause is helpful when the parameters supplied to the procedure are not typical, and when a
new execution plan should not be cached or stored in memory.
IF OBJECT_ID ( 'dbo.uspProductByVendor', 'P' ) IS NOT NULL
DROP PROCEDURE dbo.uspProductByVendor;
GO
CREATE PROCEDURE dbo.uspProductByVendor @Name varchar(30) = '%'
WITH RECOMPILE
AS
SET NOCOUNT ON;
SELECT v.Name AS 'Vendor name', p.Name AS 'Product name'
FROM Purchasing.Vendor AS v
JOIN Purchasing.ProductVendor AS pv
ON v.BusinessEntityID = pv.BusinessEntityID
JOIN Production.Product AS p
ON pv.ProductID = p.ProductID
WHERE v.Name LIKE @Name;

Setting the Security Context


Examples in this section use the EXECUTE AS clause to set the security context in which the stored procedure
executes.
M. Using the EXECUTE AS clause
The following example shows using the EXECUTE AS clause to specify the security context in which a procedure
can be executed. In the example, the option CALLER specifies that the procedure can be executed in the context of
the user that calls it.

CREATE PROCEDURE Purchasing.uspVendorAllInfo
WITH EXECUTE AS CALLER
AS
SET NOCOUNT ON;
SELECT v.Name AS Vendor, p.Name AS 'Product name',
v.CreditRating AS 'Rating',
v.ActiveFlag AS Availability
FROM Purchasing.Vendor v
INNER JOIN Purchasing.ProductVendor pv
ON v.BusinessEntityID = pv.BusinessEntityID
INNER JOIN Production.Product p
ON pv.ProductID = p.ProductID
ORDER BY v.Name ASC;
GO

N. Creating custom permission sets


The following example uses EXECUTE AS to create custom permissions for a database operation. Some
operations, such as TRUNCATE TABLE, do not have grantable permissions. By incorporating the TRUNCATE
TABLE statement within a stored procedure and specifying that procedure execute as a user that has permissions
to modify the table, you can extend the permissions to truncate the table to the user that you grant EXECUTE
permissions on the procedure.

CREATE PROCEDURE dbo.TruncateMyTable
WITH EXECUTE AS SELF
AS TRUNCATE TABLE MyDB..MyTable;

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


O. Create a Stored Procedure that runs a SELECT statement
This example shows the basic syntax for creating and running a procedure. When running a batch, CREATE
PROCEDURE must be the first statement. For example, to create the following stored procedure in
AdventureWorksPDW2012, set the database context first, and then run the CREATE PROCEDURE statement.
-- Uses the AdventureWorksPDW2012 database

--Run CREATE PROCEDURE as the first statement in a batch.
CREATE PROCEDURE Get10TopResellers
AS
BEGIN
SELECT TOP (10) r.ResellerName, r.AnnualSales
FROM DimReseller AS r
ORDER BY AnnualSales DESC, ResellerName ASC;
END
;

--Show 10 Top Resellers
EXEC Get10TopResellers;

See Also
ALTER PROCEDURE (Transact-SQL)
Control-of-Flow Language (Transact-SQL)
Cursors
Data Types (Transact-SQL)
DECLARE @local_variable (Transact-SQL)
DROP PROCEDURE (Transact-SQL)
EXECUTE (Transact-SQL)
EXECUTE AS (Transact-SQL)
Stored Procedures (Database Engine)
sp_procoption (Transact-SQL)
sp_recompile (Transact-SQL)
sys.sql_modules (Transact-SQL)
sys.parameters (Transact-SQL)
sys.procedures (Transact-SQL)
sys.sql_expression_dependencies (Transact-SQL)
sys.assembly_modules (Transact-SQL)
sys.numbered_procedures (Transact-SQL)
sys.numbered_procedure_parameters (Transact-SQL)
OBJECT_DEFINITION (Transact-SQL)
Create a Stored Procedure
Use Table-Valued Parameters (Database Engine)
sys.dm_sql_referenced_entities (Transact-SQL)
sys.dm_sql_referencing_entities (Transact-SQL)
CREATE QUEUE (Transact-SQL)
5/4/2018 • 8 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new queue in a database. Queues store messages. When a message arrives for a service, Service Broker
puts the message on the queue associated with the service.
Transact-SQL Syntax Conventions

Syntax
CREATE QUEUE <object>
[ WITH
[ STATUS = { ON | OFF } [ , ] ]
[ RETENTION = { ON | OFF } [ , ] ]
[ ACTIVATION (
[ STATUS = { ON | OFF } , ]
PROCEDURE_NAME = <procedure> ,
MAX_QUEUE_READERS = max_readers ,
EXECUTE AS { SELF | 'user_name' | OWNER }
) [ , ] ]
[ POISON_MESSAGE_HANDLING (
[ STATUS = { ON | OFF } ] ) ]
]
[ ON { filegroup | [ DEFAULT ] } ]
[ ; ]

<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
queue_name
}

<procedure> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
stored_procedure_name
}

Arguments
database_name (object)
Is the name of the database within which to create the new queue. database_name must specify the name of an
existing database. When database_name is not provided, the queue is created in the current database.
schema_name (object)
Is the name of the schema to which the new queue belongs. The schema defaults to the default schema for the
user that executes the statement. If the CREATE QUEUE statement is executed by a member of the sysadmin fixed
server role, or a member of the db_owner or db_ddladmin fixed database roles in the database specified by
database_name, schema_name can specify a schema other than the one associated with the login of the current
connection. Otherwise, schema_name must be the default schema for the user who executes the statement.
queue_name
Is the name of the queue to create. This name must meet the guidelines for SQL Server identifiers.
STATUS (Queue)
Specifies whether the queue is available (ON ) or unavailable (OFF ). When the queue is unavailable, no messages
can be added to the queue or removed from the queue. You can create the queue in an unavailable state to keep
messages from arriving on the queue until the queue is made available with an ALTER QUEUE statement. If this
clause is omitted, the default is ON, and the queue is available.
RETENTION
Specifies the retention setting for the queue. If RETENTION = ON, all messages sent or received on conversations
that use this queue are retained in the queue until the conversations have ended. This lets you retain messages for
auditing purposes, or to perform compensating transactions if an error occurs. If this clause is not specified, the
retention setting defaults to OFF.

NOTE
Setting RETENTION = ON can decrease performance. This setting should only be used if it is required for the application.

ACTIVATION
Specifies information about which stored procedure you have to start to process messages in this queue.
STATUS (Activation)
Specifies whether Service Broker starts the stored procedure. When STATUS = ON, the queue starts the stored
procedure specified with PROCEDURE_NAME when the number of procedures currently running is less than
MAX_QUEUE_READERS and when messages arrive on the queue faster than the stored procedures receive
messages. When STATUS = OFF, the queue does not start the stored procedure. If this clause is not specified, the
default is ON.
PROCEDURE_NAME = <procedure>
Specifies the name of the stored procedure to start to process messages in this queue. This value must be a SQL
Server identifier.
database_name(procedure)
Is the name of the database that contains the stored procedure.
schema_name(procedure)
Is the name of the schema that contains the stored procedure.
procedure_name
Is the name of the stored procedure.
MAX_QUEUE_READERS =max_readers
Specifies the maximum number of instances of the activation stored procedure that the queue starts at the same
time. The value of max_readers must be a number between 0 and 32767.
EXECUTE AS
Specifies the SQL Server database user account under which the activation stored procedure runs. SQL Server
must be able to check the permissions for this user at the time that the queue starts the stored procedure. For a
domain user, the server must be connected to the domain when the procedure is started or activation fails. For a
SQL Server user, the server can always check permissions.
SELF
Specifies that the stored procedure executes as the current user. (The database principal executing this CREATE
QUEUE statement.)
'user_name'
Is the name of the user who the stored procedure executes as. The user_name parameter must be a valid SQL
Server user specified as a SQL Server identifier. The current user must have IMPERSONATE permission for the
user_name specified.
OWNER
Specifies that the stored procedure executes as the owner of the queue.
POISON_MESSAGE_HANDLING
Specifies whether poison message handling is enabled for the queue. The default is ON.
A queue that has poison message handling set to OFF will not be disabled after five consecutive transaction
rollbacks. This allows a custom poison message handling system to be defined by the application.
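For illustration, a minimal sketch that disables the built-in handling (the queue name is hypothetical):

CREATE QUEUE dbo.OrderQueue
    WITH POISON_MESSAGE_HANDLING (STATUS = OFF);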
ON filegroup | [DEFAULT]
Specifies the SQL Server filegroup on which to create this queue. You can use the filegroup parameter to identify
a filegroup, or use the DEFAULT identifier to use the default filegroup for the service broker database. In the
context of this clause, DEFAULT is not a keyword, and must be delimited as an identifier. When no filegroup is
specified, the queue uses the default filegroup for the database.

Remarks
A queue can be the target of a SELECT statement. However, the contents of a queue can only be modified using
statements that operate on Service Broker conversations, such as SEND, RECEIVE, and END CONVERSATION. A
queue cannot be the target of an INSERT, UPDATE, DELETE, or TRUNCATE statement.
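For example, a quick way to inspect pending messages with SELECT (using the ExpenseQueue created in the examples below; the columns are described in the table later in this topic):

SELECT conversation_handle, message_type_name, message_body
FROM ExpenseQueue;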
A queue cannot be a temporary object. Therefore, queue names starting with # are not valid.
Creating a queue in an inactive state lets you get the infrastructure in place for a service before allowing messages
to be received on the queue.
Service Broker does not stop activation stored procedures when there are no messages on the queue. An
activation stored procedure should exit when no messages are available on the queue for a short time.
Permissions for the activation stored procedure are checked when Service Broker starts the stored procedure, not
when the queue is created. The CREATE QUEUE statement does not verify that the user specified in the EXECUTE
AS clause has permission to execute the stored procedure specified in the PROCEDURE_NAME clause.
When a queue is unavailable, Service Broker holds messages for services that use the queue in the transmission
queue for the database. The sys.transmission_queue catalog view provides a view of the transmission queue.
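For example, a minimal sketch of checking for held messages:

SELECT to_service_name, enqueue_time, transmission_status
FROM sys.transmission_queue;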
A queue is a schema-owned object. Queues appear in the sys.objects catalog view.
The following table lists the columns in a queue.

COLUMN NAME                 DATA TYPE          DESCRIPTION

status                      tinyint            Status of the message. The RECEIVE statement returns all
                                               messages that have a status of 1. If message retention is on,
                                               the status is then set to 0. If message retention is off, the
                                               message is deleted from the queue. Messages in the queue can
                                               contain one of the following values:
                                               0 = Retained received message
                                               1 = Ready to receive
                                               2 = Not yet complete
                                               3 = Retained sent message

priority                    tinyint            The priority level that is assigned to this message.

queuing_order               bigint             Message order number in the queue.

conversation_group_id       uniqueidentifier   Identifier for the conversation group that this message
                                               belongs to.

conversation_handle         uniqueidentifier   Handle for the conversation that this message is part of.

message_sequence_number     bigint             Sequence number of the message in the conversation.

service_name                nvarchar(512)      Name of the service that the conversation is to.

service_id                  int                SQL Server object identifier of the service that the
                                               conversation is to.

service_contract_name       nvarchar(256)      Name of the contract that the conversation follows.

service_contract_id         int                SQL Server object identifier of the contract that the
                                               conversation follows.

message_type_name           nvarchar(256)      Name of the message type that describes the message.

message_type_id             int                SQL Server object identifier of the message type that
                                               describes the message.

validation                  nchar(2)           Validation used for the message:
                                               E = Empty
                                               N = None
                                               X = XML

message_body                varbinary(max)     Content of the message.

message_id                  uniqueidentifier   Unique identifier for the message.

Permissions
Permission for creating a queue defaults to members of the db_ddladmin or db_owner fixed database roles and
the sysadmin fixed server role.
REFERENCES permission for a queue defaults to the owner of the queue, members of the db_ddladmin or
db_owner fixed database roles, and members of the sysadmin fixed server role.
RECEIVE permission for a queue defaults to the owner of the queue, members of the db_owner fixed database
role, and members of the sysadmin fixed server role.

Examples
A. Creating a queue with no parameters
The following example creates a queue that is available to receive messages. No activation stored procedure is
specified for the queue.

CREATE QUEUE ExpenseQueue;

B. Creating an unavailable queue


The following example creates a queue that is unavailable to receive messages. No activation stored procedure is
specified for the queue.

CREATE QUEUE ExpenseQueue WITH STATUS=OFF;

C. Creating a queue and specify internal activation information


The following example creates a queue that is available to receive messages. The queue starts the stored
procedure expense_procedure when a message enters the queue. The stored procedure executes as the user
ExpenseUser . The queue starts a maximum of 5 instances of the stored procedure.

CREATE QUEUE ExpenseQueue
WITH STATUS=ON,
ACTIVATION (
PROCEDURE_NAME = expense_procedure,
MAX_QUEUE_READERS = 5,
EXECUTE AS 'ExpenseUser' ) ;

D. Creating a queue on a specific filegroup


The following example creates a queue on the filegroup ExpenseWorkFileGroup .

CREATE QUEUE ExpenseQueue
ON ExpenseWorkFileGroup ;

E. Creating a queue with multiple parameters


The following example creates a queue on the DEFAULT filegroup. The queue is unavailable. Messages are retained
in the queue until the conversation that they belong to ends. When the queue is made available through ALTER
QUEUE, the queue starts the stored procedure AdventureWorks2012.dbo.expense_procedure to process messages. The stored
procedure executes as the user who ran the CREATE QUEUE statement. The queue starts a maximum of 10
instances of the stored procedure.

CREATE QUEUE ExpenseQueue
WITH STATUS = OFF,
RETENTION = ON,
ACTIVATION (
PROCEDURE_NAME = AdventureWorks2012.dbo.expense_procedure,
MAX_QUEUE_READERS = 10,
EXECUTE AS SELF )
ON [DEFAULT] ;

See Also
ALTER QUEUE (Transact-SQL )
CREATE SERVICE (Transact-SQL )
DROP QUEUE (Transact-SQL )
RECEIVE (Transact-SQL )
EVENTDATA (Transact-SQL )
CREATE REMOTE SERVICE BINDING (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a binding that defines the security credentials to use to initiate a conversation with a remote service.
Transact-SQL Syntax Conventions

Syntax
CREATE REMOTE SERVICE BINDING binding_name
[ AUTHORIZATION owner_name ]
TO SERVICE 'service_name'
WITH USER = user_name [ , ANONYMOUS = { ON | OFF } ]
[ ; ]

Arguments
binding_name
Is the name of the remote service binding to be created. Server, database, and schema names cannot be specified.
The binding_name must be a valid sysname.
AUTHORIZATION owner_name
Sets the owner of the binding to the specified database user or role. When the current user is dbo or sa,
owner_name can be the name of any valid user or role. Otherwise, owner_name must be the name of the current
user, the name of a user who the current user has IMPERSONATE permissions for, or the name of a role to which
the current user belongs.
TO SERVICE 'service_name'
Specifies the remote service to bind to the user identified in the WITH USER clause.
USER = user_name
Specifies the database principal that owns the certificate associated with the remote service identified by the TO
SERVICE clause. This certificate is used for encryption and authentication of messages exchanged with the remote
service.
ANONYMOUS
Specifies whether anonymous authentication is used when communicating with the remote service. If
ANONYMOUS = ON, anonymous authentication is used and operations in the remote database occur as a
member of the public fixed database role. If ANONYMOUS = OFF, operations in the remote database occur as a
specific user in that database. If this clause is not specified, the default is OFF.

Remarks
Service Broker uses a remote service binding to locate the certificate to use for a new conversation. The public key
in the certificate associated with user_name is used to authenticate messages sent to the remote service and to
encrypt a session key that is then used to encrypt the conversation. The certificate for user_name must correspond
to the certificate for a user in the database that hosts the remote service.
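As a sketch of the local setup that this implies, assuming the remote service's public-key certificate has been exported to a file (the user, certificate, and file names here are illustrative):

CREATE USER APUser WITHOUT LOGIN;
GO
CREATE CERTIFICATE APCert
AUTHORIZATION APUser
FROM FILE = 'C:\certs\AccountsPayable.cer';
GO
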
A remote service binding is only necessary for initiating services that communicate with target services outside of
the SQL Server instance. A database that hosts an initiating service must contain remote service bindings for any
target services outside of the SQL Server instance. A database that hosts a target service need not contain remote
service bindings for the initiating services that communicate with the target service. When the initiator and target
services are in the same instance of SQL Server, no remote service binding is necessary. However, if a remote
service binding is present where the service_name specified for TO SERVICE matches the name of the local
service, Service Broker will use the binding.
When ANONYMOUS = ON, the initiating service connects to the target service as a member of the public fixed
database role. By default, members of this role do not have permission to connect to a database. To successfully
send a message, the target database must grant the public role CONNECT permission for the database and
SEND permission for the target service.
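In the target database, those grants might look like the following sketch, using the service name from the examples below:

GRANT CONNECT TO public;
GRANT SEND ON SERVICE::[//Adventure-Works.com/services/AccountsPayable] TO public;
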
When a user owns more than one certificate, Service Broker selects the certificate with the latest expiration date
from among the certificates that are currently valid and marked as AVAILABLE FOR BEGIN_DIALOG.

Permissions
Permissions for creating a remote service binding default to the user named in the USER clause, members of the
db_owner fixed database role, members of the db_ddladmin fixed database role, and members of the sysadmin
fixed server role.
The user that executes the CREATE REMOTE SERVICE BINDING statement must have impersonate permission
for the principal specified in the statement.
A remote service binding may not be a temporary object. Remote service binding names beginning with # are
allowed, but are permanent objects.

Examples
A. Creating a remote service binding
The following example creates a binding for the service //Adventure-Works.com/services/AccountsPayable . Service
Broker uses the certificate owned by the APUser database principal to authenticate to the remote service and to
exchange the session encryption key with the remote service.

CREATE REMOTE SERVICE BINDING APBinding
TO SERVICE '//Adventure-Works.com/services/AccountsPayable'
WITH USER = APUser ;

B. Creating a remote service binding using anonymous authentication


The following example creates a binding for the service //Adventure-Works.com/services/AccountsPayable . Service
Broker uses the certificate owned by the APUser database principal to exchange the session encryption key with
the remote service. The broker does not authenticate to the remote service. In the database that hosts the remote
service, messages are delivered as the guest user.

CREATE REMOTE SERVICE BINDING APBinding
TO SERVICE '//Adventure-Works.com/services/AccountsPayable'
WITH USER = APUser, ANONYMOUS=ON ;

See Also
ALTER REMOTE SERVICE BINDING (Transact-SQL )
DROP REMOTE SERVICE BINDING (Transact-SQL )
EVENTDATA (Transact-SQL )
CREATE REMOTE TABLE AS SELECT (Parallel Data
Warehouse)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Selects data from a Parallel Data Warehouse database and copies that data to a new table in an SMP SQL Server
database on a remote server. Parallel Data Warehouse uses the appliance, with all the benefits of MPP query
processing, to select the data for the remote copy. Use this for scenarios that require SQL Server functionality.
To configure the remote server, see "Remote Table Copy" in the Parallel Data Warehouse product documentation.
Transact-SQL Syntax Conventions (Transact-SQL )

Syntax
CREATE REMOTE TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name AT
('<connection_string>')
[ WITH ( BATCH_SIZE = batch_size ) ]
AS <select_statement>
[;]

<connection_string> ::=
Data Source = { IP_address | hostname } [, port ]; User ID = user_name ;Password = password;

<select_statement> ::=
[ WITH <common_table_expression> [ ,...n ] ]
SELECT <select_criteria>

Arguments
database_name
The database to create the remote table in. database_name is a SQL Server database. Default is the default
database for the user login on the destination SQL Server instance.
schema_name
The schema for the new table. Default is the default schema for the user login on the destination SQL Server
instance.
table_name
The name of the new table. For details on permitted table names, see "Object Naming Rules" in the Parallel Data
Warehouse product documentation.
The remote table is created as a heap. It does not have check constraints or triggers. The collation of the remote
table columns is the same as the collation of the source table columns. This applies to columns of type char, nchar,
varchar, and nvarchar.
connection_string
A character string that specifies the Data Source , User ID , and Password parameters for connecting to the remote
server and database.
The connection string is a semicolon-delimited list of key and value pairs. Keywords are not case-sensitive. Spaces
between key and value pairs are ignored. However, values may be case-sensitive, depending on the data source.
Data Source
The parameter that specifies the name or IP address, and TCP port number for the remote SMP SQL Server.
hostname or IP_address
Name of the remote server computer or the IPv4 address of the remote server. IPv6 addresses are not supported.
You can specify a SQL Server named instance in the format Computer_Name\Instance_Name or
IP_address\Instance_Name. The server must be remote and therefore cannot be specified as (local).
TCP port number
The TCP port number for the connection. You can specify a TCP port number from 0 to 65535 for an instance of
SQL Server that is not listening on the default port 1433. For example: ServerA,1450 or 10.192.14.27,1435

NOTE
We recommend connecting to a remote server by using the IP address. Depending on your network configuration,
connecting by using the computer name might require additional steps to use your non-appliance DNS server to resolve the
name to the correct server. This step is not necessary when connecting with an IP address. For more information, see "Use a
DNS Forwarder to Resolve Non-Appliance DNS Names (Analytics Platform System)" in the Parallel Data Warehouse product
documentation.

user_name
A valid SQL Server authentication login name. Maximum number of characters is 128.
password
The login password. Maximum number of characters is 128.
batch_size
The maximum number of rows per batch. Parallel Data Warehouse sends rows in batches to the destination server.
batch_size is an integer greater than or equal to 0. The default is 0.
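For example, the following sketch, reusing the illustrative names from Example A below, copies rows in batches of 5000:

CREATE REMOTE TABLE OrderReporting.Orders.MyOrdersTable
AT ( 'Data Source = SQLA, 1433; User ID = David; Password = e4n8@3;' )
WITH ( BATCH_SIZE = 5000 )
AS SELECT <select_criteria>;
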
WITH common_table_expression
Specifies a temporary named result set, known as a common table expression (CTE ). For more information, see
WITH common_table_expression (Transact-SQL ).
SELECT <select_criteria>
The query predicate that specifies which data will populate the new remote table. For information on the SELECT
statement, see SELECT (Transact-SQL ).

Permissions
Requires:
SELECT permission on each object in the SELECT clause.
CREATE TABLE permission on the destination SMP database.
ALTER, INSERT, and SELECT permissions on the destination SMP schema.

Error Handling
If copying data to the remote database fails, Parallel Data Warehouse will abort the operation, log an error, and
attempt to delete the remote table. Parallel Data Warehouse does not guarantee a successful cleanup of the new
table.
Limitations and Restrictions
Remote Destination Server:
TCP is the default and only supported protocol for connecting to a remote server.
The destination server must be a non-appliance server. CREATE REMOTE TABLE cannot be used to copy
data from one appliance to another.
The CREATE REMOTE TABLE statement only creates new tables. Therefore, the new table cannot already
exist. The remote database and schema must already exist.
The remote server must have space available to store the data that is transferred from the appliance to the
SQL Server remote database.
SELECT statement:
The ORDER BY and TOP clauses are not supported in the select criteria.
CREATE REMOTE TABLE cannot be run inside an active transaction or when the AUTOCOMMIT OFF
setting is active for the session.
SET ROWCOUNT (Transact-SQL ) has no effect on this statement. To achieve a similar behavior, use TOP
(Transact-SQL ).

Locking Behavior
After creating the remote table, the destination table is not locked until the copy starts. Therefore, it is possible for
another process to delete the remote table after it is created and before the copy starts. When this occurs, Parallel
Data Warehouse will generate an error and the copy will fail.

Metadata

Use sys.dm_pdw_dms_workers (Transact-SQL ) to view the progress of copying the selected data to the remote
SMP server. Rows with type PARALLEL_COPY_READER contain this information.

Security
CREATE REMOTE TABLE uses SQL Server Authentication to connect to the remote SQL Server instance; it does
not use Windows Authentication.
The Parallel Data Warehouse external-facing network must be firewalled, with the exception of SQL Server ports,
administrative ports, and management ports.
To help prevent accidental data loss or corruption, the user account that is used to copy from the appliance to the
destination database should have only the minimum required permissions on the destination database.
Connection settings can protect the user name and password with SSL while still sending the actual data
unencrypted, in clear text. When this occurs, a malicious user could
intercept the CREATE REMOTE TABLE statement text, which contains the SQL Server user name and password to
log onto the SMP SQL Server instance. To avoid this risk, use data encryption on the connection to the SMP SQL
Server instance.

Examples
A. Creating a remote table
This example creates a SQL Server SMP remote table called MyOrdersTable on database OrderReporting and
schema Orders . The OrderReporting database is on a server named SQLA that listens on the default port 1433.
The connection to the server uses the credentials of the user David , whose password is e4n8@3 .

CREATE REMOTE TABLE OrderReporting.Orders.MyOrdersTable
AT ( 'Data Source = SQLA, 1433; User ID = David; Password = e4n8@3;' )
AS SELECT <select_criteria>;

B. Querying the sys.dm_pdw_dms_workers DMV for remote table copy status


This query shows how to view copy status for a remote table copy.

SELECT * FROM sys.dm_pdw_dms_workers
WHERE type = 'PARALLEL_COPY_READER';

C. Using a query join hint with CREATE REMOTE TABLE


This query shows the basic syntax for using a query join hint with CREATE REMOTE TABLE. After the query is
submitted to the Control node, SQL Server, running on the Compute nodes, will apply the hash join strategy when
generating the SQL Server query plan. For more information on join hints and how to use the OPTION clause, see
OPTION Clause (Transact-SQL ).

USE ssawPDW;
CREATE REMOTE TABLE OrderReporting.Orders.MyOrdersTable
AT ( 'Data Source = SQLA, 1433; User ID = David; Password = e4n8@3;' )
AS SELECT T1.* FROM OrderReporting.Orders.MyOrdersTable T1
JOIN OrderReporting.Order.Customer T2
ON T1.CustomerID=T2.CustomerID OPTION (HASH JOIN);
CREATE RESOURCE POOL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a Resource Governor resource pool in SQL Server. A resource pool represents a subset of the physical
resources (memory, CPUs and IO ) of an instance of the Database Engine. Resource Governor enables a database
administrator to distribute server resources among resource pools, up to a maximum of 64 pools. Resource
Governor is not available in every edition of SQL Server. For a list of features that are supported by the editions
of SQL Server, see Features Supported by the Editions of SQL Server 2016.
Transact-SQL Syntax Conventions.

Syntax
CREATE RESOURCE POOL pool_name
[ WITH
(
[ MIN_CPU_PERCENT = value ]
[ [ , ] MAX_CPU_PERCENT = value ]
[ [ , ] CAP_CPU_PERCENT = value ]
[ [ , ] AFFINITY {SCHEDULER =
AUTO
| ( <scheduler_range_spec> )
| NUMANODE = ( <NUMA_node_range_spec> )
} ]
[ [ , ] MIN_MEMORY_PERCENT = value ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MIN_IOPS_PER_VOLUME = value ]
[ [ , ] MAX_IOPS_PER_VOLUME = value ]
)
]
[;]

<scheduler_range_spec> ::=
{ SCHED_ID | SCHED_ID TO SCHED_ID }[,…n]

<NUMA_node_range_spec> ::=
{ NUMA_node_ID | NUMA_node_ID TO NUMA_node_ID }[,…n]

Arguments
pool_name
Is the user-defined name for the resource pool. pool_name is alphanumeric, can be up to 128 characters, must be
unique within an instance of SQL Server, and must comply with the rules for identifiers.
MIN_CPU_PERCENT =value
Specifies the guaranteed average CPU bandwidth for all requests in the resource pool when there is CPU
contention. value is an integer with a default setting of 0. The allowed range for value is from 0 through 100.
MAX_CPU_PERCENT =value
Specifies the maximum average CPU bandwidth that all requests in resource pool will receive when there is CPU
contention. value is an integer with a default setting of 100. The allowed range for value is from 1 through 100.
CAP_CPU_PERCENT =value
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive. Limits the
maximum CPU bandwidth level to be the same as the specified value. value is an integer with a default setting of
100. The allowed range for value is from 1 through 100.
AFFINITY {SCHEDULER = AUTO | ( <scheduler_range_spec> ) | NUMANODE = ( <NUMA_node_range_spec> )}
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Attach the resource pool to specific schedulers. The default value is AUTO.
AFFINITY SCHEDULER = ( <scheduler_range_spec> ) maps the resource pool to the SQL Server schedulers
identified by the given IDs. These IDs map to the values in the scheduler_id column in sys.dm_os_schedulers
(Transact-SQL ).
When you use AFFINITY NUMANODE = ( <NUMA_node_range_spec> ), the resource pool is affinitized to the
SQL Server schedulers that map to the physical CPUs that correspond to the given NUMA node or range of
nodes. You can use the following Transact-SQL query to discover the mapping between the physical NUMA
configuration and the SQL Server scheduler IDs.

SELECT osn.memory_node_id AS [numa_node_id], sc.cpu_id, sc.scheduler_id
FROM sys.dm_os_nodes AS osn
INNER JOIN sys.dm_os_schedulers AS sc
ON osn.node_id = sc.parent_node_id
AND sc.scheduler_id < 1048576;

MIN_MEMORY_PERCENT =value
Specifies the minimum amount of memory reserved for this resource pool that cannot be shared with other
resource pools. value is an integer with a default setting of 0. The allowed range for value is from 0 through 100.
MAX_MEMORY_PERCENT =value
Specifies the total server memory that can be used by requests in this resource pool. value is an integer with a
default setting of 100. The allowed range for value is from 1 through 100.
MIN_IOPS_PER_VOLUME =value
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Specifies the minimum I/O operations per second (IOPS ) per disk volume to reserve for the resource pool. The
allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to indicate no minimum threshold
for the pool. The default is 0.
MAX_IOPS_PER_VOLUME =value
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Specifies the maximum I/O operations per second (IOPS ) per disk volume to allow for the resource pool. The
allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to set an unlimited threshold for the
pool. The default is 0.
If the MAX_IOPS_PER_VOLUME for a pool is set to 0, the pool is not governed at all and can take all the IOPS in
the system even if other pools have MIN_IOPS_PER_VOLUME set. For this case, we recommend that you set the
MAX_IOPS_PER_VOLUME value for this pool to a high number (for example, the maximum value 2^31-1) if
you want this pool to be governed for IO.
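For example, a sketch of a pool, with an illustrative name, that reserves a minimum IOPS share and uses the maximum allowed value as its cap so that it remains governed:

CREATE RESOURCE POOL GovernedPool
WITH (
    MIN_IOPS_PER_VOLUME = 50,
    MAX_IOPS_PER_VOLUME = 2147483647
);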

Remarks
MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME specify the minimum and maximum reads or writes
per second. These reads or writes can be of any size and do not indicate minimum or maximum throughput.
The values for MAX_CPU_PERCENT and MAX_MEMORY_PERCENT must be greater than or equal to the
values for MIN_CPU_PERCENT and MIN_MEMORY_PERCENT, respectively.
CAP_CPU_PERCENT differs from MAX_CPU_PERCENT in that workloads associated with the pool can use CPU
capacity above the value of MAX_CPU_PERCENT if it is available, but not above the value of
CAP_CPU_PERCENT.
The total CPU percentage for each affinitized component (scheduler(s) or NUMA node(s)) should not exceed
100%.

Permissions
Requires CONTROL SERVER permission.

Examples
The following example shows how to create a resource pool named bigPool . This pool uses the default Resource
Governor settings.

CREATE RESOURCE POOL bigPool;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
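
A resource pool takes effect through the workload groups that use it. As a minimal sketch, assuming the bigPool pool created above (the workload group name is illustrative):

CREATE WORKLOAD GROUP bigGroup USING bigPool;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO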

The following example sets CAP_CPU_PERCENT to a hard cap of 30% and sets AFFINITY SCHEDULER to the
ranges 0 to 63 and 128 to 191.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.

CREATE RESOURCE POOL PoolAdmin
WITH (
MIN_CPU_PERCENT = 10,
MAX_CPU_PERCENT = 20,
CAP_CPU_PERCENT = 30,
AFFINITY SCHEDULER = (0 TO 63, 128 TO 191),
MIN_MEMORY_PERCENT = 5,
MAX_MEMORY_PERCENT = 15
);

The following example sets MIN_IOPS_PER_VOLUME to 20 and MAX_IOPS_PER_VOLUME to 100.
These values govern the physical I/O read and write operations that are available for the resource pool.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.

CREATE RESOURCE POOL PoolAdmin
WITH (
MIN_IOPS_PER_VOLUME = 20,
MAX_IOPS_PER_VOLUME = 100
);

See Also
ALTER RESOURCE POOL (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL )
CREATE WORKLOAD GROUP (Transact-SQL )
ALTER WORKLOAD GROUP (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
Resource Governor Resource Pool
Create a Resource Pool
CREATE ROLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new database role in the current database.
Transact-SQL Syntax Conventions

Syntax
CREATE ROLE role_name [ AUTHORIZATION owner_name ]

Arguments
role_name
Is the name of the role to be created.
AUTHORIZATION owner_name
Is the database user or role that is to own the new role. If no user is specified, the role will be owned by the user
that executes CREATE ROLE.

Remarks
Roles are database-level securables. After you create a role, configure the database-level permissions of the role
by using GRANT, DENY, and REVOKE. To add members to a database role, use ALTER ROLE (Transact-SQL ). For
more information, see Database-Level Roles.
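For example, a minimal sketch of that workflow, with illustrative role, table, and user names:

CREATE ROLE report_readers;
GRANT SELECT ON OBJECT::dbo.SalesSummary TO report_readers;
ALTER ROLE report_readers ADD MEMBER SomeUser;
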
Database roles are visible in the sys.database_role_members and sys.database_principals catalog views.
For information about designing a permissions system, see Getting Started with Database Engine Permissions.
Caution

Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL ).

Permissions
Requires CREATE ROLE permission on the database or membership in the db_securityadmin fixed database
role. When you use the AUTHORIZATION option, the following permissions are also required:
To assign ownership of a role to another user, requires IMPERSONATE permission on that user.
To assign ownership of a role to another role, requires membership in the recipient role or ALTER
permission on that role.
To assign ownership of a role to an application role, requires ALTER permission on the application role.

Examples
The following examples all use the AdventureWorks database.
A. Creating a database role that is owned by a database user
The following example creates the database role buyers that is owned by user BenMiller .

CREATE ROLE buyers AUTHORIZATION BenMiller;
GO

B. Creating a database role that is owned by a fixed database role


The following example creates the database role auditors that is owned by the db_securityadmin fixed database
role.

CREATE ROLE auditors AUTHORIZATION db_securityadmin;
GO

See Also
Principals (Database Engine)
ALTER ROLE (Transact-SQL )
DROP ROLE (Transact-SQL )
EVENTDATA (Transact-SQL )
sp_addrolemember (Transact-SQL )
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )
Getting Started with Database Engine Permissions
CREATE ROUTE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Adds a new route to the routing table for the current database. For outgoing messages, Service Broker determines
routing by checking the routing table in the local database. For messages on conversations that originate in
another instance, including messages to be forwarded, Service Broker checks the routes in msdb.
Transact-SQL Syntax Conventions

Syntax
CREATE ROUTE route_name
[ AUTHORIZATION owner_name ]
WITH
[ SERVICE_NAME = 'service_name', ]
[ BROKER_INSTANCE = 'broker_instance_identifier' , ]
[ LIFETIME = route_lifetime , ]
ADDRESS = 'next_hop_address'
[ , MIRROR_ADDRESS = 'next_hop_mirror_address' ]
[ ; ]

Arguments
route_name
Is the name of the route to create. A new route is created in the current database and owned by the principal
specified in the AUTHORIZATION clause. Server, database, and schema names cannot be specified. The
route_name must be a valid sysname.
AUTHORIZATION owner_name
Sets the owner of the route to the specified database user or role. The owner_name can be the name of any valid
user or role when the current user is a member of either the db_owner fixed database role or the sysadmin fixed
server role. Otherwise, owner_name must be the name of the current user, the name of a user that the current user
has IMPERSONATE permission for, or the name of a role to which the current user belongs. When this clause is
omitted, the route belongs to the current user.
WITH
Introduces the clauses that define the route being created.
SERVICE_NAME = 'service_name'
Specifies the name of the remote service that this route points to. The service_name must exactly match the name
the remote service uses. Service Broker uses a byte-by-byte comparison to match the service_name. In other
words, the comparison is case sensitive and does not consider the current collation. If the SERVICE_NAME is
omitted, this route matches any service name, but has lower priority for matching than a route that specifies a
SERVICE_NAME. A route with a service name of 'SQL/ServiceBroker/BrokerConfiguration' is a route to a
Broker Configuration Notice service. A route to this service might not specify a broker instance.
BROKER_INSTANCE = 'broker_instance_identifier'
Specifies the database that hosts the target service. The broker_instance_identifier parameter must be the broker
instance identifier for the remote database, which can be obtained by running the following query in the selected
database:

SELECT service_broker_guid
FROM sys.databases
WHERE database_id = DB_ID()

When the BROKER_INSTANCE clause is omitted, this route matches any broker instance. A route that matches
any broker instance has higher priority for matching than routes with an explicit broker instance when the
conversation does not specify a broker instance. For conversations that specify a broker instance, a route with a
broker instance has higher priority than a route that matches any broker instance.
LIFETIME =route_lifetime
Specifies the time, in seconds, that SQL Server retains the route in the routing table. At the end of the lifetime, the
route expires, and SQL Server no longer considers the route when choosing a route for a new conversation. If this
clause is omitted, the route_lifetime is NULL and the route never expires.
ADDRESS ='next_hop_address'
For SQL Database Managed Instance, ADDRESS must be local.
Specifies the network address for this route. The next_hop_address specifies a TCP/IP address in the following
format:
TCP://{ dns_name | netbios_name | ip_address } :port_number
The specified port_number must match the port number for the Service Broker endpoint of an instance of SQL
Server at the specified computer. This can be obtained by running the following query in the selected database:

SELECT tcpe.port
FROM sys.tcp_endpoints AS tcpe
INNER JOIN sys.service_broker_endpoints AS ssbe
ON ssbe.endpoint_id = tcpe.endpoint_id
WHERE ssbe.name = N'MyServiceBrokerEndpoint';

When the service is hosted in a mirrored database, you must also specify the MIRROR_ADDRESS for the other
instance that hosts a mirrored database. Otherwise, this route does not fail over to the mirror.
When a route specifies 'LOCAL' for the next_hop_address, the message is delivered to a service within the current
instance of SQL Server.
When a route specifies 'TRANSPORT' for the next_hop_address, the network address is determined based on the
network address in the name of the service. A route that specifies 'TRANSPORT' might not specify a service
name or broker instance.
MIRROR_ADDRESS ='next_hop_mirror_address'
Specifies the network address for a mirrored database with one mirrored database hosted at the
next_hop_address. The next_hop_mirror_address specifies a TCP/IP address in the following format:
TCP://{ dns_name | netbios_name | ip_address } : port_number
The specified port_number must match the port number for the Service Broker endpoint of an instance of SQL
Server at the specified computer. This can be obtained by running the following query in the selected database:

SELECT tcpe.port
FROM sys.tcp_endpoints AS tcpe
INNER JOIN sys.service_broker_endpoints AS ssbe
ON ssbe.endpoint_id = tcpe.endpoint_id
WHERE ssbe.name = N'MyServiceBrokerEndpoint';
When the MIRROR_ADDRESS is specified, the route must specify the SERVICE_NAME clause and the
BROKER_INSTANCE clause. A route that specifies 'LOCAL' or 'TRANSPORT' for the next_hop_address might
not specify a mirror address.

Remarks
The routing table that stores the routes is a metadata table that can be read through the sys.routes catalog view.
This catalog view can only be updated through the CREATE ROUTE, ALTER ROUTE, and DROP ROUTE
statements.
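For example, the following query, a simple sketch, lists the routes defined in the current database:

SELECT name, remote_service_name, broker_instance, lifetime, address, mirror_address
FROM sys.routes;
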
By default, the routing table in each user database contains one route. This route is named AutoCreatedLocal.
The route specifies 'LOCAL' for the next_hop_address and matches any service name and broker instance
identifier.
When a route specifies 'TRANSPORT' for the next_hop_address, the network address is determined based on the
name of the service. SQL Server can successfully process service names that begin with a network address in a
format that is valid for a next_hop_address.
The routing table can contain any number of routes that specify the same service, network address, and broker
instance identifier. In this case, Service Broker chooses a route using a procedure designed to find the most exact
match between the information specified in the conversation and the information in the routing table.
Service Broker does not remove expired routes from the routing table. An expired route can be made active using
the ALTER ROUTE statement.
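For example, the following sketch reactivates an expired route by assigning a new lifetime, assuming the ExpenseRoute route from the examples below:

ALTER ROUTE ExpenseRoute
WITH LIFETIME = 259200;
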
A route cannot be a temporary object. Route names that start with # are allowed, but are permanent objects.

Permissions
Permission for creating a route defaults to members of the db_ddladmin or db_owner fixed database roles and
the sysadmin fixed server role.

Examples
A. Creating a TCP/IP route by using a DNS name
The following example creates a route to the service //Adventure-Works.com/Expenses . The route specifies that
messages to this service travel over TCP to port 1234 on the host identified by the DNS name
www.Adventure-Works.com . The target server delivers the messages upon arrival to the broker instance identified by
the unique identifier D8D4D268-00A3-4C62-8F91-634B89C1E315 .

CREATE ROUTE ExpenseRoute
WITH
SERVICE_NAME = '//Adventure-Works.com/Expenses',
BROKER_INSTANCE = 'D8D4D268-00A3-4C62-8F91-634B89C1E315',
ADDRESS = 'TCP://www.Adventure-Works.com:1234' ;

B. Creating a TCP/IP route by using a NetBIOS name


The following example creates a route to the service //Adventure-Works.com/Expenses . The route specifies that
messages to this service travel over TCP to port 1234 on the host identified by the NetBIOS name SERVER02 .
Upon arrival, the target SQL Server delivers the message to the database instance identified by the unique
identifier D8D4D268-00A3-4C62-8F91-634B89C1E315 .
CREATE ROUTE ExpenseRoute
WITH
SERVICE_NAME = '//Adventure-Works.com/Expenses',
BROKER_INSTANCE = 'D8D4D268-00A3-4C62-8F91-634B89C1E315',
ADDRESS = 'TCP://SERVER02:1234' ;

C. Creating a TCP/IP route by using an IP address


The following example creates a route to the service //Adventure-Works.com/Expenses . The route specifies that
messages to this service travel over TCP to port 1234 on the host at the IP address 192.168.10.2 . Upon arrival,
the target SQL Server delivers the message to the broker instance identified by the unique identifier
D8D4D268-00A3-4C62-8F91-634B89C1E315 .

CREATE ROUTE ExpenseRoute
WITH
SERVICE_NAME = '//Adventure-Works.com/Expenses',
BROKER_INSTANCE = 'D8D4D268-00A3-4C62-8F91-634B89C1E315',
ADDRESS = 'TCP://192.168.10.2:1234' ;

D. Creating a route to a forwarding broker


The following example creates a route to the forwarding broker on the server dispatch.Adventure-Works.com .
Because both the service name and the broker instance identifier are not specified, SQL Server uses this route for
services that have no other route defined.

CREATE ROUTE ExpenseRoute
WITH
ADDRESS = 'TCP://dispatch.Adventure-Works.com' ;

E. Creating a route to a local service


The following example creates a route to the service //Adventure-Works.com/LogRequests in the same instance as
the route.

CREATE ROUTE LogRequests
WITH
SERVICE_NAME = '//Adventure-Works.com/LogRequests',
ADDRESS = 'LOCAL' ;

F. Creating a route with a specified lifetime


The following example creates a route to the service //Adventure-Works.com/Expenses . The lifetime for the route is
259200 seconds, which equates to 72 hours.

CREATE ROUTE ExpenseRoute
WITH
SERVICE_NAME = '//Adventure-Works.com/Expenses',
LIFETIME = 259200,
ADDRESS = 'TCP://services.Adventure-Works.com:1234' ;

G. Creating a route to a mirrored database


The following example creates a route to the service //Adventure-Works.com/Expenses . The service is hosted in a
database that is mirrored. One of the mirrored databases is located at the address
services.Adventure-Works.com:1234 , and the other database is located at the address
services-mirror.Adventure-Works.com:1234 .
CREATE ROUTE ExpenseRoute
WITH
SERVICE_NAME = '//Adventure-Works.com/Expenses',
BROKER_INSTANCE = '69fcc80c-2239-4700-8437-1001ecddf933',
ADDRESS = 'TCP://services.Adventure-Works.com:1234',
MIRROR_ADDRESS = 'TCP://services-mirror.Adventure-Works.com:1234' ;

H. Creating a route that uses the service name for routing


The following example creates a route that uses the service name to determine the network address to send the
message to. Notice that a route that specifies 'TRANSPORT' as the network address has lower priority for matching
than other routes.

CREATE ROUTE TransportRoute
WITH ADDRESS = 'TRANSPORT' ;

See Also
ALTER ROUTE (Transact-SQL )
DROP ROUTE (Transact-SQL )
EVENTDATA (Transact-SQL )
CREATE RULE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an object called a rule. When bound to a column or an alias data type, a rule specifies the acceptable
values that can be inserted into that column.

IMPORTANT
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature. We recommend that you use check constraints instead. Check
constraints are created by using the CHECK keyword of CREATE TABLE or ALTER TABLE. For more information, see Unique
Constraints and Check Constraints.

A column or alias data type can have only one rule bound to it. However, a column can have both a rule and one or
more check constraints associated with it. When this is true, all restrictions are evaluated.
Transact-SQL Syntax Conventions

Syntax
CREATE RULE [ schema_name . ] rule_name
AS condition_expression
[ ; ]

Arguments
schema_name
Is the name of the schema to which the rule belongs.
rule_name
Is the name of the new rule. Rule names must comply with the rules for identifiers. Specifying the rule owner
name is optional.
condition_expression
Is the condition or conditions that define the rule. A rule can be any expression valid in a WHERE clause and can
include elements such as arithmetic operators, relational operators, and predicates (for example, IN, LIKE,
BETWEEN ). A rule cannot reference columns or other database objects. Built-in functions that do not reference
database objects can be included. User-defined functions cannot be used.
condition_expression includes one variable. The at sign (@) precedes each local variable. The expression refers to
the value entered with the UPDATE or INSERT statement. Any name or symbol can be used to represent the value
when creating the rule, but the first character must be the at sign (@).
NOTE
Avoid creating rules on expressions that use alias data types. Although rules can be created on expressions that use alias
data types, after binding the rules to columns or alias data types, the expressions fail to compile when referenced.

Remarks
CREATE RULE cannot be combined with other Transact-SQL statements in a single batch. Rules do not apply to
data already existing in the database at the time the rules are created, and rules cannot be bound to system data
types.
A rule can be created only in the current database. After you create a rule, execute sp_bindrule to bind the rule to
a column or to alias data type. A rule must be compatible with the column data type. For example, "@value LIKE
A%" cannot be used as a rule for a numeric column. A rule cannot be bound to a text, ntext, image,
varchar(max), nvarchar(max), varbinary(max), xml, CLR user-defined type, or timestamp column. A rule
cannot be bound to a computed column.
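For example, a minimal sketch of that binding step, assuming the range_rule rule from Example A below and a hypothetical dbo.Orders table with an int column Amount:

EXEC sp_bindrule 'range_rule', 'Orders.Amount';
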
Enclose character and date constants with single quotation marks (') and precede binary constants with 0x. If the
rule is not compatible with the column to which it is bound, the SQL Server Database Engine returns an error
message when a value is inserted, but not when the rule is bound.
A rule bound to an alias data type is activated only when you try to insert a value into, or to update, a database
column of the alias data type. Because rules do not test variables, do not assign a value to an alias data type
variable that would be rejected by a rule that is bound to a column of the same data type.
To get a report on a rule, use sp_help. To display the text of a rule, execute sp_helptext with the rule name as the
parameter. To rename a rule, use sp_rename.
A rule must be dropped by using DROP RULE before a new one with the same name is created, and the rule must
be unbound by using sp_unbindrule before it is dropped. To unbind a rule from a column, use sp_unbindrule.
You can bind a new rule to a column or data type without unbinding the previous one; the new rule overrides the
previous one. Rules bound to columns always take precedence over rules bound to alias data types. Binding a rule
to a column replaces a rule already bound to the alias data type of that column. But binding a rule to a data type
does not replace a rule bound to a column of that alias data type. The following table shows the precedence in
effect when rules are bound to columns and to alias data types on which rules already exist.

NEW RULE BOUND TO    OLD RULE BOUND TO ALIAS DATA TYPE    OLD RULE BOUND TO COLUMN

Alias data type      Old rule replaced                    No change

Column               Old rule replaced                    Old rule replaced

If a column has both a default and a rule associated with it, the default must fall within the domain defined by the
rule. A default that conflicts with a rule is never inserted. The SQL Server Database Engine generates an error
message each time it tries to insert such a default.

Permissions
To execute CREATE RULE, at a minimum, a user must have CREATE RULE permission in the current database and
ALTER permission on the schema in which the rule is being created.

Examples
A. Creating a rule with a range
The following example creates a rule that restricts the range of integers inserted into the column or columns to
which this rule is bound.

CREATE RULE range_rule
AS
@range >= $1000 AND @range < $20000;

B. Creating a rule with a list


The following example creates a rule that restricts the actual values entered into the column or columns (to which
this rule is bound) to only those listed in the rule.

CREATE RULE list_rule
AS
@list IN ('1389', '0736', '0877');

C. Creating a rule with a pattern


The following example creates a rule to follow a pattern of any two characters followed by a hyphen ( - ), any
number of characters or no characters, and ending with an integer from 0 through 9 .

CREATE RULE pattern_rule
AS
@value LIKE '__-%[0-9]'

See Also
ALTER TABLE (Transact-SQL )
CREATE DEFAULT (Transact-SQL )
CREATE TABLE (Transact-SQL )
DROP DEFAULT (Transact-SQL )
DROP RULE (Transact-SQL )
Expressions (Transact-SQL )
sp_bindrule (Transact-SQL )
sp_help (Transact-SQL )
sp_helptext (Transact-SQL )
sp_rename (Transact-SQL )
sp_unbindrule (Transact-SQL )
WHERE (Transact-SQL )
CREATE SCHEMA (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a schema in the current database. The CREATE SCHEMA transaction can also create tables and views
within the new schema, and set GRANT, DENY, or REVOKE permissions on those objects.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

CREATE SCHEMA schema_name_clause [ <schema_element> [ ...n ] ]

<schema_name_clause> ::=
{
schema_name
| AUTHORIZATION owner_name
| schema_name AUTHORIZATION owner_name
}

<schema_element> ::=
{
table_definition | view_definition | grant_statement |
revoke_statement | deny_statement
}

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

CREATE SCHEMA schema_name [ AUTHORIZATION owner_name ] [;]

Arguments
schema_name
Is the name by which the schema is identified within the database.
AUTHORIZATION owner_name
Specifies the name of the database-level principal that will own the schema. This principal may own other
schemas, and may not use the current schema as its default schema.
table_definition
Specifies a CREATE TABLE statement that creates a table within the schema. The principal executing this
statement must have CREATE TABLE permission on the current database.
view_definition
Specifies a CREATE VIEW statement that creates a view within the schema. The principal executing this statement
must have CREATE VIEW permission on the current database.
grant_statement
Specifies a GRANT statement that grants permissions on any securable except the new schema.
revoke_statement
Specifies a REVOKE statement that revokes permissions on any securable except the new schema.
deny_statement
Specifies a DENY statement that denies permissions on any securable except the new schema.

Remarks
NOTE
Statements that contain CREATE SCHEMA AUTHORIZATION but do not specify a name, are permitted for backward
compatibility only. The statement does not cause an error, but does not create a schema.

CREATE SCHEMA can create a schema, the tables and views it contains, and GRANT, REVOKE, or DENY
permissions on any securable in a single statement. This statement must be executed as a separate batch. Objects
created by the CREATE SCHEMA statement are created inside the schema that is being created.
CREATE SCHEMA transactions are atomic. If any error occurs during the execution of a CREATE SCHEMA
statement, none of the specified securables are created and no permissions are granted.
Securables to be created by CREATE SCHEMA can be listed in any order, except for views that reference other
views. In that case, the referenced view must be created before the view that references it.
Therefore, a GRANT statement can grant permission on an object before the object itself is created, or a CREATE
VIEW statement can appear before the CREATE TABLE statements that create the tables referenced by the view.
Also, CREATE TABLE statements can declare foreign keys to tables that are defined later in the CREATE SCHEMA
statement.
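As an illustrative sketch of that flexibility (the schema, view, and table names here are hypothetical), the following statement creates a view before the table that it references:

CREATE SCHEMA Manufacturing
    CREATE VIEW PartsView
    AS SELECT PartID FROM Parts
    CREATE TABLE Parts (PartID int NOT NULL);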

NOTE
DENY and REVOKE are supported inside CREATE SCHEMA statements. DENY and REVOKE clauses will be executed in the
order in which they appear in the CREATE SCHEMA statement.

The principal that executes CREATE SCHEMA can specify another database principal as the owner of the schema
being created. This requires additional permissions, as described in the "Permissions" section later in this topic.
The new schema is owned by one of the following database-level principals: database user, database role, or
application role. Objects created within a schema are owned by the owner of the schema, and have a NULL
principal_id in sys.objects. Ownership of schema-contained objects can be transferred to any database-level
principal, but the schema owner always retains CONTROL permission on objects within the schema.
Caution

Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL ).
Implicit Schema and User Creation
In some cases a user can use a database without having a database user account (a database principal in the
database). This can happen in the following situations:
A login has CONTROL SERVER privileges.
A Windows user does not have an individual database user account (a database principal in the database),
but accesses a database as a member of a Windows group which has a database user account (a database
principal for the Windows group).
When a user without a database user account creates an object without specifying an existing schema, a
database principal and default schema will be automatically created in the database for that user. The
created database principal and schema will have the same name as the name that user used when
connecting to SQL Server (the SQL Server authentication login name or the Windows user name).
This behavior is necessary to allow users that are based on Windows groups to create and own objects.
However it can result in the unintentional creation of schemas and users. To avoid implicitly creating users
and schemas, whenever possible explicitly create database principals and assign a default schema. Or
explicitly state an existing schema when creating objects in a database, using two or three-part object
names.

NOTE
The implicit creation of an Azure Active Directory user is not possible on SQL Database. Since creating an Azure AD user
from an external provider must check the user's status in Azure AD, creating the user will fail with error 2760: The specified
schema name "<user_name@domain>" either does not exist or you do not have permission to use it. And then
error 2759: CREATE SCHEMA failed due to previous errors. To work around these errors, create the Azure AD user from
external provider first and then rerun the statement creating the object.

Deprecation Notice
CREATE SCHEMA statements that do not specify a schema name are currently supported for backward
compatibility. Such statements do not actually create a schema inside the database, but they do create tables and
views, and grant permissions. Principals do not need CREATE SCHEMA permission to execute this earlier form of
CREATE SCHEMA, because no schema is being created. This functionality will be removed from a future release
of SQL Server.

Permissions
Requires CREATE SCHEMA permission on the database.
To create an object specified within the CREATE SCHEMA statement, the user must have the corresponding
CREATE permission.
To specify another user as the owner of the schema being created, the caller must have IMPERSONATE
permission on that user. If a database role is specified as the owner, the caller must have one of the following:
membership in the role or ALTER permission on the role.

NOTE
For the backward-compatible syntax, no permissions to CREATE SCHEMA are checked because no schema is being created.

Examples
A. Creating a schema and granting permissions
The following example creates schema Sprockets owned by Annik that contains table NineProngs . The statement
grants SELECT to Mandar and denies SELECT to Prasanna . Note that Sprockets and NineProngs are created in a
single statement.
USE AdventureWorks2012;
GO
CREATE SCHEMA Sprockets AUTHORIZATION Annik
CREATE TABLE NineProngs (source int, cost int, partnumber int)
GRANT SELECT ON SCHEMA::Sprockets TO Mandar
DENY SELECT ON SCHEMA::Sprockets TO Prasanna;
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


B. Creating a schema and a table in the schema
The following example creates schema Sales and then creates a table Sales.Region in that schema.

CREATE SCHEMA Sales;
GO

CREATE TABLE Sales.Region
(Region_id int NOT NULL,
Region_Name char(5) NOT NULL)
WITH (DISTRIBUTION = REPLICATE);
GO

C. Setting the owner of a schema


The following example creates a schema Production owned by Mary .

CREATE SCHEMA Production AUTHORIZATION [Contoso\Mary];
GO

See Also
ALTER SCHEMA (Transact-SQL )
DROP SCHEMA (Transact-SQL )
GRANT (Transact-SQL )
DENY (Transact-SQL )
REVOKE (Transact-SQL )
CREATE VIEW (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.schemas (Transact-SQL )
Create a Database Schema
CREATE SEARCH PROPERTY LIST (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new search property list. A search property list is used to specify one or more search properties that
you want to include in a full-text index.
Transact-SQL Syntax Conventions

Syntax
CREATE SEARCH PROPERTY LIST new_list_name
[ FROM [ database_name. ] source_list_name ]
[ AUTHORIZATION owner_name ]
;

Arguments
new_list_name
Is the name of the new search property list. new_list_name is an identifier with a maximum of 128 characters.
new_list_name must be unique among all property lists in the current database, and conform to the rules for
identifiers. new_list_name will be used when the full-text index is created.
database_name
Is the name of the database where the property list specified by source_list_name is located. If not specified,
database_name defaults to the current database.
database_name must specify the name of an existing database. The login for the current connection must be
associated with an existing user ID in the database specified by database_name. You must also have the required
permissions on the database.
source_list_name
Specifies that the new property list is created by copying an existing property list from database_name. If
source_list_name does not exist, CREATE SEARCH PROPERTY LIST fails with an error. The search properties in
source_list_name are inherited by new_list_name.
AUTHORIZATION owner_name
Specifies the name of a user or role to own the property list. owner_name must either be the name of a role of
which the current user is a member, or the current user must have IMPERSONATE permission on owner_name.
If not specified, ownership is given to the current user.

NOTE
The owner can be changed by using the ALTER AUTHORIZATION Transact-SQL statement.

Remarks
NOTE
For information about property lists in general, see Search Document Properties with Search Property Lists.

By default, a new search property list is empty and you must alter it manually to add one or more search
properties. Alternatively, you can copy an existing search property list. In this case, the new list inherits the search
properties of its source, but you can alter the new list to add or remove search properties. Any properties in the
search property list at the time of the next full population are included in the full-text index.
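For example, the following sketch adds the well-known Title document property to the list created in Example A below; the property set GUID and integer ID shown are the standard values for the Title property, but verify them for your documents:

ALTER SEARCH PROPERTY LIST DocumentPropertyList
ADD 'Title'
WITH ( PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9', PROPERTY_INT_ID = 2 );
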
A CREATE SEARCH PROPERTY LIST statement fails under any of the following conditions:
If the database specified by database_name does not exist.
If the list specified by source_list_name does not exist.
If you do not have the correct permissions.
To add or remove properties from a list
ALTER SEARCH PROPERTY LIST (Transact-SQL )
To drop a property list
DROP SEARCH PROPERTY LIST (Transact-SQL )

Permissions
Requires CREATE FULLTEXT CATALOG permissions in the current database and REFERENCES permissions on
any database from which you copy a source property list.

NOTE
REFERENCES permission is required to associate the list with a full-text index. CONTROL permission is required to add and
remove properties or drop the list. The property list owner can grant REFERENCES or CONTROL permissions on the list.
Users with CONTROL permission can also grant REFERENCES permission to other users.

Examples
A. Creating an empty property list and associating it with an index
The following example creates a new search property list named DocumentPropertyList . The example then uses
an ALTER FULLTEXT INDEX statement to associate the new property list with the full-text index of the
Production.Document table in the AdventureWorks2012 database, without starting a population.

NOTE
For an example that adds several predefined, well-known search properties to this search property list, see ALTER SEARCH
PROPERTY LIST (Transact-SQL). After adding search properties to the list, the database administrator would need to use
another ALTER FULLTEXT INDEX statement with the START FULL POPULATION clause.
CREATE SEARCH PROPERTY LIST DocumentPropertyList;
GO
USE AdventureWorks2012;
ALTER FULLTEXT INDEX ON Production.Document
SET SEARCH PROPERTY LIST DocumentPropertyList
WITH NO POPULATION;
GO

B. Creating a property list from an existing one


The following example creates a new search property list, JobCandidateProperties , from the list created by
Example A, DocumentPropertyList , which is associated with a full-text index in the AdventureWorks2012 database.
The example then uses an ALTER FULLTEXT INDEX statement to associate the new property list with the full-
text index of the HumanResources.JobCandidate table in the AdventureWorks2012 database. This ALTER FULLTEXT
INDEX statement starts a full population, which is the default behavior of the SET SEARCH PROPERTY LIST
clause.

CREATE SEARCH PROPERTY LIST JobCandidateProperties
FROM AdventureWorks2012.DocumentPropertyList;
GO
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
SET SEARCH PROPERTY LIST JobCandidateProperties;
GO

See Also
ALTER SEARCH PROPERTY LIST (Transact-SQL )
DROP SEARCH PROPERTY LIST (Transact-SQL )
sys.registered_search_properties (Transact-SQL )
sys.registered_search_property_lists (Transact-SQL )
sys.dm_fts_index_keywords_by_property (Transact-SQL )
Search Document Properties with Search Property Lists
Find Property Set GUIDs and Property Integer IDs for Search Properties
CREATE SECURITY POLICY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a security policy for row level security.
Transact-SQL Syntax Conventions

Syntax
CREATE SECURITY POLICY [schema_name. ] security_policy_name
{ ADD [ FILTER | BLOCK ] } PREDICATE tvf_schema_name.security_predicate_function_name
( { column_name | expression } [ , …n] ) ON table_schema_name. table_name
[ <block_dml_operation> ] , [ , …n]
[ WITH ( STATE = { ON | OFF } [,] [ SCHEMABINDING = { ON | OFF } ] ) ]
[ NOT FOR REPLICATION ]
[;]

<block_dml_operation>
[ { AFTER { INSERT | UPDATE } }
| { BEFORE { UPDATE | DELETE } } ]

Arguments
security_policy_name
The name of the security policy. Security policy names must comply with the rules for identifiers and must be
unique within the database and to its schema.
schema_name
Is the name of the schema to which the security policy belongs. schema_name is required because of schema
binding.
[ FILTER | BLOCK ]
The type of security predicate for the function being bound to the target table. FILTER predicates silently filter the
rows that are available to read operations. BLOCK predicates explicitly block write operations that violate the
predicate function.
tvf_schema_name.security_predicate_function_name
Is the inline table value function that will be used as a predicate and that will be enforced upon queries against a
target table. At most one security predicate can be defined for a particular DML operation against a particular
table. The inline table value function must have been created using the SCHEMABINDING option.
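For illustration, a minimal predicate function of this kind might look like the following sketch (the rls schema, function name, and SESSION_CONTEXT key are hypothetical; any inline table-valued function created WITH SCHEMABINDING that returns a row when access is allowed will work):

CREATE FUNCTION rls.fn_securitypredicate(@TenantId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
-- Return a row (grant access) only when the row's tenant matches the session's tenant.
RETURN SELECT 1 AS fn_securitypredicate_result
WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int);
GO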
{ column_name | expression }
A column name or expression used as a parameter for the security predicate function. Any column on the target
table can be used. An expression can only include constants, built-in scalar functions, operators, and columns from
the target table. A column name or expression needs to be specified for each parameter of the function.
table_schema_name.table_name
Is the target table to which the security predicate will be applied. Multiple disabled security policies can target a
single table for a particular DML operation, but only one can be enabled at any given time.
<block_dml_operation> The particular DML operation for which the block predicate will be applied. AFTER
specifies that the predicate will be evaluated on the values of the rows after the DML operation was performed
(INSERT or UPDATE ). BEFORE specifies that the predicate will be evaluated on the values of the rows before the
DML operation is performed (UPDATE or DELETE ). If no operation is specified, the predicate will apply to all
operations.
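For example, the following sketch (reusing the hypothetical rls.tenantAccessPredicate function and dbo.Sales table from Example C below) blocks updates on rows the user cannot access, evaluating the predicate before the update is performed:

CREATE SECURITY POLICY rls.TenantUpdateBlockPolicy
-- Evaluate the predicate on row values as they exist before the UPDATE runs.
ADD BLOCK PREDICATE rls.tenantAccessPredicate(TenantId)
ON dbo.Sales BEFORE UPDATE;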
[ STATE = { ON | OFF } ]
Enables or disables the security policy from enforcing its security predicates against the target tables. If not
specified, the security policy being created is enabled.
[ SCHEMABINDING = { ON | OFF } ]
Indicates whether all predicate functions in the policy must be created with the SCHEMABINDING option. By
default, all functions must be created with SCHEMABINDING.
NOT FOR REPLICATION
Indicates that the security policy should not be executed when a replication agent modifies the target object. For
more information, see Control the Behavior of Triggers and Constraints During Synchronization (Replication
Transact-SQL Programming).
[table_schema_name.] table_name
Is the target table to which the security predicate will be applied. Multiple disabled security policies can target a
single table, but only one can be enabled at any given time.

Remarks
When using predicate functions with memory-optimized tables, you must include SCHEMABINDING and use
the WITH NATIVE_COMPILATION compilation hint.
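As a sketch of what such a predicate might look like for a memory-optimized table (the schema, function, and parameter names are hypothetical):

CREATE FUNCTION rls.fn_salesRepPredicate(@SalesRepName sysname)
RETURNS TABLE
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
-- Grant access when the row's sales rep matches the current user (or a manager).
RETURN SELECT 1 AS fn_salesRepPredicate_result
WHERE @SalesRepName = USER_NAME() OR USER_NAME() = N'Manager';
GO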
Block predicates are evaluated after the corresponding DML operation is executed. Therefore, a READ
UNCOMMITTED query can see transient values that will be rolled back.

Permissions
Requires the ALTER ANY SECURITY POLICY permission and ALTER permission on the schema.
Additionally, the following permissions are required for each predicate that is added:
SELECT and REFERENCES permissions on the function being used as a predicate.
REFERENCES permission on the target table being bound to the policy.
REFERENCES permission on every column from the target table used as arguments.

Examples
The following examples demonstrate the use of the CREATE SECURITY POLICY syntax. For an example of a
complete security policy scenario, see Row-Level Security.
A. Creating a security policy
The following syntax creates a security policy with a filter predicate for the Customer table, and leaves the security
policy disabled.

CREATE SECURITY POLICY [FederatedSecurityPolicy]
ADD FILTER PREDICATE [rls].[fn_securitypredicate]([CustomerId])
ON [dbo].[Customer];

B. Creating a policy that affects multiple tables


The following syntax creates a security policy with three filter predicates on three different tables, and enables the
security policy.

CREATE SECURITY POLICY [FederatedSecurityPolicy]
ADD FILTER PREDICATE [rls].[fn_securitypredicate1]([CustomerId])
ON [dbo].[Customer],
ADD FILTER PREDICATE [rls].[fn_securitypredicate1]([VendorId])
ON [dbo].[Vendor],
ADD FILTER PREDICATE [rls].[fn_securitypredicate2]([WingId])
ON [dbo].[Patient]
WITH (STATE = ON);

C. Creating a policy with multiple types of security predicates


Adding both a filter predicate and a block predicate to the Sales table.

CREATE SECURITY POLICY rls.SecPol
ADD FILTER PREDICATE rls.tenantAccessPredicate(TenantId) ON dbo.Sales,
ADD BLOCK PREDICATE rls.tenantAccessPredicate(TenantId) ON dbo.Sales AFTER INSERT;

See Also
Row-Level Security
ALTER SECURITY POLICY (Transact-SQL)
DROP SECURITY POLICY (Transact-SQL)
sys.security_policies (Transact-SQL)
sys.security_predicates (Transact-SQL)
CREATE SELECTIVE XML INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new selective XML index on the specified table and XML column. Selective XML indexes improve the
performance of XML indexing and querying by indexing only the subset of nodes that you typically query. You can
also create secondary selective XML indexes. For information, see Create, Alter, and Drop Secondary Selective
XML Indexes.
Transact-SQL Syntax Conventions

Syntax
CREATE SELECTIVE XML INDEX index_name
ON <table_object> (xml_column_name)
[WITH XMLNAMESPACES (<xmlnamespace_list>)]
FOR (<promoted_node_path_list>)
[WITH (<index_options>)]

<table_object> ::=
{ [database_name. [schema_name ] . | schema_name. ] table_name }

<promoted_node_path_list> ::=
<named_promoted_node_path_item> [, <promoted_node_path_list>]

<named_promoted_node_path_item> ::=
<path_name> = <promoted_node_path_item>

<promoted_node_path_item>::=
<xquery_node_path_item> | <sql_values_node_path_item>

<xquery_node_path_item> ::=
<node_path> [AS XQUERY <xsd_type_or_node_hint>] [SINGLETON]

<xsd_type_or_node_hint> ::=
[<xsd_type>] [MAXLENGTH(x)] | node()

<sql_values_node_path_item> ::=
<node_path> AS SQL <sql_type> [SINGLETON]

<node_path> ::=
character_string_literal

<xsd_type> ::=
character_string_literal

<sql_type> ::=
identifier

<path_name> ::=
identifier

<xmlnamespace_list> ::=
<xmlnamespace_item> [, <xmlnamespace_list>]

<xmlnamespace_item> ::=
<xmlnamespace_uri> AS <xmlnamespace_prefix>

<xmlnamespace_uri> ::=
character_string_literal

<xmlnamespace_prefix> ::=
identifier

<index_options> ::=
(
| PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
)

Arguments
index_name
Is the name of the new index to create. Index names must be unique within a table, but do not have to be unique
within a database. Index names must follow the rules of identifiers.
<table_object> Is the table that contains the XML column to index. Use one of the following formats:
database_name.schema_name.table_name

database_name..table_name

schema_name.table_name

table_name

xml_column_name
Is the name of the XML column that contains the paths to index.
[WITH XMLNAMESPACES (<xmlnamespace_list>)] Is the list of namespaces used by the paths to index.
For information about the syntax of the WITH XMLNAMESPACES clause, see WITH XMLNAMESPACES
(Transact-SQL).
FOR (<promoted_node_path_list>) Is the list of paths to index with optional optimization hints. For
information about the paths and the optimization hints that you can specify in the CREATE or ALTER
statement, see Specify Paths and Optimization Hints for Selective XML Indexes.
WITH <index_options> For information about the index options, see CREATE XML INDEX (Selective XML
Indexes).

Best Practices
Create a selective XML index instead of an ordinary XML index in most cases for better performance and more
efficient storage. However, a selective XML index is not recommended when either of the following conditions is
true:
You need to map a large number of node paths.
You need to support queries for unknown elements or elements in an unknown location.

Limitations and Restrictions


For information about limitations and restrictions, see Selective XML Indexes (SXI).

Security
Permissions
Requires ALTER permission on the table or view. User must be a member of the sysadmin fixed server role or the
db_ddladmin and db_owner fixed database roles.

Examples
The following example shows the syntax for creating a selective XML index. It also shows several variations of the
syntax for describing the paths to be indexed, with optional optimization hints.
CREATE TABLE Tbl ( id INT PRIMARY KEY, xmlcol XML );
GO
CREATE SELECTIVE XML INDEX sxi_index
ON Tbl(xmlcol)
FOR(
pathab = '/a/b' as XQUERY 'node()',
pathabc = '/a/b/c' as XQUERY 'xs:double',
pathdtext = '/a/b/d/text()' as XQUERY 'xs:string' MAXLENGTH(200) SINGLETON,
pathabe = '/a/b/e' as SQL NVARCHAR(100)
);

The following example includes a WITH XMLNAMESPACES clause.

CREATE SELECTIVE XML INDEX on T1(C1)
WITH XMLNAMESPACES ('http://www.tempuri.org/' as myns)
FOR ( path1 = '/myns:book/myns:author/text()' );

See Also
Selective XML Indexes (SXI)
Create, Alter, and Drop Selective XML Indexes
Specify Paths and Optimization Hints for Selective XML Indexes
CREATE SEQUENCE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a sequence object and specifies its properties. A sequence is a user-defined schema bound object that
generates a sequence of numeric values according to the specification with which the sequence was created. The
sequence of numeric values is generated in an ascending or descending order at a defined interval and can be
configured to restart (cycle) when exhausted. Sequences, unlike identity columns, are not associated with specific
tables. Applications refer to a sequence object to retrieve its next value. The relationship between sequences and
tables is controlled by the application. User applications can reference a sequence object and coordinate the
values across multiple rows and tables.
Unlike identity columns values that are generated when rows are inserted, an application can obtain the next
sequence number without inserting the row by calling the NEXT VALUE FOR function. Use
sp_sequence_get_range to get multiple sequence numbers at once.
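For instance, assuming the sequence Test.CountBy1 created in Example A below, a single value and a reserved range of values can be obtained as follows (a minimal sketch):

SELECT NEXT VALUE FOR Test.CountBy1;
GO
-- Reserve a contiguous range of 10 numbers in one call.
DECLARE @FirstValue sql_variant;
EXEC sp_sequence_get_range
@sequence_name = N'Test.CountBy1',
@range_size = 10,
@range_first_value = @FirstValue OUTPUT;
SELECT @FirstValue AS FirstValueOfRange;
GO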
For information and scenarios that use both CREATE SEQUENCE and the NEXT VALUE FOR function, see
Sequence Numbers.
Transact-SQL Syntax Conventions

Syntax
CREATE SEQUENCE [schema_name . ] sequence_name
[ AS [ built_in_integer_type | user-defined_integer_type ] ]
[ START WITH <constant> ]
[ INCREMENT BY <constant> ]
[ { MINVALUE [ <constant> ] } | { NO MINVALUE } ]
[ { MAXVALUE [ <constant> ] } | { NO MAXVALUE } ]
[ CYCLE | { NO CYCLE } ]
[ { CACHE [ <constant> ] } | { NO CACHE } ]
[ ; ]

Arguments
sequence_name
Specifies the unique name by which the sequence is known in the database. Type is sysname.
[ built_in_integer_type | user-defined_integer_type ]
A sequence can be defined as any integer type. The following types are allowed.
tinyint - Range 0 to 255
smallint - Range -32,768 to 32,767
int - Range -2,147,483,648 to 2,147,483,647
bigint - Range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
decimal and numeric with a scale of 0.
Any user-defined data type (alias type) that is based on one of the allowed types.
If no data type is provided, the bigint data type is used as the default.
START WITH <constant>
The first value returned by the sequence object. The START value must be a value less than or equal to the
maximum and greater than or equal to the minimum value of the sequence object. The default start value
for a new sequence object is the minimum value for an ascending sequence object and the maximum value
for a descending sequence object.
INCREMENT BY <constant>
Value used to increment (or decrement if negative) the value of the sequence object for each call to the
NEXT VALUE FOR function. If the increment is a negative value, the sequence object is descending;
otherwise, it is ascending. The increment cannot be 0. The default increment for a new sequence object is 1.
[ MINVALUE <constant> | NO MINVALUE ]
Specifies the bounds for the sequence object. The default minimum value for a new sequence object is the
minimum value of the data type of the sequence object. This is zero for the tinyint data type and a
negative number for all other data types.
[ MAXVALUE <constant> | NO MAXVALUE ]
Specifies the bounds for the sequence object. The default maximum value for a new sequence object is the
maximum value of the data type of the sequence object.
[ CYCLE | NO CYCLE ]
Property that specifies whether the sequence object should restart from the minimum value (or maximum
for descending sequence objects) or throw an exception when its minimum or maximum value is exceeded.
The default cycle option for new sequence objects is NO CYCLE.
Note that cycling restarts from the minimum or maximum value, not from the start value.
[ CACHE [<constant> ] | NO CACHE ]
Increases performance for applications that use sequence objects by minimizing the number of disk IOs
that are required to generate sequence numbers. Defaults to CACHE.
For example, if a cache size of 50 is chosen, SQL Server does not keep 50 individual values cached. It only
caches the current value and the number of values left in the cache. This means that the amount of
memory required to store the cache is always two instances of the data type of the sequence object.

NOTE
If the cache option is enabled without specifying a cache size, the Database Engine will select a size. However, users should
not rely upon the selection being consistent. Microsoft might change the method of calculating the cache size without
notice.

When created with the CACHE option, an unexpected shutdown (such as a power failure) may result in the loss of
sequence numbers remaining in the cache.

General Remarks
Sequence numbers are generated outside the scope of the current transaction. They are consumed whether the
transaction using the sequence number is committed or rolled back.
Cache management
To improve performance, SQL Server pre-allocates the number of sequence numbers specified by the CACHE
argument.
For example, a new sequence is created with a starting value of 1 and a cache size of 15. When the first value is
needed, values 1 through 15 are made available from memory. The last cached value (15) is written to the system
tables on the disk. When all 15 numbers are used, the next request (for number 16) will cause the cache to be
allocated again. The new last cached value (30) will be written to the system tables.
If the Database Engine is stopped after you use 22 numbers, the next intended sequence number in memory (23)
is written to the system tables, replacing the previously stored number.
After SQL Server restarts and a sequence number is needed, the starting number is read from the system tables
(23). The cache amount of 15 numbers (23-37) is allocated to memory and the next non-cache number (38) is
written to the system tables.
If the Database Engine stops abnormally for an event such as a power failure, the sequence restarts with the
number read from the system tables (38). Any sequence numbers allocated to memory (but never requested by a
user or application) are lost. This functionality may leave gaps, but guarantees that the same value will never be
issued two times for a single sequence object unless it is defined as CYCLE or is manually restarted.
The cache is maintained in memory by tracking the current value (the last value issued) and the number of values
left in the cache. Therefore, the amount of memory used by the cache is always two instances of the data type of
the sequence object.
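The scenario described above corresponds to a definition like the following sketch (the sequence name is hypothetical):

CREATE SEQUENCE dbo.CacheDemo
START WITH 1
INCREMENT BY 1
CACHE 15 ;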
Setting the cache argument to NO CACHE writes the current sequence value to the system tables every time that
a sequence is used. This might slow performance by increasing disk access, but reduces the chance of unintended
gaps. Gaps can still occur if numbers are requested using the NEXT VALUE FOR or sp_sequence_get_range
functions, but then the numbers are either not used or are used in uncommitted transactions.
When a sequence object uses the CACHE option, restarting the sequence object or altering the INCREMENT,
CYCLE, MINVALUE, MAXVALUE, or cache size properties causes the cache to be written to the
system tables before the change occurs. Then the cache is reloaded starting with the current value (that is, no
numbers are skipped). Changing the cache size takes effect immediately.
CACHE option when cached values are available
The following process occurs every time that a sequence object is requested to generate the next value for the
CACHE option if there are unused values available in the in-memory cache for the sequence object.
1. The next value for the sequence object is calculated.
2. The new current value for the sequence object is updated in memory.
3. The calculated value is returned to the calling statement.
CACHE option when the cache is exhausted
The following process occurs every time a sequence object is requested to generate the next value for the
CACHE option if the cache has been exhausted:
1. The next value for the sequence object is calculated.
2. The last value for the new cache is calculated.
3. The system table row for the sequence object is locked, and the value calculated in step 2 (the last value) is
written to the system table. A cache-exhausted xevent is fired to notify the user of the new persisted value.
NO CACHE option
The following process occurs every time that a sequence object is requested to generate the next value for
the NO CACHE option:
1. The next value for the sequence object is calculated.
2. The new current value for the sequence object is written to the system table.
3. The calculated value is returned to the calling statement.

Metadata

For information about sequences, query sys.sequences.

Security
Permissions
Requires CREATE SEQUENCE, ALTER, or CONTROL permission on the SCHEMA.
Members of the db_owner and db_ddladmin fixed database roles can create, alter, and drop sequence
objects.
Members of the db_owner and db_datawriter fixed database roles can update sequence objects by causing
them to generate numbers.
The following example grants the user AdventureWorks\Larry permission to create sequences in the Test
schema.

GRANT CREATE SEQUENCE ON SCHEMA::Test TO [AdventureWorks\Larry]

Ownership of a sequence object can be transferred by using the ALTER AUTHORIZATION statement.
If a sequence uses a user-defined data type, the creator of the sequence must have REFERENCES permission on
the type.
Audit
To audit CREATE SEQUENCE, monitor the SCHEMA_OBJECT_CHANGE_GROUP.

Examples
For examples of creating sequences and using the NEXT VALUE FOR function to generate sequence numbers,
see Sequence Numbers.
Most of the following examples create sequence objects in a schema named Test.
To create the Test schema, execute the following statement.

CREATE SCHEMA Test ;
GO
A. Creating a sequence that increases by 1


In the following example, Thierry creates a sequence named CountBy1 that increases by one every time that it is
used.

CREATE SEQUENCE Test.CountBy1
START WITH 1
INCREMENT BY 1 ;
GO

B. Creating a sequence that decreases by 1


The following example starts at 0 and counts into negative numbers by one every time it is used.
CREATE SEQUENCE Test.CountByNeg1
START WITH 0
INCREMENT BY -1 ;
GO

C. Creating a sequence that increases by 5


The following example creates a sequence that increases by 5 every time it is used.

CREATE SEQUENCE Test.CountBy5
START WITH 5
INCREMENT BY 5 ;
GO

D. Creating a sequence that starts with a designated number


After importing a table, Thierry notices that the highest ID number used is 24,328. Thierry needs a sequence that
will generate numbers starting at 24,329. The following code creates a sequence that starts with 24,329 and
increments by 1.

CREATE SEQUENCE Test.ID_Seq
START WITH 24329
INCREMENT BY 1 ;
GO

E. Creating a sequence using default values


The following example creates a sequence using the default values.

CREATE SEQUENCE Test.TestSequence ;

Execute the following statement to view the properties of the sequence.

SELECT * FROM sys.sequences WHERE name = 'TestSequence' ;

A partial list of the output demonstrates the default values.

start_value -9223372036854775808

increment 1

minimum_value -9223372036854775808

maximum_value 9223372036854775807

is_cycling 0

is_cached 1

current_value -9223372036854775808

F. Creating a sequence with a specific data type


The following example creates a sequence using the smallint data type, with a range from -32,768 to 32,767.

CREATE SEQUENCE SmallSeq
AS smallint ;

G. Creating a sequence using all arguments


The following example creates a sequence named DecSeq using the decimal data type, constrained to a range
from 100 to 200. The sequence starts with 125 and increments by 25 every time that a number is generated.
Because the sequence is configured to cycle when the value exceeds the maximum value of 200, the sequence
restarts at the minimum value of 100.

CREATE SEQUENCE Test.DecSeq
AS decimal(3,0)
START WITH 125
INCREMENT BY 25
MINVALUE 100
MAXVALUE 200
CYCLE
CACHE 3 ;

Execute the following statement to see the first value, which is the START WITH value of 125.

SELECT NEXT VALUE FOR Test.DecSeq;

Execute the statement three more times to return 150, 175, and 200.
Execute the statement again to see how the sequence cycles back to the MINVALUE option of 100.
Execute the following code to confirm the cache size and see the current value.

SELECT cache_size, current_value
FROM sys.sequences
WHERE name = 'DecSeq' ;

See Also
ALTER SEQUENCE (Transact-SQL)
DROP SEQUENCE (Transact-SQL)
NEXT VALUE FOR (Transact-SQL)
Sequence Numbers
CREATE SERVER AUDIT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database (Managed Instance only) Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a server audit object using SQL Server Audit. For more information, see SQL Server Audit (Database
Engine).

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
CREATE SERVER AUDIT audit_name
{
TO { [ FILE (<file_options> [ , ...n ] ) ] | APPLICATION_LOG | SECURITY_LOG }
[ WITH ( <audit_options> [ , ...n ] ) ]
[ WHERE <predicate_expression> ]
}
[ ; ]

<file_options>::=
{
FILEPATH = 'os_file_path'
[ , MAXSIZE = { max_size { MB | GB | TB } | UNLIMITED } ]
[ , { MAX_ROLLOVER_FILES = { integer | UNLIMITED } } | { MAX_FILES = integer } ]
[ , RESERVE_DISK_SPACE = { ON | OFF } ]
}

<audit_options>::=
{
[ QUEUE_DELAY = integer ]
[ , ON_FAILURE = { CONTINUE | SHUTDOWN | FAIL_OPERATION } ]
[ , AUDIT_GUID = uniqueidentifier ]
}

<predicate_expression>::=
{
[ NOT ] <predicate_factor>
[ { AND | OR } [ NOT ] { <predicate_factor> } ]
[ ,...n ]
}

<predicate_factor>::=
event_field_name { = | <> | != | > | >= | < | <= } { number | 'string' }

Arguments
TO { FILE | APPLICATION_LOG | SECURITY_LOG }
Determines the location of the audit target. The options are a binary file, The Windows Application log, or the
Windows Security log. SQL Server cannot write to the Windows Security log without configuring additional
settings in Windows. For more information, see Write SQL Server Audit Events to the Security Log.
FILEPATH ='os_file_path'
The path of the audit log. The file name is generated based on the audit name and audit GUID.
MAXSIZE = { max_size }
Specifies the maximum size to which the audit file can grow. The max_size value must be an integer followed by
MB, GB, TB, or UNLIMITED. The minimum size that you can specify for max_size is 2 MB and the maximum is
2,147,483,647 TB. When UNLIMITED is specified, the file grows until the disk is full. (0 also indicates
UNLIMITED.) Specifying a value lower than 2 MB raises the error MSG_MAXSIZE_TOO_SMALL. The default
value is UNLIMITED.
MAX_ROLLOVER_FILES ={ integer | UNLIMITED }
Specifies the maximum number of files to retain in the file system in addition to the current file. The
MAX_ROLLOVER_FILES value must be an integer or UNLIMITED. The default value is UNLIMITED. This
parameter is evaluated whenever the audit restarts (which can happen when the instance of the Database Engine
restarts or when the audit is turned off and then on again) or when a new file is needed because the MAXSIZE
has been reached. When MAX_ROLLOVER_FILES is evaluated, if the number of files exceeds the
MAX_ROLLOVER_FILES setting, the oldest file is deleted. As a result, when the setting of MAX_ROLLOVER_FILES
is 0 a new file is created each time the MAX_ROLLOVER_FILES setting is evaluated. Only one file is automatically
deleted when MAX_ROLLOVER_FILES setting is evaluated, so when the value of MAX_ROLLOVER_FILES is
decreased, the number of files does not shrink unless old files are manually deleted. The maximum number of
files that can be specified is 2,147,483,647.
MAX_FILES =integer
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the maximum number of audit files that can be created. Does not rollover to the first file when the limit
is reached. When the MAX_FILES limit is reached, any action that causes additional audit events to be generated,
fails with an error.
RESERVE_DISK_SPACE = { ON | OFF }
This option pre-allocates the file on the disk to the MAXSIZE value. It applies only if MAXSIZE is not equal to
UNLIMITED. The default value is OFF.
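Taken together, the file options might be combined as in the following sketch (the audit name and path are hypothetical):

CREATE SERVER AUDIT HIPAA_Audit_Rollover
TO FILE ( FILEPATH = 'C:\SQLAudit\',
MAXSIZE = 100 MB,
MAX_ROLLOVER_FILES = 10,
RESERVE_DISK_SPACE = OFF );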
QUEUE_DELAY =integer
Determines the time, in milliseconds, that can elapse before audit actions are forced to be processed. A value of 0
indicates synchronous delivery. The minimum settable queue delay value is 1000 (1 second), which is the default.
The maximum is 2,147,483,647 (2,147,483.647 seconds or 24 days, 20 hours, 31 minutes, 23.647 seconds).
Specifying an invalid number raises the MSG_INVALID_QUEUE_DELAY error.
ON_FAILURE = { CONTINUE | SHUTDOWN | FAIL_OPERATION }
Indicates whether the instance writing to the target should fail, continue, or stop SQL Server if the target cannot
write to the audit log. The default value is CONTINUE.
CONTINUE
SQL Server operations continue. Audit records are not retained. The audit continues to attempt to log events and
resumes if the failure condition is resolved. Selecting the continue option can allow unaudited activity, which
could violate your security policies. Use this option, when continuing operation of the Database Engine is more
important than maintaining a complete audit.
SHUTDOWN
Forces the instance of SQL Server to shut down, if SQL Server fails to write data to the audit target for any
reason. The login executing the CREATE SERVER AUDIT statement must have the SHUTDOWN permission within SQL
Server. The shutdown behavior persists even if the SHUTDOWN permission is later revoked from the executing
login. If the user does not have this permission, then the statement fails and the audit is not created. Use this
option when an audit failure could compromise the security or integrity of the system. For more information, see
SHUTDOWN.
FAIL_OPERATION
Database actions fail if they cause audited events. Actions that do not cause audited events can continue, but no
audited events can occur. The audit continues to attempt to log events and resumes if the failure condition is
resolved. Use this option when maintaining a complete audit is more important than full access to the Database
Engine.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
AUDIT_GUID =uniqueidentifier
To support scenarios such as database mirroring, an audit needs a specific GUID that matches the GUID found in
the mirrored database. The GUID cannot be modified after the audit has been created.
predicate_expression
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the predicate expression used to determine if an event should be processed or not. Predicate
expressions are limited to 3000 characters, which limits string arguments.
event_field_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Is the name of the event field that identifies the predicate source. Audit fields are described in sys.fn_get_audit_file
(Transact-SQL). All fields can be filtered except file_name, audit_file_offset, and event_time.

NOTE
While the action_id and class_type fields are of type varchar in sys.fn_get_audit_file, they can only be used with
numbers when they are a predicate source for filtering. To get the list of values to be used with class_type , execute the
following query:

SELECT spt.[name], spt.[number]
FROM [master].[dbo].[spt_values] spt
WHERE spt.[type] = N'EOD'
ORDER BY spt.[name];

number
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Is any numeric type including decimal. Limitations are the lack of available physical memory or a number that is
too large to be represented as a 64-bit integer.
' string '
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Either an ANSI or Unicode string as required by the predicate compare. No implicit string type conversion is
performed for the predicate compare functions. Passing the wrong type results in an error.

Remarks
When a server audit is created, it is in a disabled state.
The CREATE SERVER AUDIT statement is in a transaction's scope. If the transaction is rolled back, the statement
is also rolled back.
Permissions
To create, alter, or drop a server audit, principals require the ALTER ANY SERVER AUDIT or the CONTROL
SERVER permission.
When you are saving audit information to a file, to help prevent tampering, restrict access to the file location.

Examples
A. Creating a server audit with a file target
The following example creates a server audit called HIPAA_Audit with a binary file as the target and no options.

CREATE SERVER AUDIT HIPAA_Audit
TO FILE ( FILEPATH ='\\SQLPROD_1\Audit\' );

B. Creating a server audit with a Windows Application log target with options
The following example creates a server audit called HIPAA_Audit with the target set for the Windows Application
log. The queue is written every second and shuts down the SQL Server engine on failure.

CREATE SERVER AUDIT HIPAA_Audit
TO APPLICATION_LOG
WITH ( QUEUE_DELAY = 1000, ON_FAILURE = SHUTDOWN);

C. Creating a server audit containing a WHERE clause


The following example creates a database, schema, and two tables for the example. The table named
DataSchema.SensitiveData contains confidential data and access to the table must be recorded in the audit. The
table named DataSchema.GeneralData does not contain confidential data. The database audit specification audits
access to all objects in the DataSchema schema. The server audit is created with a WHERE clause that limits the
server audit to only the SensitiveData table. The server audit presumes an audit folder exists at C:\SQLAudit .
CREATE DATABASE TestDB;
GO
USE TestDB;
GO
CREATE SCHEMA DataSchema;
GO
CREATE TABLE DataSchema.GeneralData (ID int PRIMARY KEY, DataField varchar(50) NOT NULL);
GO
CREATE TABLE DataSchema.SensitiveData (ID int PRIMARY KEY, DataField varchar(50) NOT NULL);
GO
-- Create the server audit in the master database
USE master;
GO
CREATE SERVER AUDIT AuditDataAccess
TO FILE ( FILEPATH ='C:\SQLAudit\' )
WHERE object_name = 'SensitiveData' ;
GO
ALTER SERVER AUDIT AuditDataAccess WITH (STATE = ON);
GO
-- Create the database audit specification in the TestDB database
USE TestDB;
GO
CREATE DATABASE AUDIT SPECIFICATION [FilterForSensitiveData]
FOR SERVER AUDIT [AuditDataAccess]
ADD (SELECT ON SCHEMA::[DataSchema] BY [public])
WITH (STATE = ON);
GO
-- Trigger the audit event by selecting from tables
SELECT ID, DataField FROM DataSchema.GeneralData;
SELECT ID, DataField FROM DataSchema.SensitiveData;
GO
-- Check the audit for the filtered content
SELECT * FROM fn_get_audit_file('C:\SQLAudit\AuditDataAccess_*.sqlaudit',default,default);
GO

See Also
ALTER SERVER AUDIT (Transact-SQL)
DROP SERVER AUDIT (Transact-SQL)
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL)
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL)
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL)
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sys.fn_get_audit_file (Transact-SQL)
sys.server_audits (Transact-SQL)
sys.server_file_audits (Transact-SQL)
sys.server_audit_specifications (Transact-SQL)
sys.server_audit_specification_details (Transact-SQL)
sys.database_audit_specifications (Transact-SQL)
sys.database_audit_specification_details (Transact-SQL)
sys.dm_server_audit_status (Transact-SQL)
sys.dm_audit_actions (Transact-SQL)
sys.dm_audit_class_type_map (Transact-SQL)
Create a Server Audit and Server Audit Specification
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a server audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions

Syntax
CREATE SERVER AUDIT SPECIFICATION audit_specification_name
FOR SERVER AUDIT audit_name
{
{ ADD ( { audit_action_group_name } )
} [, ...n]
[ WITH ( STATE = { ON | OFF } ) ]
}
[ ; ]

Arguments
audit_specification_name
Name of the server audit specification.
audit_name
Name of the audit to which this specification is applied.
audit_action_group_name
Name of a group of server-level auditable actions. For a list of Audit Action Groups, see SQL Server Audit Action
Groups and Actions.
WITH ( STATE = { ON | OFF } )
Enables or disables the audit from collecting records for this audit specification.

Remarks
An audit must exist before creating a server audit specification for it. When a server audit specification is created,
it is in a disabled state.

Permissions
Users with the ALTER ANY SERVER AUDIT permission can create server audit specifications and bind them to
any audit.
After a server audit specification is created, it can be viewed by principals with the CONTROL SERVER or
ALTER ANY SERVER AUDIT permission, the sysadmin account, or principals having explicit access to the audit.
Examples
The following example creates a server audit specification called HIPPA_Audit_Specification that audits failed
logins, for a SQL Server Audit called HIPPA_Audit .

CREATE SERVER AUDIT SPECIFICATION HIPPA_Audit_Specification
FOR SERVER AUDIT HIPPA_Audit
ADD (FAILED_LOGIN_GROUP);
GO

For a full example about how to create an audit, see SQL Server Audit (Database Engine).

See Also
CREATE SERVER AUDIT (Transact-SQL)
ALTER SERVER AUDIT (Transact-SQL)
DROP SERVER AUDIT (Transact-SQL)
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL)
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL)
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sys.fn_get_audit_file (Transact-SQL)
sys.server_audits (Transact-SQL)
sys.server_file_audits (Transact-SQL)
sys.server_audit_specifications (Transact-SQL)
sys.server_audit_specification_details (Transact-SQL)
sys.database_audit_specifications (Transact-SQL)
sys.database_audit_specification_details (Transact-SQL)
sys.dm_server_audit_status (Transact-SQL)
sys.dm_audit_actions (Transact-SQL)
Create a Server Audit and Server Audit Specification
CREATE SERVER ROLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new user-defined server role.
Transact-SQL Syntax Conventions

Syntax
CREATE SERVER ROLE role_name [ AUTHORIZATION server_principal ]

Arguments
role_name
Is the name of the server role to be created.
AUTHORIZATION server_principal
Is the login that will own the new server role. If no login is specified, the server role will be owned by the login
that executes CREATE SERVER ROLE.

Remarks
Server roles are server-level securables. After you create a server role, configure the server-level permissions of
the role by using GRANT, DENY, and REVOKE. To add logins to or remove logins from a server role, use ALTER
SERVER ROLE (Transact-SQL). To drop a server role, use DROP SERVER ROLE (Transact-SQL). For more
information, see sys.server_principals (Transact-SQL).
You can view the server roles by querying the sys.server_role_members and sys.server_principals catalog views.
Server roles cannot be granted permission on database-level securables. To create database roles, see CREATE
ROLE (Transact-SQL).
For information about designing a permissions system, see Getting Started with Database Engine Permissions.
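As a sketch of that workflow (the role name, permission, and login are hypothetical):

CREATE SERVER ROLE monitoring;
-- Server roles receive server-level permissions through GRANT.
GRANT VIEW SERVER STATE TO monitoring;
-- Logins are added with ALTER SERVER ROLE.
ALTER SERVER ROLE monitoring ADD MEMBER [AdventureWorks\Larry];
GO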

Permissions
Requires CREATE SERVER ROLE permission or membership in the sysadmin fixed server role.
Also requires IMPERSONATE on the server_principal for logins, ALTER permission for server roles used as the
server_principal, or membership in a Windows group that is used as the server_principal.
This fires the Audit Server Principal Management event with the object type set to server role and the event type
set to add.
When you use the AUTHORIZATION option to assign server role ownership, the following permissions are also
required:
To assign ownership of a server role to another login, requires IMPERSONATE permission on that login.
To assign ownership of a server role to another server role, requires membership in the recipient server
role or ALTER permission on that server role.

Examples
A. Creating a server role that is owned by a login
The following example creates the server role buyers that is owned by login BenMiller .

USE master;
CREATE SERVER ROLE buyers AUTHORIZATION BenMiller;
GO

B. Creating a server role that is owned by a fixed server role


The following example creates the server role auditors that is owned by the securityadmin fixed server role.

USE master;
CREATE SERVER ROLE auditors AUTHORIZATION securityadmin;
GO

See Also
DROP SERVER ROLE (Transact-SQL)
Principals (Database Engine)
EVENTDATA (Transact-SQL)
sp_addrolemember (Transact-SQL)
sys.database_role_members (Transact-SQL)
sys.database_principals (Transact-SQL)
Getting Started with Database Engine Permissions
CREATE SERVICE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new service. A Service Broker service is a name for a specific task or set of tasks. Service Broker uses
the name of the service to route messages, deliver messages to the correct queue within a database, and enforce
the contract for a conversation.
Transact-SQL Syntax Conventions

Syntax
CREATE SERVICE service_name
[ AUTHORIZATION owner_name ]
ON QUEUE [ schema_name. ]queue_name
[ ( contract_name | [DEFAULT][ ,...n ] ) ]
[ ; ]

Arguments
service_name
Is the name of the service to create. A new service is created in the current database and owned by the principal
specified in the AUTHORIZATION clause. Server, database, and schema names cannot be specified. The
service_name must be a valid sysname.

NOTE
Do not create a service that uses the keyword ANY for the service_name. When you specify ANY for a service name in
CREATE BROKER PRIORITY, the priority is considered for all services. It is not limited to a service whose name is ANY.

AUTHORIZATION owner_name
Sets the owner of the service to the specified database user or role. When the current user is dbo or sa,
owner_name may be the name of any valid user or role. Otherwise, owner_name must be the name of the current
user, the name of a user that the current user has IMPERSONATE permission for, or the name of a role to which
the current user belongs.
ON QUEUE [ schema_name. ] queue_name
Specifies the queue that receives messages for the service. The queue must exist in the same database as the
service. If no schema_name is provided, the schema is the default schema for the user that executes the statement.
contract_name
Specifies a contract for which this service may be a target. Service programs initiate conversations to this service
using the contracts specified. If no contracts are specified, the service may only initiate conversations.
[DEFAULT]
Specifies that the service may be a target for conversations that follow the DEFAULT contract. In the context of
this clause, DEFAULT is not a keyword, and must be delimited as an identifier. The DEFAULT contract allows both
sides of the conversation to send messages of message type DEFAULT. Message type DEFAULT uses validation
NONE.

Remarks
A service exposes the functionality provided by the contracts with which it is associated, so that they can be used
by other services. The CREATE SERVICE statement specifies the contracts that this service is the target for. A
service can only be a target for conversations that use the contracts specified by the service. A service that
specifies no contracts exposes no functionality to other services.
Conversations initiated from this service may use any contract. You create a service without specifying contracts
when the service only initiates conversations.
When Service Broker accepts a new conversation from a remote service, the name of the target service
determines the queue where the broker places messages in the conversation.
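Because the queue and any contracts must exist before the service is created, a CREATE SERVICE statement is typically preceded by statements like the following sketch (the message type, contract, and queue names are hypothetical):

CREATE MESSAGE TYPE [//Adventure-Works.com/Expenses/SubmitExpense]
VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission]
( [//Adventure-Works.com/Expenses/SubmitExpense] SENT BY INITIATOR );
CREATE QUEUE dbo.ExpenseQueue;
GO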

Permissions
Permission for creating a service defaults to members of the db_ddladmin or db_owner fixed database roles and
the sysadmin fixed server role. The user executing the CREATE SERVICE statement must have REFERENCES
permission on the queue and all contracts specified.
REFERENCES permission for a service defaults to the owner of the service, members of the db_ddladmin or
db_owner fixed database roles, and members of the sysadmin fixed server role. SEND permissions for a service
default to the owner of the service, members of the db_owner fixed database role, and members of the sysadmin
fixed server role.
A service may not be a temporary object. Service names beginning with # are allowed, but are permanent objects.

Examples
A. Creating a service with one contract
The following example creates the service //Adventure-Works.com/Expenses on the ExpenseQueue queue in the dbo
schema. Dialogs that target this service must follow the contract
//Adventure-Works.com/Expenses/ExpenseSubmission .

CREATE SERVICE [//Adventure-Works.com/Expenses]
ON QUEUE [dbo].[ExpenseQueue]
([//Adventure-Works.com/Expenses/ExpenseSubmission]) ;

B. Creating a service with multiple contracts


The following example creates the service //Adventure-Works.com/Expenses on the ExpenseQueue queue. Dialogs
that target this service must either follow the contract //Adventure-Works.com/Expenses/ExpenseSubmission or the
contract //Adventure-Works.com/Expenses/ExpenseProcessing .

CREATE SERVICE [//Adventure-Works.com/Expenses] ON QUEUE ExpenseQueue
([//Adventure-Works.com/Expenses/ExpenseSubmission],
[//Adventure-Works.com/Expenses/ExpenseProcessing]) ;

C. Creating a service with no contracts


The following example creates the service //Adventure-Works.com/Expenses on the ExpenseQueue queue. This service
has no contract information. Therefore, the service can only be the initiator of a dialog.
CREATE SERVICE [//Adventure-Works.com/Expenses] ON QUEUE ExpenseQueue ;

See Also
ALTER SERVICE (Transact-SQL)
DROP SERVICE (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE SPATIAL INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a spatial index on a specified table and column in SQL Server. An index can be created before there is
data in the table. Indexes can be created on tables or views in another database by specifying a qualified database
name. Spatial indexes require the table to have a clustered primary key. For information about spatial indexes, see
Spatial Indexes Overview.
Transact-SQL Syntax Conventions

Syntax
-- SQL Server Syntax

CREATE SPATIAL INDEX index_name
ON <object> ( spatial_column_name )
{
<geometry_tessellation> | <geography_tessellation>
}
[ ON { filegroup_name | "default" } ]
[;]

<object> ::=
[ database_name. [ schema_name ] . | schema_name. ] table_name

<geometry_tessellation> ::=
{
<geometry_automatic_grid_tessellation>
| <geometry_manual_grid_tessellation>
}

<geometry_automatic_grid_tessellation> ::=
{
[ USING GEOMETRY_AUTO_GRID ]
WITH (
<bounding_box>
[ [,] <tessellation_cells_per_object> [ ,…n] ]
[ [,] <spatial_index_option> [ ,…n] ]
)
}

<geometry_manual_grid_tessellation> ::=
{
[ USING GEOMETRY_GRID ]
WITH (
<bounding_box>
[ [,]<tessellation_grid> [ ,…n] ]
[ [,]<tessellation_cells_per_object> [ ,…n] ]
[ [,]<spatial_index_option> [ ,…n] ]
)
}

<geography_tessellation> ::=
{
<geography_automatic_grid_tessellation> | <geography_manual_grid_tessellation>
}

<geography_automatic_grid_tessellation> ::=
{
[ USING GEOGRAPHY_AUTO_GRID ]
[ WITH (
[ [,] <tessellation_cells_per_object> [ ,…n] ]
[ [,] <spatial_index_option> ]
) ]
}

<geography_manual_grid_tessellation> ::=
{
[ USING GEOGRAPHY_GRID ]
[ WITH (
[ <tessellation_grid> [ ,…n] ]
[ [,] <tessellation_cells_per_object> [ ,…n] ]
[ [,] <spatial_index_option> [ ,…n] ]
) ]
}

<bounding_box> ::=
{
BOUNDING_BOX = ( {
xmin, ymin, xmax, ymax
| <named_bb_coordinate>, <named_bb_coordinate>, <named_bb_coordinate>, <named_bb_coordinate>
} )
}

<named_bb_coordinate> ::= { XMIN = xmin | YMIN = ymin | XMAX = xmax | YMAX=ymax }

<tessellation_grid> ::=
{
GRIDS = ( { <grid_level> [ ,...n ] | <grid_size>, <grid_size>, <grid_size>, <grid_size> }
)
}
<tessellation_cells_per_object> ::=
{
CELLS_PER_OBJECT = n
}

<grid_level> ::=
{
LEVEL_1 = <grid_size>
| LEVEL_2 = <grid_size>
| LEVEL_3 = <grid_size>
| LEVEL_4 = <grid_size>
}

<grid_size> ::= { LOW | MEDIUM | HIGH }

<spatial_index_option> ::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| STATISTICS_NORECOMPUTE = { ON | OFF }
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE }
}
-- Windows Azure SQL Database Syntax

CREATE SPATIAL INDEX index_name
ON <object> ( spatial_column_name )
{
[ USING <geometry_grid_tessellation> ]
WITH ( <bounding_box>
[ [,] <tessellation_parameters> [,... n ] ]
[ [,] <spatial_index_option> [,... n ] ] )
| [ USING <geography_grid_tessellation> ]
[ WITH ( [ <tessellation_parameters> [,... n ] ]
[ [,] <spatial_index_option> [,... n ] ] ) ]
}

[ ; ]

<object> ::=
{
[database_name. [schema_name ] . | schema_name. ]
table_name
}

<geometry_grid_tessellation> ::=
{ GEOMETRY_GRID }

<bounding_box> ::=
BOUNDING_BOX = ( {
xmin, ymin, xmax, ymax
| <named_bb_coordinate>, <named_bb_coordinate>, <named_bb_coordinate>, <named_bb_coordinate>
} )

<named_bb_coordinate> ::= { XMIN = xmin | YMIN = ymin | XMAX = xmax | YMAX=ymax }

<tessellation_parameters> ::=
{
GRIDS = ( { <grid_density> [ ,... n ] | <density>, <density>, <density>, <density> } )
| CELLS_PER_OBJECT = n
}

<grid_density> ::=
{
LEVEL_1 = <density>
| LEVEL_2 = <density>
| LEVEL_3 = <density>
| LEVEL_4 = <density>
}

<density> ::= { LOW | MEDIUM | HIGH }

<geography_grid_tessellation> ::=
{ GEOGRAPHY_GRID }

<spatial_index_option> ::=
{
IGNORE_DUP_KEY = OFF
| STATISTICS_NORECOMPUTE = { ON | OFF }
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
}

Arguments
index_name
Is the name of the index. Index names must be unique within a table but do not have to be unique within a
database. Index names must follow the rules of identifiers.
ON <object> ( spatial_column_name )
Specifies the object (database, schema, or table) on which the index is to be created and the name of spatial
column.
spatial_column_name specifies the spatial column on which the index is based. Only one spatial column can be
specified in a single spatial index definition; however, multiple spatial indexes can be created on a geometry or
geography column.
USING
Indicates the tessellation scheme for the spatial index. This parameter uses the type-specific value, shown in the
following table:

DATA TYPE OF COLUMN TESSELLATION SCHEME

geometry GEOMETRY_GRID

geometry GEOMETRY_AUTO_GRID

geography GEOGRAPHY_GRID

geography GEOGRAPHY_AUTO_GRID

A spatial index can be created only on a column of type geometry or geography. Otherwise, an error is raised.
Also, if an invalid parameter for a given type is passed, an error is raised.
For information about how SQL Server implements tessellation, see Spatial Indexes Overview.
ON filegroup_name
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Creates the specified index on the specified filegroup. If no location is specified and the table is not partitioned,
the index uses the same filegroup as the underlying table. The filegroup must already exist.
ON "default"
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Creates the specified index on the default filegroup.
The term default, in this context, is not a keyword. It is an identifier for the default filegroup and must be
delimited, as in ON "default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be
ON for the current session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER
(Transact-SQL).
<object>::=
Is the fully qualified or non-fully qualified object to be indexed.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to be indexed.
Windows Azure SQL Database supports the three-part name format database_name.
[schema_name].object_name when the database_name is the current database or the database_name is tempdb
and the object_name starts with #.
USING Options
GEOMETRY_GRID
Specifies the geometry grid tessellation scheme that you are using. GEOMETRY_GRID can be specified only on
a column of the geometry data type. GEOMETRY_GRID allows for manual adjusting of the tessellation scheme.
GEOMETRY_AUTO_GRID
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Can be specified only on a column of the geometry data type. This is the default for this data type and does not
need to be specified.
GEOGRAPHY_GRID
Specifies the geography grid tessellation scheme. GEOGRAPHY_GRID can be specified only on a column of the
geography data type.
GEOGRAPHY_AUTO_GRID
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Can be specified only on a column of the geography data type. This is the default for this data type and does not
need to be specified.
WITH Options
BOUNDING_BOX
Specifies a numeric four-tuple that defines the four coordinates of the bounding box: the x-min and y-min
coordinates of the lower-left corner, and the x-max and y-max coordinates of the upper-right corner.
xmin
Specifies the x-coordinate of the lower-left corner of the bounding box.
ymin
Specifies the y-coordinate of the lower-left corner of the bounding box.
xmax
Specifies the x-coordinate of the upper-right corner of the bounding box.
ymax
Specifies the y-coordinate of the upper-right corner of the bounding box.
XMIN = xmin
Specifies the property name and value for the x-coordinate of the lower-left corner of the bounding box.
YMIN =ymin
Specifies the property name and value for the y-coordinate of the lower-left corner of the bounding box.
XMAX =xmax
Specifies the property name and value for the x-coordinate of the upper-right corner of the bounding box.
YMAX =ymax
Specifies the property name and value for the y-coordinate of the upper-right corner of the bounding box.
NOTE
Bounding-box coordinates apply only within a USING GEOMETRY_GRID clause.
xmax must be greater than xmin and ymax must be greater than ymin. You can specify any valid float value representation,
assuming that: xmax > xmin and ymax > ymin. Otherwise the appropriate errors are raised.
There are no default values.
The bounding-box property names are case-insensitive regardless of the database collation.

To specify property names, you must specify each of them once and only once. You can specify them in any order.
For example, the following clauses are equivalent:
BOUNDING_BOX =( XMIN =xmin, YMIN =ymin, XMAX =xmax, YMAX =ymax )
BOUNDING_BOX =( XMIN =xmin, XMAX =xmax, YMIN =ymin, YMAX =ymax)
GRIDS
Defines the density of the grid at each level of a tessellation scheme. When GEOMETRY_AUTO_GRID and
GEOGRAPHY_AUTO_GRID are selected, this option is disabled.
For information about tessellation, see Spatial Indexes Overview.
The GRIDS parameters are as follows:
LEVEL_1
Specifies the first-level (top) grid.
LEVEL_2
Specifies the second-level grid.
LEVEL_3
Specifies the third-level grid.
LEVEL_4
Specifies the fourth-level grid.
LOW
Specifies the lowest possible density for the grid at a given level. LOW equates to 16 cells (a 4x4 grid).
MEDIUM
Specifies the medium density for the grid at a given level. MEDIUM equates to 64 cells (an 8x8 grid).
HIGH
Specifies the highest possible density for the grid at a given level. HIGH equates to 256 cells (a 16x16 grid).

NOTE
Using level names allows you to specify the levels in any order and to omit levels. If you use the name for any level, you
must use the name of any other level that you specify. If you omit a level, its density defaults to MEDIUM.

WARNING
If an invalid density is specified, an error is raised.

CELLS_PER_OBJECT =n
Specifies the number of tessellation cells per object that can be used for a single spatial object in the index by the
tessellation process. n can be any integer between 1 and 8192, inclusive. If an invalid number is passed or the
number is larger than the maximum number of cells for the specified tessellation, an error is raised.
CELLS_PER_OBJECT has the following default values:

USING OPTION DEFAULT CELLS PER OBJECT

GEOMETRY_GRID 16

GEOMETRY_AUTO_GRID 8

GEOGRAPHY_GRID 16

GEOGRAPHY_AUTO_GRID 12

At the top level, if an object covers more cells than specified by n, the indexing uses as many cells as necessary to
provide a complete top-level tessellation. In such cases, an object might receive more than the specified number
of cells. In this case, the maximum number is the number of cells generated by the top-level grid, which depends
on the density.
The CELLS_PER_OBJECT value is used by the cells-per-object tessellation rule. For information about the
tessellation rules, see Spatial Indexes Overview.
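Putting the tessellation options together, a manual-grid geometry index might be declared as in the following sketch (the table, column, and index names are hypothetical; the table is assumed to have a clustered primary key):

CREATE SPATIAL INDEX SIndx_SpatialTable_geom
ON dbo.SpatialTable ( GeomCol )
USING GEOMETRY_GRID
WITH (
BOUNDING_BOX = ( XMIN = 0, YMIN = 0, XMAX = 500, YMAX = 200 ),
GRIDS = ( LEVEL_1 = LOW, LEVEL_2 = MEDIUM, LEVEL_3 = MEDIUM, LEVEL_4 = HIGH ),
CELLS_PER_OBJECT = 16 );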
PAD_INDEX = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies index padding. The default is OFF.
ON
Indicates that the percentage of free space that is specified by fillfactor is applied to the intermediate-level pages
of the index.
OFF or fillfactor is not specified
Indicates that the intermediate-level pages are filled to near capacity, leaving sufficient space for at least one row
of the maximum size the index can have, considering the set of keys on the intermediate pages.
The PAD_INDEX option is useful only when FILLFACTOR is specified, because PAD_INDEX uses the percentage
specified by FILLFACTOR. If the percentage specified for FILLFACTOR is not large enough to allow for one row,
the Database Engine internally overrides the percentage to allow for the minimum. The number of rows on an
intermediate index page is never less than two, regardless of how low the fillfactor value is.
FILLFACTOR =fillfactor
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or rebuild. fillfactor must be an integer value from 1 to 100. The default is 0. If fillfactor is
100 or 0, the Database Engine creates indexes with leaf pages filled to capacity.

NOTE
Fill factor values 0 and 100 are the same in all respects.

The FILLFACTOR setting applies only when the index is created or rebuilt. The Database Engine does not
dynamically keep the specified percentage of empty space in the pages. To view the fill factor setting, use the
sys.indexes catalog view.
IMPORTANT
Creating a clustered index with a FILLFACTOR less than 100 affects the amount of storage space the data occupies because
the Database Engine redistributes the data when it creates the clustered index.

For more information, see Specify Fill Factor for an Index.


SORT_IN_TEMPDB = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies whether to store temporary sort results in tempdb. The default is OFF.
ON
The intermediate sort results that are used to build the index are stored in tempdb. This may reduce the time
required to create an index if tempdb is on a different set of disks than the user database. However, this increases
the amount of disk space that is used during the index build.
OFF
The intermediate sort results are stored in the same database as the index.
In addition to the space required in the user database to create the index, tempdb must have about the same
amount of additional space to hold the intermediate sort results. For more information, see SORT_IN_TEMPDB
Option For Indexes.
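For illustration, the following sketch combines FILLFACTOR, PAD_INDEX, and SORT_IN_TEMPDB in one spatial
index. It assumes the SpatialTable table and geometry_col column from the examples later in this topic; the index
name is hypothetical.

CREATE SPATIAL INDEX SIndx_Sketch_Fill
ON SpatialTable(geometry_col)
WITH ( BOUNDING_BOX = ( 0, 0, 500, 200 ),
FILLFACTOR = 80,
PAD_INDEX = ON,
SORT_IN_TEMPDB = ON );

Because PAD_INDEX = ON, the intermediate-level pages also reserve the 20 percent of free space implied by
FILLFACTOR = 80.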
IGNORE_DUP_KEY =OFF
Has no effect for spatial indexes because the index type is never unique. Do not set this option to ON, or else an
error is raised.
STATISTICS_NORECOMPUTE = { ON | OFF}
Specifies whether distribution statistics are recomputed. The default is OFF.
ON
Out-of-date statistics are not automatically recomputed.
OFF
Automatic statistics updating is enabled.
To restore automatic statistics updating, set STATISTICS_NORECOMPUTE to OFF, or execute UPDATE
STATISTICS without the NORECOMPUTE clause.

IMPORTANT
Disabling automatic recomputation of distribution statistics may prevent the query optimizer from picking optimal
execution plans for queries involving the table.

DROP_EXISTING = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies that the named, preexisting spatial index is dropped and rebuilt. The default is OFF.
ON
The existing index is dropped and rebuilt. The index name specified must be the same as a currently existing
index; however, the index definition can be modified. For example, you can specify different columns, sort order,
partition scheme, or index options.
OFF
An error is displayed if the specified index name already exists.
The index type cannot be changed by using DROP_EXISTING.
ONLINE =OFF
Specifies that underlying tables and associated indexes are not available for queries and data modification during
the index operation. In this version of SQL Server, online index builds are not supported for spatial indexes. If this
option is set to ON for a spatial index, an error is raised. Either omit the ONLINE option or set ONLINE to OFF.
An offline index operation that creates, rebuilds, or drops a spatial index acquires a schema modification (Sch-M)
lock on the table. This prevents all user access to the underlying table for the duration of the operation.

NOTE
Online index operations are not available in every edition of SQL Server. For a list of features that are supported by the
editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.

ALLOW_ROW_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when accessing the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
MAXDOP =max_degree_of_parallelism
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Overrides the max degree of parallelism configuration option for the duration of the index operation. Use
MAXDOP to limit the number of processors used in a parallel plan execution. The maximum is 64 processors.

IMPORTANT
Although the MAXDOP option is syntactically supported, CREATE SPATIAL INDEX currently always uses only a single
processor.

max_degree_of_parallelism can be:


1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel index operation to the specified number or fewer
based on the current system workload.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.

NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.

DATA_COMPRESSION = {NONE | ROW | PAGE }


Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Determines the level of data compression used by the index.
NONE
No compression is used on the index data.
ROW
Row compression is used on the index data.
PAGE
Page compression is used on the index data.

Remarks
Every option can be specified only once per CREATE SPATIAL INDEX statement. Specifying a duplicate of any
option raises an error.
You can create up to 249 spatial indexes on each spatial column in a table. Creating more than one spatial index
on the same spatial column can be useful, for example, to index different tessellation parameters in a single column.

IMPORTANT
There are a number of other restrictions on creating a spatial index. For more information, see Spatial Indexes Overview.

An index build cannot make use of available process parallelism.

Methods Supported on Spatial Indexes


Under certain conditions, spatial indexes support a number of set-oriented geometry methods. For more
information, see Spatial Indexes Overview.

Spatial Indexes and Partitioning


By default, if a spatial index is created on a partitioned table, the index is partitioned according to the partition
scheme of the table. This assures that index data and the related row are stored in the same partition.
In this case, to alter the partition scheme of the base table, you would have to drop the spatial index before you
can repartition the base table. To avoid this restriction, when you are creating a spatial index, you can specify the
"ON filegroup" option. For more information, see "Spatial Indexes and Filegroups," later in this topic.

Spatial Indexes and Filegroups


By default, spatial indexes are partitioned to the same filegroups as the table on which the index is specified. This
can be overridden by using the filegroup specification:
[ ON { filegroup_name | "default" } ]
If you specify a filegroup for a spatial index, the index is placed on that filegroup, regardless of the partitioning
scheme of the table.
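For example, the following sketch places the index on a specific filegroup. The filegroup name FG_Spatial is
hypothetical and must already exist in the database.

CREATE SPATIAL INDEX SIndx_Sketch_Filegroup
ON SpatialTable(geometry_col)
WITH ( BOUNDING_BOX = ( 0, 0, 500, 200 ) )
ON FG_Spatial;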

Catalog Views for Spatial Indexes


The following catalog views are specific to spatial indexes:
sys.spatial_indexes
Represents the main index information of the spatial indexes.
sys.spatial_index_tessellations
Represents the information about the tessellation scheme and parameters of each of the spatial indexes.
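As a minimal sketch, the following query joins the two views to list each spatial index together with its
tessellation scheme:

SELECT i.name, i.spatial_index_type_desc, t.tessellation_scheme
FROM sys.spatial_indexes AS i
JOIN sys.spatial_index_tessellations AS t
ON t.object_id = i.object_id AND t.index_id = i.index_id;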

Additional Remarks about creating indexes


For more information about creating indexes, see the "Remarks" section in CREATE INDEX (Transact-SQL ).

Permissions
The user must have ALTER permission on the table or view, or be a member of the sysadmin fixed server role or
the db_ddladmin and db_owner fixed database roles.

Examples
A. Creating a spatial index on a geometry column
The following example creates a table named SpatialTable that contains a geometry type column,
geometry_col . The example then creates a spatial index, SIndx_SpatialTable_geometry_col1 , on the geometry_col .
The example uses the default tessellation scheme and specifies the bounding box.

CREATE TABLE SpatialTable(id int primary key, geometry_col geometry);


CREATE SPATIAL INDEX SIndx_SpatialTable_geometry_col1
ON SpatialTable(geometry_col)
WITH ( BOUNDING_BOX = ( 0, 0, 500, 200 ) );

B. Creating a spatial index on a geometry column


The following example creates a second spatial index, SIndx_SpatialTable_geometry_col2 , on the geometry_col in
the SpatialTable table. The example specifies GEOMETRY_GRID as the tessellation scheme. The example also
specifies the bounding box, different densities on different grid levels, and 64 cells per object. The example also
sets the index padding to ON .

CREATE SPATIAL INDEX SIndx_SpatialTable_geometry_col2


ON SpatialTable(geometry_col)
USING GEOMETRY_GRID
WITH (
BOUNDING_BOX = ( xmin=0, ymin=0, xmax=500, ymax=200 ),
GRIDS = (LOW, LOW, MEDIUM, HIGH),
CELLS_PER_OBJECT = 64,
PAD_INDEX = ON );

C. Creating a spatial index on a geometry column


The following example creates a third spatial index, SIndx_SpatialTable_geometry_col3 , on the geometry_col in
the SpatialTable table. The example uses the default tessellation scheme. The example specifies the bounding
box and uses different cell densities on the third and fourth levels, while using the default number of cells per
object.

CREATE SPATIAL INDEX SIndx_SpatialTable_geometry_col3


ON SpatialTable(geometry_col)
WITH (
BOUNDING_BOX = ( 0, 0, 500, 200 ),
GRIDS = ( LEVEL_4 = HIGH, LEVEL_3 = MEDIUM ) );

D. Changing an option that is specific to spatial indexes


The following example rebuilds the spatial index created in the preceding example,
SIndx_SpatialTable_geometry_col3 , by specifying a new LEVEL_3 density with DROP_EXISTING = ON.

CREATE SPATIAL INDEX SIndx_SpatialTable_geometry_col3


ON SpatialTable(geometry_col)
WITH ( BOUNDING_BOX = ( 0, 0, 500, 200 ),
GRIDS = ( LEVEL_3 = LOW ),
DROP_EXISTING = ON );

E. Creating a spatial index on a geography column


The following example creates a table named SpatialTable2 that contains a geography type column,
geography_col . The example then creates a spatial index, SIndx_SpatialTable_geography_col1 , on the
geography_col . The example uses the default parameter values of the GEOGRAPHY_AUTO_GRID tessellation
scheme.

CREATE TABLE SpatialTable2(id int primary key, geography_col GEOGRAPHY);


CREATE SPATIAL INDEX SIndx_SpatialTable_geography_col1
ON SpatialTable2(geography_col);

NOTE
For geography grid indexes, a bounding box cannot be specified.

F. Creating a spatial index on a geography column


The following example creates a second spatial index, SIndx_SpatialTable_geography_col2 , on the geography_col
in the SpatialTable2 table. The example specifies GEOGRAPHY_GRID as the tessellation scheme. The example also
specifies different grid densities on different levels and 64 cells per object. The example also sets the index
padding to ON .

CREATE SPATIAL INDEX SIndx_SpatialTable_geography_col2


ON SpatialTable2(geography_col)
USING GEOGRAPHY_GRID
WITH (
GRIDS = (MEDIUM, LOW, MEDIUM, HIGH ),
CELLS_PER_OBJECT = 64,
PAD_INDEX = ON );

G. Creating a spatial index on a geography column


The following example creates a third spatial index, SIndx_SpatialTable_geography_col3 , on the geography_col in
the SpatialTable2 table. The example uses the default tessellation scheme, GEOGRAPHY_GRID, and the default
CELLS_PER_OBJECT value (16).
CREATE SPATIAL INDEX SIndx_SpatialTable_geography_col3
ON SpatialTable2(geography_col)
WITH ( GRIDS = ( LEVEL_3 = HIGH, LEVEL_2 = HIGH ) );

See Also
ALTER INDEX (Transact-SQL)
CREATE INDEX (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
CREATE STATISTICS (Transact-SQL)
CREATE TABLE (Transact-SQL)
Data Types (Transact-SQL)
DBCC SHOW_STATISTICS (Transact-SQL)
DROP INDEX (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.index_columns (Transact-SQL)
sys.indexes (Transact-SQL)
sys.spatial_index_tessellations (Transact-SQL)
sys.spatial_indexes (Transact-SQL)
Spatial Indexes Overview
CREATE STATISTICS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates query optimization statistics on one or more columns of a table, an indexed view, or an external table. For
most queries, the query optimizer already generates the necessary statistics for a high-quality query plan; in a few
cases, you need to create additional statistics with CREATE STATISTICS or modify the query design to improve
query performance.
To learn more, see Statistics.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

-- Create statistics on an external table


CREATE STATISTICS statistics_name
ON { table_or_indexed_view_name } ( column [ ,...n ] )
[ WITH FULLSCAN ] ;

-- Create statistics on a regular table or indexed view


CREATE STATISTICS statistics_name
ON { table_or_indexed_view_name } ( column [ ,...n ] )
[ WHERE <filter_predicate> ]
[ WITH
    [ [ FULLSCAN
        [ [ , ] PERSIST_SAMPLE_PERCENT = { ON | OFF } ]
      | SAMPLE number { PERCENT | ROWS }
        [ [ , ] PERSIST_SAMPLE_PERCENT = { ON | OFF } ]
      | <update_stats_stream_option> [ ,...n ]
    ]
    [ [ , ] NORECOMPUTE ]
    [ [ , ] INCREMENTAL = { ON | OFF } ]
    [ [ , ] MAXDOP = max_degree_of_parallelism ]
] ;

<filter_predicate> ::=
<conjunct> [AND <conjunct>]

<conjunct> ::=
<disjunct> | <comparison>

<disjunct> ::=
column_name IN (constant ,…)

<comparison> ::=
column_name <comparison_op> constant

<comparison_op> ::=
IS | IS NOT | = | <> | != | > | >= | !> | < | <= | !<

<update_stats_stream_option> ::=
[ STATS_STREAM = stats_stream ]
[ ROWCOUNT = numeric_constant ]
[ PAGECOUNT = numeric_contant ]
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

CREATE STATISTICS statistics_name


ON [ database_name . [schema_name ] . | schema_name. ] table_name
( column_name [ ,...n ] )
[ WHERE <filter_predicate> ]
[ WITH {
FULLSCAN
| SAMPLE number PERCENT
}
]
[;]

<filter_predicate> ::=
<conjunct> [AND <conjunct>]

<conjunct> ::=
<disjunct> | <comparison>

<disjunct> ::=
column_name IN (constant ,…)

<comparison> ::=
column_name <comparison_op> constant

<comparison_op> ::=
IS | IS NOT | = | <> | != | > | >= | !> | < | <= | !<

Arguments
statistics_name
Is the name of the statistics to create.
table_or_indexed_view_name
Is the name of the table, indexed view, or external table on which to create the statistics. To create statistics on
another database, specify a qualified table name.
column [ ,…n]
One or more columns to be included in the statistics. The columns should be in priority order from left to right.
Only the first column is used for creating the histogram. All columns are used for cross-column correlation
statistics called densities.
You can specify any column that can be specified as an index key column with the following exceptions:
Xml, full-text, and FILESTREAM columns cannot be specified.
Computed columns can be specified only if the ARITHABORT and QUOTED_IDENTIFIER database
settings are ON.
CLR user-defined type columns can be specified if the type supports binary ordering. Computed columns
defined as method invocations of a user-defined type column can be specified if the methods are marked
deterministic.
WHERE <filter_predicate> Specifies an expression for selecting a subset of rows to include when creating
the statistics object. Statistics that are created with a filter predicate are called filtered statistics. The filter
predicate uses simple comparison logic and cannot reference a computed column, a UDT column, a spatial
data type column, or a hierarchyID data type column. Comparisons using NULL literals are not allowed
with the comparison operators. Use the IS NULL and IS NOT NULL operators instead.
Here are some examples of filter predicates for the Production.BillOfMaterials table:
WHERE StartDate > '20000101' AND EndDate <= '20000630'

WHERE ComponentID IN (533, 324, 753)

WHERE StartDate IN ('20000404', '20000905') AND EndDate IS NOT NULL

For more information about filter predicates, see Create Filtered Indexes.
FULLSCAN
Compute statistics by scanning all rows. FULLSCAN and SAMPLE 100 PERCENT have the same results.
FULLSCAN cannot be used with the SAMPLE option.
When omitted, SQL Server uses sampling to create the statistics, and determines the sample size that is
required to create a high-quality query plan.
SAMPLE number { PERCENT | ROWS }
Specifies the approximate percentage or number of rows in the table or indexed view for the query
optimizer to use when it creates statistics. For PERCENT, number can be from 0 through 100 and for
ROWS, number can be from 0 to the total number of rows. The actual percentage or number of rows the
query optimizer samples might not match the percentage or number specified. For example, the query
optimizer scans all rows on a data page.
SAMPLE is useful for special cases in which the query plan, based on default sampling, is not optimal. In
most situations, it is not necessary to specify SAMPLE because the query optimizer already uses sampling
and determines the statistically significant sample size by default, as required to create high-quality query
plans.
SAMPLE cannot be used with the FULLSCAN option. When neither SAMPLE nor FULLSCAN is specified,
the query optimizer uses sampled data and computes the sample size by default.
We recommend against specifying 0 PERCENT or 0 ROWS. When 0 PERCENT or ROWS is specified, the
statistics object is created but does not contain statistics data.
PERSIST_SAMPLE_PERCENT = { ON | OFF }
When ON, the statistics will retain the creation sampling percentage for subsequent updates that do not
explicitly specify a sampling percentage. When OFF, statistics sampling percentage will get reset to default
sampling in subsequent updates that do not explicitly specify a sampling percentage. The default is OFF.
Applies to: SQL Server 2016 (13.x) (starting with SQL Server 2016 (13.x) SP1 CU4) through SQL Server
2017 (starting with SQL Server 2017 (14.x) CU1).
STATS_STREAM =stats_stream
Identified for informational purposes only. Not supported. Future compatibility is not guaranteed.
NORECOMPUTE
Disable the automatic statistics update option, AUTO_STATISTICS_UPDATE, for statistics_name. If this
option is specified, the query optimizer will complete any in-progress statistics updates for statistics_name
and disable future updates.
To re-enable statistics updates, remove the statistics with DROP STATISTICS and then run CREATE
STATISTICS without the NORECOMPUTE option.

WARNING
Using this option can produce suboptimal query plans. We recommend using this option sparingly, and then only by a
qualified system administrator.
For more information about the AUTO_STATISTICS_UPDATE option, see ALTER DATABASE SET Options
(Transact-SQL). For more information about disabling and re-enabling statistics updates, see Statistics.
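As a sketch of that sequence, assuming a hypothetical statistics object named MyStats on the Person.Person
table:

DROP STATISTICS Person.Person.MyStats;
CREATE STATISTICS MyStats
ON Person.Person (BusinessEntityID, EmailPromotion);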
INCREMENTAL = { ON | OFF }
When ON, the statistics created are per partition statistics. When OFF, statistics are combined for all partitions. The
default is OFF.
If per partition statistics are not supported, an error is generated. Incremental statistics are not supported for the
following statistics types:
Statistics created with indexes that are not partition-aligned with the base table.
Statistics created on Always On readable secondary databases.
Statistics created on read-only databases.
Statistics created on filtered indexes.
Statistics created on views.
Statistics created on internal tables.
Statistics created with spatial indexes or XML indexes.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
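For illustration, a sketch that creates per partition statistics; the table dbo.PartitionedOrders and its OrderDate
column are hypothetical, and the table must be partitioned for the option to have an effect.

CREATE STATISTICS OrderDateStats
ON dbo.PartitionedOrders (OrderDate)
WITH INCREMENTAL = ON;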
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server (Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x) CU3).
Overrides the max degree of parallelism configuration option for the duration of the statistic operation. For
more information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to
limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism can be:
1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel statistic operation to the specified number or
fewer based on the current system workload.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
<update_stats_stream_option> Identified for informational purposes only. Not supported. Future compatibility is
not guaranteed.

Permissions
Requires one of these permissions:
ALTER TABLE
User is the table owner
Membership in the db_ddladmin fixed database role

General Remarks
SQL Server can use tempdb to sort the sampled rows before building statistics.
Statistics for external tables
When creating external table statistics, SQL Server imports the external table into a temporary SQL Server table,
and then creates the statistics. For sampled statistics, only the sampled rows are imported. If you have a large
external table, it will be much faster to use the default sampling instead of the full scan option.
Statistics with a filtered condition
Filtered statistics can improve query performance for queries that select from well-defined subsets of data.
Filtered statistics use a filter predicate in the WHERE clause to select the subset of data that is included in the
statistics.
When to Use CREATE STATISTICS
For more information about when to use CREATE STATISTICS, see Statistics.
Referencing Dependencies for Filtered Statistics
The sys.sql_expression_dependencies catalog view tracks each column in the filtered statistics predicate as a
referencing dependency. Consider the operations that you perform on table columns before creating filtered
statistics because you cannot drop, rename, or alter the definition of a table column that is defined in a filtered
statistics predicate.
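As a minimal sketch, the following query lists the columns that objects defined on the Person.Person table
reference through expressions such as filtered statistics predicates; the exact rows returned depend on the
objects that exist on the table.

SELECT referenced_entity_name, referenced_minor_name
FROM sys.sql_expression_dependencies
WHERE referencing_id = OBJECT_ID(N'Person.Person');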

Limitations and Restrictions


Updating statistics is not supported on external tables. To update statistics on an external table, drop and re-
create the statistics.
You can list up to 64 columns per statistics object.
The MAXDOP option is not compatible with STATS_STREAM, ROWCOUNT and PAGECOUNT options.

Examples
Examples use the AdventureWorks database.
A. Using CREATE STATISTICS with SAMPLE number PERCENT
The following example creates the ContactMail1 statistics, using a random sample of 5 percent of the
BusinessEntityID and EmailPromotion columns of the Person.Person table of the AdventureWorks2012 database.

CREATE STATISTICS ContactMail1


ON Person.Person (BusinessEntityID, EmailPromotion)
WITH SAMPLE 5 PERCENT;

B. Using CREATE STATISTICS with FULLSCAN and NORECOMPUTE


The following example creates the NamePurchase statistics for all rows in the BusinessEntityID and
EmailPromotion columns of the Person.Person table and disables automatic recomputing of statistics.

CREATE STATISTICS NamePurchase


ON AdventureWorks2012.Person.Person (BusinessEntityID, EmailPromotion)
WITH FULLSCAN, NORECOMPUTE;

C. Using CREATE STATISTICS to create filtered statistics


The following example creates the filtered statistics ContactPromotion1 . The Database Engine samples 50 percent
of the data and then selects the rows with EmailPromotion equal to 2.

CREATE STATISTICS ContactPromotion1


ON Person.Person (BusinessEntityID, LastName, EmailPromotion)
WHERE EmailPromotion = 2
WITH SAMPLE 50 PERCENT;
GO
D. Create statistics on an external table
The only decision you need to make when you create statistics on an external table, besides providing the list of
columns, is whether to create the statistics by sampling the rows or by scanning all of the rows.
Since SQL Server imports data from the external table into a temporary table to create statistics, the full scan
option will take much longer. For a large table, the default sampling method is usually sufficient.

--Create statistics on an external table and use default sampling.


CREATE STATISTICS CustomerStats1 ON DimCustomer (CustomerKey, EmailAddress);

--Create statistics on an external table and scan all the rows


CREATE STATISTICS CustomerStats1 ON DimCustomer (CustomerKey, EmailAddress) WITH FULLSCAN;

E. Using CREATE STATISTICS with FULLSCAN and PERSIST_SAMPLE_PERCENT


The following example creates the NamePurchase statistics for all rows in the BusinessEntityID and
EmailPromotion columns of the Person.Person table and sets a 100 percent sampling percentage for all
subsequent updates that do not explicitly specify a sampling percentage.

CREATE STATISTICS NamePurchase


ON AdventureWorks2012.Person.Person (BusinessEntityID, EmailPromotion)
WITH FULLSCAN, PERSIST_SAMPLE_PERCENT = ON;

Examples using the AdventureWorksDW database.


F. Create statistics on two columns
The following example creates the CustomerStats1 statistics, based on the CustomerKey and EmailAddress
columns of the DimCustomer table. The statistics are created based on a statistically significant sampling of the
rows in the Customer table.

CREATE STATISTICS CustomerStats1 ON DimCustomer (CustomerKey, EmailAddress);

G. Create statistics by using a full scan


The following example creates the CustomerStatsFullScan statistics, based on scanning all of the rows in the
DimCustomer table.

CREATE STATISTICS CustomerStatsFullScan


ON DimCustomer (CustomerKey, EmailAddress) WITH FULLSCAN;

H. Create statistics by specifying the sample percentage


The following example creates the CustomerStatsSampleScan statistics, based on scanning 50 percent of the rows
in the DimCustomer table.

CREATE STATISTICS CustomerStatsSampleScan


ON DimCustomer (CustomerKey, EmailAddress) WITH SAMPLE 50 PERCENT;

See Also
Statistics
UPDATE STATISTICS (Transact-SQL)
sp_updatestats (Transact-SQL)
DBCC SHOW_STATISTICS (Transact-SQL)
DROP STATISTICS (Transact-SQL)
sys.stats (Transact-SQL)
sys.stats_columns (Transact-SQL)
CREATE SYMMETRIC KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Generates a symmetric key and specifies its properties in SQL Server.
This feature is incompatible with database export using Data Tier Application Framework (DACFx). You must
drop all symmetric keys before exporting.
Transact-SQL Syntax Conventions

Syntax
CREATE SYMMETRIC KEY key_name
[ AUTHORIZATION owner_name ]
[ FROM PROVIDER provider_name ]
WITH
[
<key_options> [ , ... n ]
| ENCRYPTION BY <encrypting_mechanism> [ , ... n ]
]

<key_options> ::=
KEY_SOURCE = 'pass_phrase'
| ALGORITHM = <algorithm>
| IDENTITY_VALUE = 'identity_phrase'
| PROVIDER_KEY_NAME = 'key_name_in_provider'
| CREATION_DISPOSITION = {CREATE_NEW | OPEN_EXISTING }

<algorithm> ::=
DES | TRIPLE_DES | TRIPLE_DES_3KEY | RC2 | RC4 | RC4_128
| DESX | AES_128 | AES_192 | AES_256

<encrypting_mechanism> ::=
CERTIFICATE certificate_name
| PASSWORD = 'password'
| SYMMETRIC KEY symmetric_key_name
| ASYMMETRIC KEY asym_key_name

Arguments
Key_name
Specifies the unique name by which the symmetric key is known in the database. The names of temporary keys
must begin with one number sign (#); for example, #temporaryKey900007. You cannot create a symmetric
key that has a name that starts with more than one #. You cannot create a temporary symmetric key using an
EKM provider.
AUTHORIZATION owner_name
Specifies the name of the database user or application role that will own this key.
FROM PROVIDER provider_name
Specifies an Extensible Key Management (EKM) provider and name. The key is not exported from the EKM
device. The provider must be defined first using the CREATE PROVIDER statement. For more information about
creating external key providers, see Extensible Key Management (EKM).
NOTE
This option is not available in a contained database.

KEY_SOURCE ='pass_phrase'
Specifies a pass phrase from which to derive the key.
IDENTITY_VALUE ='identity_phrase'
Specifies an identity phrase from which to generate a GUID for tagging data that is encrypted with a temporary
key.
PROVIDER_KEY_NAME='key_name_in_provider'
Specifies the name referenced in the Extensible Key Management provider.

NOTE
This option is not available in a contained database.

CREATION_DISPOSITION = CREATE_NEW
Creates a new key on the Extensible Key Management device. If a key already exists on the device, the statement
fails with error.
CREATION_DISPOSITION = OPEN_EXISTING
Maps a SQL Server symmetric key to an existing Extensible Key Management key. If CREATION_DISPOSITION
= OPEN_EXISTING is not provided, this defaults to CREATE_NEW.
certificate_name
Specifies the name of the certificate that will be used to encrypt the symmetric key. The certificate must already
exist in the database.
' password '
Specifies a password from which to derive a TRIPLE_DES key with which to secure the symmetric key. password
must meet the Windows password policy requirements of the computer that is running the instance of SQL
Server. Always use strong passwords.
symmetric_key_name
Specifies a symmetric key, used to encrypt the key that is being created. The specified key must already exist in
the database, and the key must be open.
asym_key_name
Specifies an asymmetric key, used to encrypt the key that is being created. This asymmetric key must already
exist in the database.
<algorithm>
Specify the encrypting algorithm.

WARNING
Beginning with SQL Server 2016 (13.x), all algorithms other than AES_128, AES_192, and AES_256 are deprecated. To use
older algorithms (not recommended), you must set the database to database compatibility level 120 or lower.

Remarks
When a symmetric key is created, the symmetric key must be encrypted by using at least one of the following:
certificate, password, symmetric key, asymmetric key, or PROVIDER. The key can have more than one encryption
of each type. In other words, a single symmetric key can be encrypted by using multiple certificates, passwords,
symmetric keys, and asymmetric keys at the same time.
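For instance, the following sketch protects one key with both a certificate and a password; the key name is
hypothetical, and the Shipping04 certificate (from example A later in this topic) must already exist.

CREATE SYMMETRIC KEY SketchKey01
WITH ALGORITHM = AES_256
ENCRYPTION BY CERTIFICATE Shipping04,
ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>';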
Caution

When a symmetric key is encrypted with a password instead of a certificate (or another key), the TRIPLE DES
encryption algorithm is used to encrypt the password. Because of this, keys that are created with a strong
encryption algorithm, such as AES, are themselves secured by a weaker algorithm.
The optional password can be used to encrypt the symmetric key before distributing the key to multiple users.
Temporary keys are owned by the user that creates them. Temporary keys are only valid for the current session.
IDENTITY_VALUE generates a GUID with which to tag data that is encrypted with the new symmetric key. This
tagging can be used to match keys to encrypted data. The GUID generated by a specific phrase is always the
same. After a phrase has been used to generate a GUID, the phrase cannot be reused as long as there is at least
one session that is actively using the phrase. IDENTITY_VALUE is an optional clause; however, we recommend
using it when you are storing data encrypted with a temporary key.
There is no default encryption algorithm.

IMPORTANT
We do not recommend using the RC4 and RC4_128 stream ciphers to protect sensitive data. SQL Server does not further
encode the encryption performed with such keys.

Information about symmetric keys is visible in the sys.symmetric_keys catalog view.


Symmetric keys cannot be encrypted by symmetric keys created from the encryption provider.
Clarification regarding DES algorithms:
DESX was incorrectly named. Symmetric keys created with ALGORITHM = DESX actually use the TRIPLE
DES cipher with a 192-bit key. The DESX algorithm is not provided. This feature will be removed in a future
version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify
applications that currently use this feature.
Symmetric keys created with ALGORITHM = TRIPLE_DES_3KEY use TRIPLE DES with a 192-bit key.
Symmetric keys created with ALGORITHM = TRIPLE_DES use TRIPLE DES with a 128-bit key.
Deprecation of the RC4 algorithm:
Repeated use of the same RC4 or RC4_128 KEY_GUID on different blocks of data results in the same
RC4 key because SQL Server does not provide a salt automatically. Using the same RC4 key repeatedly is
a well-known error that results in very weak encryption. Therefore we have deprecated the RC4 and
RC4_128 keywords. This feature will be removed in a future version of Microsoft SQL Server. Do not use
this feature in new development work, and modify applications that currently use this feature as soon as
possible.

WARNING
The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or
RC4_128 when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of
the AES algorithms instead. In SQL Server 2017 material encrypted using RC4 or RC4_128 can be decrypted in any
compatibility level.

Permissions
Requires ALTER ANY SYMMETRIC KEY permission on the database. If AUTHORIZATION is specified, requires
IMPERSONATE permission on the database user or ALTER permission on the application role. If encryption is
by certificate or asymmetric key, requires VIEW DEFINITION permission on the certificate or asymmetric key.
Only Windows logins, SQL Server logins, and application roles can own symmetric keys. Groups and roles
cannot own symmetric keys.

Examples
A. Creating a symmetric key
The following example creates a symmetric key called JanainaKey09 by using the AES 256 algorithm, and then
encrypts the new key with certificate Shipping04 .

CREATE SYMMETRIC KEY JanainaKey09


WITH ALGORITHM = AES_256
ENCRYPTION BY CERTIFICATE Shipping04;
GO

B. Creating a temporary symmetric key


The following example creates a temporary symmetric key called #MarketingXXV from the pass phrase:
The square of the hypotenuse is equal to the sum of the squares of the sides . The key is provisioned with a
GUID that is generated from the string Pythagoras and encrypted with certificate Marketing25 .

CREATE SYMMETRIC KEY #MarketingXXV


WITH ALGORITHM = AES_128,
KEY_SOURCE
= 'The square of the hypotenuse is equal to the sum of the squares of the sides',
IDENTITY_VALUE = 'Pythagoras'
ENCRYPTION BY CERTIFICATE Marketing25;
GO

C. Creating a symmetric key using an Extensible Key Management (EKM ) device


The following example creates a symmetric key called MySymKey by using a provider called MyEKMProvider and a
key name of KeyForSensitiveData . It assigns authorization to User1 and assumes that the system administrator
has already registered the provider called MyEKMProvider in SQL Server.

CREATE SYMMETRIC KEY MySymKey


AUTHORIZATION User1
FROM PROVIDER MyEKMProvider
WITH
PROVIDER_KEY_NAME='KeyForSensitiveData',
CREATION_DISPOSITION=OPEN_EXISTING;
GO

See Also
Choose an Encryption Algorithm
ALTER SYMMETRIC KEY (Transact-SQL)
DROP SYMMETRIC KEY (Transact-SQL)
Encryption Hierarchy
sys.symmetric_keys (Transact-SQL)
Extensible Key Management (EKM)
Extensible Key Management Using Azure Key Vault (SQL Server)
CREATE SYNONYM (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new synonym.
Transact-SQL Syntax Conventions

Syntax
-- SQL Server Syntax

CREATE SYNONYM [ schema_name_1. ] synonym_name FOR <object>

<object> :: =
{
[ server_name.[ database_name ] . [ schema_name_2 ]. object_name
| database_name . [ schema_name_2 ].| schema_name_2. ] object_name
}

-- Azure SQL Database Syntax

CREATE SYNONYM [ schema_name_1. ] synonym_name FOR < object >

< object > :: =


{
[database_name. [ schema_name_2 ].| schema_name_2. ] object_name
}

Arguments
schema_name_1
Specifies the schema in which the synonym is created. If schema is not specified, SQL Server uses the default
schema of the current user.
synonym_name
Is the name of the new synonym.
server_name
Applies to: SQL Server 2008 through SQL Server 2017.
Is the name of the server on which base object is located.
database_name
Is the name of the database in which the base object is located. If database_name is not specified, the name of the
current database is used.
schema_name_2
Is the name of the schema of the base object. If schema_name is not specified the default schema of the current
user is used.
object_name
Is the name of the base object that the synonym references.
Azure SQL Database supports the three-part name format database_name.[schema_name].object_name
when the database_name is the current database or the database_name is tempdb and the object_name starts with
#.

Remarks
The base object need not exist at synonym create time. SQL Server checks for the existence of the base object at
run time.
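Because binding is deferred, the following sketch succeeds even if the base table does not yet exist; only the
SELECT fails if dbo.OrdersArchive is still missing at run time. Both names are hypothetical.

CREATE SYNONYM dbo.ArchiveData FOR dbo.OrdersArchive;
GO
-- The missing base object is not detected until the synonym is used.
SELECT * FROM dbo.ArchiveData;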
Synonyms can be created for the following types of objects:

Assembly (CLR) Stored Procedure      Assembly (CLR) Table-valued Function
Assembly (CLR) Scalar Function       Assembly (CLR) Aggregate Function
Replication-filter-procedure         Extended Stored Procedure
SQL Scalar Function                  SQL Table-valued Function
SQL Inline-table-valued Function     SQL Stored Procedure
View                                 Table1 (User-defined)

1 Includes local and global temporary tables

Four-part names for function base objects are not supported.


Synonyms can be created, dropped, and referenced in dynamic SQL.
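For example, a minimal sketch that references the hypothetical synonym from the previous sketch inside
dynamic SQL:

EXEC sp_executesql N'SELECT COUNT(*) FROM dbo.ArchiveData;';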

Permissions
To create a synonym in a given schema, a user must have CREATE SYNONYM permission and either own the
schema or have ALTER SCHEMA permission.
The CREATE SYNONYM permission is a grantable permission.

NOTE
You do not need permission on the base object to successfully compile the CREATE SYNONYM statement, because all
permission checking on the base object is deferred until run time.

Examples
A. Creating a synonym for a local object
The following example first creates a synonym for the base object, Product in the AdventureWorks2012 database,
and then queries the synonym.
-- Create a synonym for the Product table in AdventureWorks2012.
CREATE SYNONYM MyProduct
FOR AdventureWorks2012.Production.Product;
GO

-- Query the Product table by using the synonym.


SELECT ProductID, Name
FROM MyProduct
WHERE ProductID < 5;
GO

Here is the result set.

ProductID   Name
----------- --------------------------
1           Adjustable Race
2           Bearing Ball
3           BB Ball Bearing
4           Headset Ball Bearings

(4 row(s) affected)

B. Creating a synonym for a remote object


In the following example, the base object, Contact , resides on a remote server named Server_Remote .
Applies to: SQL Server 2008 through SQL Server 2017.

EXEC sp_addlinkedserver Server_Remote;


GO
USE tempdb;
GO
CREATE SYNONYM MyEmployee FOR Server_Remote.AdventureWorks2012.HumanResources.Employee;
GO

C. Creating a synonym for a user-defined function


The following example creates a function named dbo.OrderDozen that increases order amounts to an even dozen
units. The example then creates the synonym dbo.CorrectOrder for the dbo.OrderDozen function.
-- Creating the dbo.OrderDozen function
CREATE FUNCTION dbo.OrderDozen (@OrderAmt int)
RETURNS int
WITH EXECUTE AS CALLER
AS
BEGIN
IF @OrderAmt % 12 <> 0
BEGIN
SET @OrderAmt += 12 - (@OrderAmt % 12)
END
RETURN(@OrderAmt);
END;
GO

-- Using the dbo.OrderDozen function


DECLARE @Amt int;
SET @Amt = 15;
SELECT @Amt AS OriginalOrder, dbo.OrderDozen(@Amt) AS ModifiedOrder;

-- Create a synonym dbo.CorrectOrder for the dbo.OrderDozen function.


CREATE SYNONYM dbo.CorrectOrder
FOR dbo.OrderDozen;
GO

-- Using the dbo.CorrectOrder synonym.


DECLARE @Amt int;
SET @Amt = 15;
SELECT @Amt AS OriginalOrder, dbo.CorrectOrder(@Amt) AS ModifiedOrder;

See Also
DROP SYNONYM (Transact-SQL)
GRANT (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE TABLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new table in SQL Server and Azure SQL Database.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL
Database Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.

NOTE
For SQL Data Warehouse syntax, see CREATE TABLE (Azure SQL Data Warehouse).

Transact-SQL Syntax Conventions

Simple Syntax
--Simple CREATE TABLE Syntax (common if not using options)
CREATE TABLE
[ database_name . [ schema_name ] . | schema_name . ] table_name
( { <column_definition> } [ ,...n ] )
[ ; ]

Full Syntax
--Disk-Based CREATE TABLE Syntax
CREATE TABLE
[ database_name . [ schema_name ] . | schema_name . ] table_name
[ AS FileTable ]
( { <column_definition>
| <computed_column_definition>
| <column_set_definition>
| [ <table_constraint> ]
| [ <table_index> ] }
[ ,...n ]
[ PERIOD FOR SYSTEM_TIME ( system_start_time_column_name
, system_end_time_column_name ) ]
)
[ ON { partition_scheme_name ( partition_column_name )
| filegroup
| "default" } ]
[ TEXTIMAGE_ON { filegroup | "default" } ]
[ FILESTREAM_ON { partition_scheme_name
| filegroup
| "default" } ]
[ WITH ( <table_option> [ ,...n ] ) ]
[ ; ]

<column_definition> ::=
column_name <data_type>
[ FILESTREAM ]
[ COLLATE collation_name ]
[ SPARSE ]
[ MASKED WITH ( FUNCTION = ' mask_function ') ]
[ CONSTRAINT constraint_name [ DEFAULT constant_expression ] ]
[ IDENTITY [ ( seed,increment ) ]
[ NOT FOR REPLICATION ]
[ GENERATED ALWAYS AS ROW { START | END } [ HIDDEN ] ]
[ NULL | NOT NULL ]
[ ROWGUIDCOL ]
[ ENCRYPTED WITH
( COLUMN_ENCRYPTION_KEY = key_name ,
ENCRYPTION_TYPE = { DETERMINISTIC | RANDOMIZED } ,
ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
) ]
[ <column_constraint> [ ...n ] ]
[ <column_index> ]

<data type> ::=


[ type_schema_name . ] type_name
[ ( precision [ , scale ] | max |
[ { CONTENT | DOCUMENT } ] xml_schema_collection ) ]

<column_constraint> ::=
[ CONSTRAINT constraint_name ]
{ { PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH FILLFACTOR = fillfactor
| WITH ( < index_option > [ , ...n ] )
]
[ ON { partition_scheme_name ( partition_column_name )
| filegroup | "default" } ]

| [ FOREIGN KEY ]
REFERENCES [ schema_name . ] referenced_table_name [ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ NOT FOR REPLICATION ]

| CHECK [ NOT FOR REPLICATION ] ( logical_expression )


}

<column_index> ::=
INDEX index_name [ CLUSTERED | NONCLUSTERED ]
[ WITH ( <index_option> [ ,... n ] ) ]
[ ON { partition_scheme_name (column_name )
| filegroup_name
| default
}
]
[ FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | "NULL" } ]

<computed_column_definition> ::=
column_name AS computed_column_expression
[ PERSISTED [ NOT NULL ] ]
[
[ CONSTRAINT constraint_name ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH FILLFACTOR = fillfactor
| WITH ( <index_option> [ , ...n ] )
]
[ ON { partition_scheme_name ( partition_column_name )
| filegroup | "default" } ]

| [ FOREIGN KEY ]
REFERENCES referenced_table_name [ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE } ]
[ ON UPDATE { NO ACTION } ]
[ NOT FOR REPLICATION ]

| CHECK [ NOT FOR REPLICATION ] ( logical_expression )


]

<column_set_definition> ::=
column_set_name XML COLUMN_SET FOR ALL_SPARSE_COLUMNS

< table_constraint > ::=


[ CONSTRAINT constraint_name ]
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
(column [ ASC | DESC ] [ ,...n ] )
[
WITH FILLFACTOR = fillfactor
|WITH ( <index_option> [ , ...n ] )
]
[ ON { partition_scheme_name (partition_column_name)
| filegroup | "default" } ]
| FOREIGN KEY
( column [ ,...n ] )
REFERENCES referenced_table_name [ ( ref_column [ ,...n ] ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ NOT FOR REPLICATION ]
| CHECK [ NOT FOR REPLICATION ] ( logical_expression )

< table_index > ::=


{
{
INDEX index_name [ CLUSTERED | NONCLUSTERED ]
(column_name [ ASC | DESC ] [ ,... n ] )
| INDEX index_name CLUSTERED COLUMNSTORE
| INDEX index_name [ NONCLUSTERED ] COLUMNSTORE (column_name [ ,... n ] )
}
[ WITH ( <index_option> [ ,... n ] ) ]
[ ON { partition_scheme_name (column_name )
| filegroup_name
| default
}
]
[ FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | "NULL" } ]

<table_option> ::=
{
[DATA_COMPRESSION = { NONE | ROW | PAGE }
[ ON PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ]]
[ FILETABLE_DIRECTORY = <directory_name> ]
[ FILETABLE_COLLATE_FILENAME = { <collation_name> | database_default } ]
[ FILETABLE_PRIMARY_KEY_CONSTRAINT_NAME = <constraint_name> ]
[ FILETABLE_STREAMID_UNIQUE_CONSTRAINT_NAME = <constraint_name> ]
[ FILETABLE_FULLPATH_UNIQUE_CONSTRAINT_NAME = <constraint_name> ]
[ SYSTEM_VERSIONING = ON [ ( HISTORY_TABLE = schema_name . history_table_name
[, DATA_CONSISTENCY_CHECK = { ON | OFF } ] ) ] ]
[ REMOTE_DATA_ARCHIVE =
{
ON [ ( <table_stretch_options> [,...n] ) ]
| OFF ( MIGRATION_STATE = PAUSED )
}
]
}

<table_stretch_options> ::=
{
[ FILTER_PREDICATE = { null | table_predicate_function } , ]
MIGRATION_STATE = { OUTBOUND | INBOUND | PAUSED }
}

<index_option> ::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF}
| ALLOW_PAGE_LOCKS ={ ON | OFF}
| COMPRESSION_DELAY= {0 | delay [Minutes]}
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ]
}
<range> ::=
<partition_number_expression> TO <partition_number_expression>

--Memory optimized CREATE TABLE Syntax


CREATE TABLE
[database_name . [schema_name ] . | schema_name . ] table_name
( { <column_definition>
| [ <table_constraint> ] [ ,... n ]
| [ <table_index> ]
[ ,... n ] }
[ PERIOD FOR SYSTEM_TIME ( system_start_time_column_name
, system_end_time_column_name ) ]
)
[ WITH ( <table_option> [ ,... n ] ) ]
[ ; ]

<column_definition> ::=
column_name <data_type>
[ COLLATE collation_name ]
[ GENERATED ALWAYS AS ROW { START | END } [ HIDDEN ] ]
[ NULL | NOT NULL ]
[
[ CONSTRAINT constraint_name ] DEFAULT memory_optimized_constant_expression ]
| [ IDENTITY [ ( 1, 1 ) ]
]
[ <column_constraint> ]
[ <column_index> ]

<data type> ::=


[type_schema_name . ] type_name [ (precision [ , scale ]) ]

<column_constraint> ::=
[ CONSTRAINT constraint_name ]
{
{ PRIMARY KEY | UNIQUE }
{ NONCLUSTERED
| NONCLUSTERED HASH WITH (BUCKET_COUNT = bucket_count)
}
| [ FOREIGN KEY ]
REFERENCES [ schema_name . ] referenced_table_name [ ( ref_column ) ]
| CHECK ( logical_expression )
}

< table_constraint > ::=


[ CONSTRAINT constraint_name ]
{
{ PRIMARY KEY | UNIQUE }
{
NONCLUSTERED (column [ ASC | DESC ] [ ,... n ])
| NONCLUSTERED HASH (column [ ,... n ] ) WITH ( BUCKET_COUNT = bucket_count )
}
| FOREIGN KEY
( column [ ,...n ] )
REFERENCES referenced_table_name [ ( ref_column [ ,...n ] ) ]
| CHECK ( logical_expression )
}

<column_index> ::=
INDEX index_name
{ [ NONCLUSTERED ] | [ NONCLUSTERED ] HASH WITH (BUCKET_COUNT = bucket_count) }

<table_index> ::=
INDEX index_name
{ [ NONCLUSTERED ] HASH (column [ ,... n ] ) WITH (BUCKET_COUNT = bucket_count)
| [ NONCLUSTERED ] (column [ ASC | DESC ] [ ,... n ] )
[ ON filegroup_name | default ]
| CLUSTERED COLUMNSTORE [WITH ( COMPRESSION_DELAY = {0 | delay [Minutes]})]
[ ON filegroup_name | default ]

<table_option> ::=
{
MEMORY_OPTIMIZED = ON
| DURABILITY = {SCHEMA_ONLY | SCHEMA_AND_DATA}
| SYSTEM_VERSIONING = ON [ ( HISTORY_TABLE = schema_name . history_table_name
[, DATA_CONSISTENCY_CHECK = { ON | OFF } ] ) ]

Arguments
database_name
Is the name of the database in which the table is created. database_name must specify the name of an
existing database. If not specified, database_name defaults to the current database. The login for the current
connection must be associated with an existing user ID in the database specified by database_name, and
that user ID must have CREATE TABLE permissions.
schema_name
Is the name of the schema to which the new table belongs.
table_name
Is the name of the new table. Table names must follow the rules for identifiers. table_name can be a
maximum of 128 characters, except for local temporary table names (names prefixed with a single number
sign (#)) that cannot exceed 116 characters.
AS FileTable
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Creates the new table as a FileTable. You do not specify columns because a FileTable has a fixed schema. For
more information about FileTables, see FileTables (SQL Server).
column_name
computed_column_expression
Is an expression that defines the value of a computed column. A computed column is a virtual column that is
not physically stored in the table, unless the column is marked PERSISTED. The column is computed from
an expression that uses other columns in the same table. For example, a computed column can have the
definition: cost AS price * qty. The expression can be a noncomputed column name, constant, function,
variable, and any combination of these connected by one or more operators. The expression cannot be a
subquery or contain alias data types.
Computed columns can be used in select lists, WHERE clauses, ORDER BY clauses, or any other locations in
which expressions can be used, with the following exceptions:
Computed columns must be marked PERSISTED to participate in a FOREIGN KEY or CHECK
constraint.
A computed column can be used as a key column in an index or as part of any PRIMARY KEY or
UNIQUE constraint, if the computed column value is defined by a deterministic expression and the
data type of the result is allowed in index columns.
For example, if the table has integer columns a and b, the computed column a+b may be indexed, but
computed column a+DATEPART(dd, GETDATE()) cannot be indexed because the value may
change in subsequent invocations.
A computed column cannot be the target of an INSERT or UPDATE statement.

NOTE
Each row in a table can have different values for columns that are involved in a computed column; therefore, the
computed column may not have the same value for each row.

Based on the expressions that are used, the nullability of computed columns is determined automatically by
the Database Engine. The result of most expressions is considered nullable even if only nonnullable columns
are present, because possible underflows or overflows also produce NULL results. Use the
COLUMNPROPERTY function with the AllowsNull property to investigate the nullability of any computed
column in a table. An expression that is nullable can be turned into a nonnullable one by specifying ISNULL
with the check_expression constant, where the constant is a nonnull value substituted for any NULL result.
REFERENCES permission on the type is required for computed columns based on common language
runtime (CLR) user-defined type expressions.
PERSISTED
Specifies that the SQL Server Database Engine will physically store the computed values in the table, and
update the values when any other columns on which the computed column depends are updated. Marking a
computed column as PERSISTED lets you create an index on a computed column that is deterministic, but
not precise. For more information, see Indexes on Computed Columns. Any computed columns that are
used as partitioning columns of a partitioned table must be explicitly marked PERSISTED.
computed_column_expression must be deterministic when PERSISTED is specified.
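As a sketch, the following hypothetical table defines a deterministic persisted computed column, which can then
participate in a CHECK constraint or be indexed.

CREATE TABLE dbo.OrderLines
(
OrderID int NOT NULL,
Price money NOT NULL,
Qty int NOT NULL,
LineTotal AS (Price * Qty) PERSISTED,
CONSTRAINT CK_LineTotal CHECK (LineTotal >= 0)
);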
ON { partition_scheme | filegroup | "default" }
Specifies the partition scheme or filegroup on which the table is stored. If partition_scheme is specified, the
table is to be a partitioned table whose partitions are stored on a set of one or more filegroups specified in
partition_scheme. If filegroup is specified, the table is stored in the named filegroup. The filegroup must exist
within the database. If "default" is specified, or if ON is not specified at all, the table is stored on the default
filegroup. The storage mechanism of a table as specified in CREATE TABLE cannot be subsequently altered.
ON {partition_scheme | filegroup | "default"} can also be specified in a PRIMARY KEY or UNIQUE
constraint. These constraints create indexes. If filegroup is specified, the index is stored in the named
filegroup. If "default" is specified, or if ON is not specified at all, the index is stored in the same filegroup as
the table. If the PRIMARY KEY or UNIQUE constraint creates a clustered index, the data pages for the table
are stored in the same filegroup as the index. If CLUSTERED is specified or the constraint otherwise creates
a clustered index, and a partition_scheme is specified that differs from the partition_scheme or filegroup of
the table definition, or vice-versa, only the constraint definition will be honored, and the other will be
ignored.

NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in ON
"default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current
session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).

NOTE
After you create a partitioned table, consider setting the LOCK_ESCALATION option for the table to AUTO. This can
improve concurrency by enabling locks to escalate to partition (HoBT) level instead of the table. For more information,
see ALTER TABLE (Transact-SQL).
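As a sketch, the following creates a table on a partition scheme; the partition function, scheme, and table are all
hypothetical, and the function and scheme must be created first.

CREATE PARTITION FUNCTION pfYears (int)
AS RANGE RIGHT FOR VALUES (2016, 2017, 2018);
CREATE PARTITION SCHEME psYears
AS PARTITION pfYears ALL TO ([PRIMARY]);
CREATE TABLE dbo.SalesByYear
(
SaleYear int NOT NULL,
Amount money NOT NULL
) ON psYears (SaleYear);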

TEXTIMAGE_ON { filegroup| "default" }


Indicates that the text, ntext, image, xml, varchar(max), nvarchar(max), varbinary(max), and CLR
user-defined type columns (including geometry and geography) are stored on the specified filegroup.
TEXTIMAGE_ON is not allowed if there are no large value columns in the table. TEXTIMAGE_ON cannot be
specified if partition_scheme is specified. If "default" is specified, or if TEXTIMAGE_ON is not specified at all,
the large value columns are stored in the default filegroup. The storage of any large value column data
specified in CREATE TABLE cannot be subsequently altered.

NOTE
Varchar(max), nvarchar(max), varbinary(max), xml and large UDT values are stored directly in the data row, up to a
limit of 8000 bytes, as long as the value can fit in the record. If the value does not fit in the record, a pointer is
stored in-row and the rest is stored out of row in the LOB storage space. TEXTIMAGE_ON only changes the location
of the "LOB storage space"; it does not affect when data is stored in-row. Use the large value types out of row
option of sp_tableoption to store the entire LOB value out of the row; that option's default value is 0, which keeps
values in-row up to the limit.

NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in
TEXTIMAGE_ON "default" or TEXTIMAGE_ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must
be ON for the current session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER
(Transact-SQL).

FILESTREAM_ON { partition_scheme_name | filegroup | "default" } Applies to: SQL Server.


Specifies the filegroup for FILESTREAM data.
If the table contains FILESTREAM data and the table is partitioned, the FILESTREAM_ON clause must be
included and must specify a partition scheme of FILESTREAM filegroups. This partition scheme must use
the same partition function and partition columns as the partition scheme for the table; otherwise, an error
is raised.
If the table is not partitioned, the FILESTREAM column cannot be partitioned. FILESTREAM data for the
table must be stored in a single filegroup. This filegroup is specified in the FILESTREAM_ON clause.
If the table is not partitioned and the FILESTREAM_ON clause is not specified, the FILESTREAM filegroup
that has the DEFAULT property set is used. If there is no FILESTREAM filegroup, an error is raised.
As with ON and TEXTIMAGE_ON, the value set by using CREATE TABLE for FILESTREAM_ON
cannot be changed, except in the following cases:
A CREATE INDEX statement converts a heap into a clustered index. In this case, a different
FILESTREAM filegroup, partition scheme, or NULL can be specified.
A DROP INDEX statement converts a clustered index into a heap. In this case, a different
FILESTREAM filegroup, partition scheme, or "default" can be specified.
The filegroup in the FILESTREAM_ON <filegroup> clause, or each FILESTREAM filegroup that is named
in the partition scheme, must have one file defined for the filegroup. This file must be defined by
using a CREATE DATABASE or ALTER DATABASE statement; otherwise, an error is raised.
For related FILESTREAM topics, see Binary Large Object (Blob) Data (SQL Server).
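For example, the following sketch creates a nonpartitioned FILESTREAM table; the names are illustrative, and the FILESTREAM filegroup FileStreamGroup1 must already exist with one file defined:

CREATE TABLE dbo.Photos (
    PhotoID uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    Photo varbinary(max) FILESTREAM NULL
) FILESTREAM_ON FileStreamGroup1;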
[ type_schema_name. ] type_name
Specifies the data type of the column, and the schema to which it belongs. For disk-based tables, the
data type can be one of the following:
A system data type.
An alias type based on a SQL Server system data type. Alias data types are created with the CREATE
TYPE statement before they can be used in a table definition. The NULL or NOT NULL assignment
for an alias data type can be overridden during the CREATE TABLE statement. However, the length
specification cannot be changed; the length for an alias data type cannot be specified in a CREATE
TABLE statement.
A CLR user-defined type. CLR user-defined types are created with the CREATE TYPE statement
before they can be used in a table definition. To create a column on CLR user-defined type,
REFERENCES permission is required on the type.
If type_schema_name is not specified, the SQL Server Database Engine references type_name in the
following order:
The SQL Server system data type.
The default schema of the current user in the current database.
The dbo schema in the current database.
For memory-optimized tables, see Supported Data Types for In-Memory OLTP for a list of
supported system types.
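For example, the following sketch (the type and table names are illustrative) creates an alias type and then uses it in a table definition:

CREATE TYPE dbo.SSN FROM varchar(11) NOT NULL;
GO
CREATE TABLE dbo.Employees (
    EmployeeID int PRIMARY KEY,
    EmployeeSSN dbo.SSN
);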
precision
Is the precision for the specified data type. For more information about valid precision values, see
Precision, Scale, and Length.
scale
Is the scale for the specified data type. For more information about valid scale values, see Precision,
Scale, and Length.
max
Applies only to the varchar, nvarchar, and varbinary data types for storing 2^31 bytes of character
and binary data, and 2^30 bytes of Unicode data.
CONTENT
Specifies that each instance of the xml data type in column_name can contain multiple top-level
elements. CONTENT applies only to the xml data type and can be specified only if
xml_schema_collection is also specified. If not specified, CONTENT is the default behavior.
DOCUMENT
Specifies that each instance of the xml data type in column_name can contain only one top-level
element. DOCUMENT applies only to the xml data type and can be specified only if
xml_schema_collection is also specified.
xml_schema_collection
Applies only to the xml data type for associating an XML schema collection with the type. Before
typing an xml column to a schema, the schema must first be created in the database by using
CREATE XML SCHEMA COLLECTION.
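For example, the following sketch types an xml column against a schema collection; dbo.OrderSchemaCollection is an illustrative name for a collection that must already have been created with CREATE XML SCHEMA COLLECTION:

CREATE TABLE dbo.PurchaseOrders (
    OrderID int PRIMARY KEY,
    OrderDoc xml (DOCUMENT dbo.OrderSchemaCollection) NULL
);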
DEFAULT
Specifies the value provided for the column when a value is not explicitly supplied during an insert.
DEFAULT definitions can be applied to any columns except those defined as timestamp, or those
with the IDENTITY property. If a default value is specified for a user-defined type column, the type
should support an implicit conversion from constant_expression to the user-defined type. DEFAULT
definitions are removed when the table is dropped. Only a constant value, such as a character string; a
scalar function (either a system, user-defined, or CLR function); or NULL can be used as a default. To
maintain compatibility with earlier versions of SQL Server, a constraint name can be assigned to a
DEFAULT.
constant_expression
Is a constant, NULL, or a system function that is used as the default value for the column.
memory_optimized_constant_expression
Is a constant, NULL, or a system function that is used as the default value for the
column. It must be supported in natively compiled stored procedures. For more information about
built-in functions in natively compiled stored procedures, see Supported Features for Natively
Compiled T-SQL Modules.
IDENTITY
Indicates that the new column is an identity column. When a new row is added to the table, the
Database Engine provides a unique, incremental value for the column. Identity columns are typically
used with PRIMARY KEY constraints to serve as the unique row identifier for the table. The
IDENTITY property can be assigned to tinyint, smallint, int, bigint, decimal(p,0), or numeric(p,0)
columns. Only one identity column can be created per table. Bound defaults and DEFAULT
constraints cannot be used with an identity column. Both the seed and increment or neither must be
specified. If neither is specified, the default is (1,1).
In a memory-optimized table, the only allowed value for both seed and increment is 1; (1,1) is the
default for seed and increment.
seed
Is the value used for the very first row loaded into the table.
increment
Is the incremental value added to the identity value of the previous row loaded.
NOT FOR REPLICATION
In the CREATE TABLE statement, the NOT FOR REPLICATION clause can be specified for the
IDENTITY property, FOREIGN KEY constraints, and CHECK constraints. If this clause is specified for
the IDENTITY property, values are not incremented in identity columns when replication agents
perform inserts. If this clause is specified for a constraint, the constraint is not enforced when
replication agents perform insert, update, or delete operations.
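For example, the following sketch (names and seed values are illustrative) combines an explicit seed and increment with NOT FOR REPLICATION:

CREATE TABLE dbo.Orders (
    OrderID int IDENTITY(1000, 10) NOT FOR REPLICATION PRIMARY KEY,
    OrderDate datetime2 NOT NULL
);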
GENERATED ALWAYS AS ROW { START | END } [ HIDDEN ] [ NOT NULL ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Specifies that a specified datetime2 column will be used by the system to record either the start time
for which a record is valid or the end time for which a record is valid. The column must be defined as
NOT NULL. If you attempt to specify them as NULL, the system will throw an error. If you do not
explicitly specify NOT NULL for a period column, the system will define the column as NOT NULL by
default. Use this argument in conjunction with the PERIOD FOR SYSTEM_TIME and WITH
SYSTEM_VERSIONING = ON arguments to enable system versioning on a table. For more
information, see Temporal Tables.
You can mark one or both period columns with HIDDEN flag to implicitly hide these columns such
that SELECT * FROM <table> does not return a value for those columns. By default, period columns
are not hidden. In order to be used, hidden columns must be explicitly included in all queries that
directly reference the temporal table. To change the HIDDEN attribute for an existing period column,
PERIOD must be dropped and recreated with a different hidden flag.
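For example, the following sketch (table and column names are illustrative) defines hidden period columns and enables system versioning in a single statement:

CREATE TABLE dbo.Department (
    DeptID int NOT NULL PRIMARY KEY CLUSTERED,
    DeptName varchar(50) NOT NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
    SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DepartmentHistory));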
INDEX index_name [ CLUSTERED | NONCLUSTERED ] ( column_name [ ASC | DESC ] [ ,...n ] )

Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies to create an index on the table. This can be a clustered index, or a nonclustered index. The index will
contain the columns listed, and will sort the data in either ascending or descending order.
INDEX index_name CLUSTERED COLUMNSTORE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies to store the entire table in columnar format with a clustered columnstore index. This always
includes all columns in the table. The data is not sorted in alphabetical or numeric order since the rows are
organized to gain columnstore compression benefits.
INDEX index_name [ NONCLUSTERED ] COLUMNSTORE (column_name [ ,... n ] )
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies to create a nonclustered columnstore index on the table. The underlying table can be a rowstore
heap or clustered index, or it can be a clustered columnstore index. In all cases, creating a nonclustered
columnstore index on a table stores a second copy of the data for the columns in the index.
The nonclustered columnstore index is stored and managed as a clustered columnstore index. It is called a
nonclustered columnstore index because the columns can be limited and it exists as a secondary index on
a table.
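For example, the following sketch (names are illustrative; an updatable nonclustered columnstore index requires SQL Server 2016 or later) defines the index inline:

CREATE TABLE dbo.SalesOrderLines (
    OrderID int NOT NULL,
    Quantity int NOT NULL,
    Price money NOT NULL,
    INDEX NCCI_SalesOrderLines NONCLUSTERED COLUMNSTORE (OrderID, Quantity, Price)
);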
ON partition_scheme_name(column_name)
Specifies the partition scheme that defines the filegroups onto which the partitions of a partitioned index
will be mapped. The partition scheme must already exist within the database; it is created by executing either
CREATE PARTITION SCHEME or ALTER PARTITION SCHEME. column_name specifies the column against which a
partitioned index will be partitioned. This column must match the data type, length, and precision of the
argument of the partition function that partition_scheme_name is using. column_name is not restricted to
the columns in the index definition. Any column in the base table can be specified, except when partitioning
a UNIQUE index, column_name must be chosen from among those used as the unique key. This restriction
allows the Database Engine to verify uniqueness of key values within a single partition only.

NOTE
When you partition a non-unique, clustered index, the Database Engine by default adds the partitioning column to
the list of clustered index keys, if it is not already specified. When partitioning a non-unique, nonclustered index, the
Database Engine adds the partitioning column as a non-key (included) column of the index, if it is not already
specified.
If partition_scheme_name or filegroup is not specified and the table is partitioned, the index is placed in the
same partition scheme, using the same partitioning column, as the underlying table.

NOTE
You cannot specify a partitioning scheme on an XML index. If the base table is partitioned, the XML index uses the
same partition scheme as the table.

For more information about partitioning indexes, Partitioned Tables and Indexes.
ON filegroup_name
Creates the specified index on the specified filegroup. If no location is specified and the table or view is not
partitioned, the index uses the same filegroup as the underlying table or view. The filegroup must already
exist.
ON "default"
Creates the specified index on the default filegroup.
The term default, in this context, is not a keyword. It is an identifier for the default filegroup and must be
delimited, as in ON "default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option
must be ON for the current session. This is the default setting. For more information, see SET
QUOTED_IDENTIFIER (Transact-SQL).
[ FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | "NULL" } ]
Applies to: SQL Server.
Specifies the placement of FILESTREAM data for the table when a clustered index is created. The
FILESTREAM_ON clause allows FILESTREAM data to be moved to a different FILESTREAM filegroup or
partition scheme.
filestream_filegroup_name is the name of a FILESTREAM filegroup. The filegroup must have one file
defined for the filegroup by using a CREATE DATABASE or ALTER DATABASE statement; otherwise, an
error is raised.
If the table is partitioned, the FILESTREAM_ON clause must be included and must specify a partition
scheme of FILESTREAM filegroups that uses the same partition function and partition columns as the
partition scheme for the table. Otherwise, an error is raised.
If the table is not partitioned, the FILESTREAM column cannot be partitioned. FILESTREAM data for the
table must be stored in a single filegroup that is specified in the FILESTREAM_ON clause.
FILESTREAM_ON NULL can be specified in a CREATE INDEX statement if a clustered index is being
created and the table does not contain a FILESTREAM column.
For more information, see FILESTREAM (SQL Server).
ROWGUIDCOL
Indicates that the new column is a row GUID column. Only one uniqueidentifier column per table can be
designated as the ROWGUIDCOL column. Applying the ROWGUIDCOL property enables the column to
be referenced using $ROWGUID. The ROWGUIDCOL property can be assigned only to a
uniqueidentifier column. User-defined data type columns cannot be designated with ROWGUIDCOL.
The ROWGUIDCOL property does not enforce uniqueness of the values stored in the column.
ROWGUIDCOL also does not automatically generate values for new rows inserted into the table. To
generate unique values for each column, either use the NEWID or NEWSEQUENTIALID function on
INSERT statements or use these functions as the default for the column.
ENCRYPTED WITH
Specifies encrypting columns by using the Always Encrypted feature.
COLUMN_ENCRYPTION_KEY = key_name
Specifies the column encryption key. For more information, see CREATE COLUMN ENCRYPTION KEY
(Transact-SQL).
ENCRYPTION_TYPE = { DETERMINISTIC | RANDOMIZED }
Deterministic encryption uses a method which always generates the same encrypted value for any given
plain text value. Using deterministic encryption allows searching using equality comparison, grouping, and
joining tables using equality joins based on encrypted values, but can also allow unauthorized users to guess
information about encrypted values by examining patterns in the encrypted column. Joining two tables on
columns encrypted deterministically is only possible if both columns are encrypted using the same column
encryption key. Deterministic encryption must use a column collation with a binary2 sort order for character
columns.
Randomized encryption uses a method that encrypts data in a less predictable manner. Randomized
encryption is more secure, but prevents equality searches, grouping, and joining on encrypted columns.
Columns using randomized encryption cannot be indexed.
Use deterministic encryption for columns that will be search parameters or grouping parameters, for
example a government ID number. Use randomized encryption, for data such as a credit card number, which
is not grouped with other records, or used to join tables, and which is not searched for because you use
other columns (such as a transaction number) to find the row which contains the encrypted column of
interest.
Columns must be of a qualifying data type.
ALGORITHM
Must be 'AEAD_AES_256_CBC_HMAC_SHA_256'.
For more information including feature constraints, see Always Encrypted (Database Engine).
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
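For example, the following sketch encrypts a character column deterministically; the column encryption key name CEK_Auto1 is illustrative and must already exist in the database:

CREATE TABLE dbo.Patients (
    PatientID int PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);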
SPARSE
Indicates that the column is a sparse column. The storage of sparse columns is optimized for null values.
Sparse columns cannot be designated as NOT NULL. For additional restrictions and more information
about sparse columns, see Use Sparse Columns.
MASKED WITH ( FUNCTION = ' mask_function ')
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Specifies a dynamic data mask. mask_function is the name of the masking function with the appropriate
parameters. Four functions are available:
default()
email()
partial()
random()
For function parameters, see Dynamic Data Masking.
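For example, the following sketch (table and column names are illustrative) applies a different masking function to each column:

CREATE TABLE dbo.Customers (
    CustomerID int IDENTITY PRIMARY KEY,
    Email varchar(100) MASKED WITH (FUNCTION = 'email()') NULL,
    Phone varchar(12) MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)') NULL,
    Balance money MASKED WITH (FUNCTION = 'default()') NULL
);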
FILESTREAM
Applies to: SQL Server.
Valid only for varbinary(max) columns. Specifies FILESTREAM storage for the varbinary(max) BLOB
data.
The table must also have a column of the uniqueidentifier data type that has the ROWGUIDCOL attribute.
This column must not allow null values and must have either a UNIQUE or PRIMARY KEY single-column
constraint. The GUID value for the column must be supplied either by an application when inserting data, or
by a DEFAULT constraint that uses the NEWID () function.
The ROWGUIDCOL column cannot be dropped and the related constraints cannot be changed while there
is a FILESTREAM column defined for the table. The ROWGUIDCOL column can be dropped only after the
last FILESTREAM column is dropped.
When the FILESTREAM storage attribute is specified for a column, all values for that column are stored in a
FILESTREAM data container on the file system.
COLLATE collation_name
Specifies the collation for the column. Collation name can be either a Windows collation name or an SQL
collation name. collation_name is applicable only for columns of the char, varchar, text, nchar, nvarchar,
and ntext data types. If not specified, the column is assigned either the collation of the user-defined data
type, if the column is of a user-defined data type, or the default collation of the database.
For more information about the Windows and SQL collation names, see Windows Collation Name and SQL
Collation Name.
For more information about the COLLATE clause, see COLLATE (Transact-SQL).
CONSTRAINT
Is an optional keyword that indicates the start of the definition of a PRIMARY KEY, NOT NULL, UNIQUE,
FOREIGN KEY, or CHECK constraint.
constraint_name
Is the name of a constraint. Constraint names must be unique within the schema to which the table belongs.
NULL | NOT NULL
Determine whether null values are allowed in the column. NULL is not strictly a constraint but can be
specified just like NOT NULL. NOT NULL can be specified for computed columns only if PERSISTED is also
specified.
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column or columns through a unique index. Only
one PRIMARY KEY constraint can be created per table.
UNIQUE
Is a constraint that provides entity integrity for a specified column or columns through a unique index. A
table can have multiple UNIQUE constraints.
CLUSTERED | NONCLUSTERED
Indicate that a clustered or a nonclustered index is created for the PRIMARY KEY or UNIQUE constraint.
PRIMARY KEY constraints default to CLUSTERED, and UNIQUE constraints default to NONCLUSTERED.
In a CREATE TABLE statement, CLUSTERED can be specified for only one constraint. If CLUSTERED is
specified for a UNIQUE constraint and a PRIMARY KEY constraint is also specified, the PRIMARY KEY
defaults to NONCLUSTERED.
The following shows how to use NONCLUSTERED in a disk-based table:
CREATE TABLE t1 ( c1 int, INDEX ix_1 NONCLUSTERED (c1))
CREATE TABLE t2( c1 int INDEX ix_1 NONCLUSTERED (c1))
CREATE TABLE t3( c1 int, c2 int INDEX ix_1 NONCLUSTERED)
CREATE TABLE t4( c1 int, c2 int, INDEX ix_1 NONCLUSTERED (c1,c2))

FOREIGN KEY REFERENCES


Is a constraint that provides referential integrity for the data in the column or columns. FOREIGN KEY
constraints require that each value in the column exists in the corresponding referenced column or columns
in the referenced table. FOREIGN KEY constraints can reference only columns that are PRIMARY KEY or
UNIQUE constraints in the referenced table or columns referenced in a UNIQUE INDEX on the referenced
table. Foreign keys on computed columns must also be marked PERSISTED.
[ schema_name. ] referenced_table_name
Is the name of the table referenced by the FOREIGN KEY constraint, and the schema to which it belongs.
( ref_column [ ,... n ] )
Is a column, or list of columns, from the table referenced by the FOREIGN KEY constraint.
ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }
Specifies what action happens to rows in the table created, if those rows have a referential relationship and
the referenced row is deleted from the parent table. The default is NO ACTION.
NO ACTION
The Database Engine raises an error and the delete action on the row in the parent table is rolled back.
CASCADE
Corresponding rows are deleted from the referencing table if that row is deleted from the parent table.
SET NULL
All the values that make up the foreign key are set to NULL if the corresponding row in the parent table is
deleted. For this constraint to execute, the foreign key columns must be nullable.
SET DEFAULT
All the values that make up the foreign key are set to their default values if the corresponding row in the
parent table is deleted. For this constraint to execute, all foreign key columns must have default definitions. If
a column is nullable, and there is no explicit default value set, NULL becomes the implicit default value of the
column.
Do not specify CASCADE if the table will be included in a merge publication that uses logical records. For
more information about logical records, see Group Changes to Related Rows with Logical Records.
ON DELETE CASCADE cannot be defined if an INSTEAD OF trigger ON DELETE already exists on the
table.
For example, in the AdventureWorks2012 database, the ProductVendor table has a referential
relationship with the Vendor table. The ProductVendor.BusinessEntityID foreign key references the
Vendor.BusinessEntityID primary key.
If a DELETE statement is executed on a row in the Vendor table, and an ON DELETE CASCADE action is
specified for ProductVendor.BusinessEntityID, the Database Engine checks for one or more dependent
rows in the ProductVendor table. If any exist, the dependent rows in the ProductVendor table are deleted,
and also the row referenced in the Vendor table.
Conversely, if NO ACTION is specified, the Database Engine raises an error and rolls back the delete action
on the Vendor row if there is at least one row in the ProductVendor table that references it.
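The following sketch shows the column-level syntax; the names are illustrative, and dbo.Orders is assumed to exist with OrderID as its primary key:

CREATE TABLE dbo.OrderDetails (
    OrderID int NOT NULL
        REFERENCES dbo.Orders (OrderID) ON DELETE CASCADE,
    LineNumber int NOT NULL,
    PRIMARY KEY (OrderID, LineNumber)
);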
ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }
Specifies what action happens to rows in the table altered when those rows have a referential relationship
and the referenced row is updated in the parent table. The default is NO ACTION.
NO ACTION
The Database Engine raises an error, and the update action on the row in the parent table is rolled back.
CASCADE
Corresponding rows are updated in the referencing table when that row is updated in the parent table.
SET NULL
All the values that make up the foreign key are set to NULL when the corresponding row in the parent table
is updated. For this constraint to execute, the foreign key columns must be nullable.
SET DEFAULT
All the values that make up the foreign key are set to their default values when the corresponding row in the
parent table is updated. For this constraint to execute, all foreign key columns must have default definitions.
If a column is nullable, and there is no explicit default value set, NULL becomes the implicit default value of
the column.
Do not specify CASCADE if the table will be included in a merge publication that uses logical records. For
more information about logical records, see Group Changes to Related Rows with Logical Records.
ON UPDATE CASCADE, SET NULL, or SET DEFAULT cannot be defined if an INSTEAD OF trigger ON
UPDATE already exists on the table that is being altered.
For example, in the AdventureWorks2012 database, the ProductVendor table has a referential
relationship with the Vendor table: the ProductVendor.BusinessEntityID foreign key references the
Vendor.BusinessEntityID primary key.
If an UPDATE statement is executed on a row in the Vendor table, and an ON UPDATE CASCADE action is
specified for ProductVendor.BusinessEntityID, the Database Engine checks for one or more dependent
rows in the ProductVendor table. If any exist, the dependent rows in the ProductVendor table are
updated, and also the row referenced in the Vendor table.
Conversely, if NO ACTION is specified, the Database Engine raises an error and rolls back the update action
on the Vendor row if there is at least one row in the ProductVendor table that references it.
CHECK
Is a constraint that enforces domain integrity by limiting the possible values that can be entered into a
column or columns. CHECK constraints on computed columns must also be marked PERSISTED.
logical_expression
Is a logical expression that returns TRUE or FALSE. Alias data types cannot be part of the expression.
column
Is a column or list of columns, in parentheses, used in table constraints to indicate the columns used in the
constraint definition.
[ ASC | DESC ]
Specifies the order in which the column or columns participating in table constraints are sorted. The default
is ASC.
partition_scheme_name
Is the name of the partition scheme that defines the filegroups onto which the partitions of a partitioned
table will be mapped. The partition scheme must exist within the database.
[ partition_column_name. ]
Specifies the column against which a partitioned table will be partitioned. The column must match that
specified in the partition function that partition_scheme_name is using in terms of data type, length, and
precision. A computed column that participates in a partition function must be explicitly marked
PERSISTED.

IMPORTANT
We recommend that you specify NOT NULL on the partitioning column of partitioned tables, and also nonpartitioned
tables that are sources or targets of ALTER TABLE...SWITCH operations. Doing this makes sure that any CHECK
constraints on partitioning columns do not have to check for null values.

WITH FILLFACTOR = fillfactor


Specifies how full the Database Engine should make each index page that is used to store the index data.
User-specified fillfactor values can be from 1 through 100. If a value is not specified, the default is 0. Fill
factor values 0 and 100 are the same in all respects.

IMPORTANT
Specifying WITH FILLFACTOR = fillfactor as the only index option that applies to PRIMARY KEY or UNIQUE
constraints is maintained for backward compatibility, but will not be documented in this manner in future releases.

column_set_name XML COLUMN_SET FOR ALL_SPARSE_COLUMNS


Is the name of the column set. A column set is an untyped XML representation that combines all of the
sparse columns of a table into a structured output. For more information about column sets, see Use
Column Sets.
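For example, the following sketch (names are illustrative) combines sparse columns with a column set:

CREATE TABLE dbo.ProductProperties (
    ProductID int PRIMARY KEY,
    Color varchar(20) SPARSE NULL,
    Weight decimal(10,2) SPARSE NULL,
    AllProperties xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);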
PERIOD FOR SYSTEM_TIME (system_start_time_column_name , system_end_time_column_name )
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Specifies the names of the columns that the system will use to record the period for which a record is valid.
Use this argument in conjunction with the GENERATED ALWAYS AS ROW { START | END } and WITH
SYSTEM_VERSIONING = ON arguments to enable system versioning on a table. For more information,
see Temporal Tables.
COMPRESSION_DELAY
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
For a memory-optimized table, delay specifies the minimum number of minutes a row must remain in the table,
unchanged, before it is eligible for compression into the columnstore index. SQL Server selects specific rows
to compress according to their last update time. For example, if rows are changing frequently during a two-
hour period of time, you could set COMPRESSION_DELAY = 120 minutes to ensure updates are
completed before SQL Server compresses the row.
For a disk-based table, delay specifies the minimum number of minutes a delta rowgroup in the CLOSED
state must remain in the delta rowgroup before SQL Server can compress it into the compressed rowgroup.
Since disk-based tables don't track insert and update times on individual rows, SQL Server applies the delay
to delta rowgroups in the CLOSED state.
The default is 0 minutes.
For recommendations on when to use COMPRESSION_DELAY, see Get started with Columnstore
for real-time operational analytics.
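For example, the following sketch (names and the delay value are illustrative) sets the option on an inline clustered columnstore index:

CREATE TABLE dbo.SensorReadings (
    ReadingID bigint NOT NULL,
    ReadingValue float NOT NULL,
    INDEX CCI_SensorReadings CLUSTERED COLUMNSTORE WITH (COMPRESSION_DELAY = 120)
);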
< table_option> ::= Specifies one or more table options.
DATA_COMPRESSION
Specifies the data compression option for the specified table, partition number, or range of partitions. The
options are as follows:
NONE
Table or specified partitions are not compressed.
ROW
Table or specified partitions are compressed by using row compression.
PAGE
Table or specified partitions are compressed by using page compression.
COLUMNSTORE
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Applies only to columnstore indexes, including both nonclustered columnstore and clustered columnstore
indexes. COLUMNSTORE specifies to compress with the most performant columnstore compression. This
is the typical choice.
COLUMNSTORE_ARCHIVE
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Applies only to columnstore indexes, including both nonclustered columnstore and clustered columnstore
indexes. COLUMNSTORE_ARCHIVE will further compress the table or partition to a smaller size. This can
be used for archival, or for other situations that require a smaller storage size and can afford more time for
storage and retrieval.
For more information about compression, see Data Compression.
ON PARTITIONS ( { <partition_number_expression> | <range> } [ ,...n ] )
Specifies the partitions to which the DATA_COMPRESSION setting applies. If the table is not partitioned,
the ON PARTITIONS argument will generate an error. If the ON PARTITIONS clause is not provided, the
DATA_COMPRESSION option will apply to all partitions of a partitioned table.
partition_number_expression can be specified in the following ways:
Provide the partition number of a partition, for example: ON PARTITIONS (2).
Provide the partition numbers for several individual partitions separated by commas, for example:
ON PARTITIONS (1, 5).
Provide both ranges and individual partitions, for example: ON PARTITIONS (2, 4, 6 TO 8)
<range> can be specified as partition numbers separated by the word TO, for example: ON
PARTITIONS (6 TO 8).
To set different types of data compression for different partitions, specify the DATA_COMPRESSION
option more than once, for example:

WITH
(
DATA_COMPRESSION = NONE ON PARTITIONS (1),
DATA_COMPRESSION = ROW ON PARTITIONS (2, 4, 6 TO 8),
DATA_COMPRESSION = PAGE ON PARTITIONS (3, 5)
)

<index_option> ::=
Specifies one or more index options. For a complete description of these options, see CREATE INDEX
(Transact-SQL).
PAD_INDEX = { ON | OFF }
When ON, the percentage of free space specified by FILLFACTOR is applied to the intermediate level pages
of the index. When OFF or a FILLFACTOR value is not specified, the intermediate level pages are filled to
near capacity leaving enough space for at least one row of the maximum size the index can have, considering
the set of keys on the intermediate pages. The default is OFF.
FILLFACTOR =fillfactor
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index
page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0.
Fill factor values 0 and 100 are the same in all respects.
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique
index. The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt.
The option has no effect when executing CREATE INDEX, ALTER INDEX, or UPDATE. The default is OFF.
ON
A warning message will occur when duplicate key values are inserted into a unique index. Only the rows
violating the uniqueness constraint will fail.
OFF
An error message will occur when duplicate key values are inserted into a unique index. The entire INSERT
operation will be rolled back.
IGNORE_DUP_KEY cannot be set to ON for indexes created on a view, non-unique indexes, XML indexes,
spatial indexes, and filtered indexes.
To view IGNORE_DUP_KEY, use sys.indexes.
In backward compatible syntax, WITH IGNORE_DUP_KEY is equivalent to WITH IGNORE_DUP_KEY =
ON.
STATISTICS_NORECOMPUTE = { ON | OFF }
When ON, out-of-date index statistics are not automatically recomputed. When OFF, automatic statistics
updating are enabled. The default is OFF.
ALLOW_ROW_LOCKS = { ON | OFF }
When ON, row locks are allowed when you access the index. The Database Engine determines when row
locks are used. When OFF, row locks are not used. The default is ON.
ALLOW_PAGE_LOCKS = { ON | OFF }
When ON, page locks are allowed when you access the index. The Database Engine determines when page
locks are used. When OFF, page locks are not used. The default is ON.
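For example, the following sketch (names and option values are illustrative) sets several index options on a UNIQUE constraint:

CREATE TABLE dbo.Inventory (
    SKU varchar(20) NOT NULL,
    Quantity int NOT NULL,
    CONSTRAINT UQ_Inventory_SKU UNIQUE NONCLUSTERED (SKU)
        WITH (IGNORE_DUP_KEY = ON, FILLFACTOR = 80, PAD_INDEX = ON)
);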
FILETABLE_DIRECTORY = directory_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the Windows-compatible FileTable directory name. This name should be unique among all the
FileTable directory names in the database. Uniqueness comparison is case-insensitive, regardless of collation
settings. If this value is not specified, the name of the filetable is used.
FILETABLE_COLLATE_FILENAME = { collation_name | database_default }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the name of the collation to be applied to the Name column in the FileTable. The collation must be
case-insensitive to comply with Windows file naming semantics. If this value is not specified, the database
default collation is used. If the database default collation is case-sensitive, an error is raised and the CREATE
TABLE operation fails.
collation_name
The name of a case-insensitive collation.
database_default
Specifies that the default collation for the database should be used. This collation must be case-insensitive.
FILETABLE_PRIMARY_KEY_CONSTRAINT_NAME = constraint_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the name to be used for the primary key constraint that is automatically created on the FileTable. If
this value is not specified, the system generates a name for the constraint.
FILETABLE_STREAMID_UNIQUE_CONSTRAINT_NAME = constraint_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the name to be used for the unique constraint that is automatically created on the stream_id
column in the FileTable. If this value is not specified, the system generates a name for the constraint.
FILETABLE_FULLPATH_UNIQUE_CONSTRAINT_NAME = constraint_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the name to be used for the unique constraint that is automatically created on the
parent_path_locator and name columns in the FileTable. If this value is not specified, the system generates
a name for the constraint.
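For example, the following sketch (the directory name is illustrative; the database must already be enabled for FILESTREAM and nontransactional access) creates a FileTable with explicit options:

CREATE TABLE dbo.DocumentStore AS FILETABLE
WITH (
    FILETABLE_DIRECTORY = 'DocumentStore',
    FILETABLE_COLLATE_FILENAME = database_default
);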
SYSTEM_VERSIONING = ON [ ( HISTORY_TABLE = schema_name . history_table_name [,
DATA_CONSISTENCY_CHECK = { ON | OFF } ] ) ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Enables system versioning of the table if the datatype, nullability constraint, and primary key constraint
requirements are met. If the HISTORY_TABLE argument is not used, the system generates a new history
table matching the schema of the current table in the same filegroup as the current table, creating a link
between the two tables and enables the system to record the history of each record in the current table in
the history table. The name of this history table will be MSSQL_TemporalHistoryFor<primary_table_object_id> .
By default, the history table is PAGE compressed. If the HISTORY_TABLE argument is used to create a link
to and use an existing history table, the link is created between the current table and the specified table. If
the current table is partitioned, the history table is created on the default filegroup because partitioning
configuration is not replicated automatically from the current table to the history table. If the name of a
history table is specified during history table creation, you must specify the schema and table name. When
creating a link to an existing history table, you can choose to perform a data consistency check. This data
consistency check ensures that existing records do not overlap. Performing the data consistency check is the
default. Use this argument in conjunction with the PERIOD FOR SYSTEM_TIME and GENERATED
ALWAYS AS ROW { START | END } arguments to enable system versioning on a table. For more
information, see Temporal Tables.
REMOTE_DATA_ARCHIVE = { ON [ ( table_stretch_options [,...n] ) ] | OFF ( MIGRATION_STATE = PAUSED
)}
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Creates the new table with Stretch Database enabled or disabled. For more info, see Stretch Database.
Enabling Stretch Database for a table
When you enable Stretch for a table by specifying ON, you can optionally specify
MIGRATION_STATE = OUTBOUND to begin migrating data immediately, or MIGRATION_STATE = PAUSED to postpone
data migration. The default value is MIGRATION_STATE = OUTBOUND. For more info about enabling Stretch for a
table, see Enable Stretch Database for a table.
Prerequisites. Before you enable Stretch for a table, you have to enable Stretch on the server and on the
database. For more info, see Enable Stretch Database for a database.
Permissions. Enabling Stretch for a database or a table requires db_owner permissions. Enabling Stretch for
a table also requires ALTER permissions on the table.
[ FILTER_PREDICATE = { null | predicate } ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Optionally specifies a filter predicate to select rows to migrate from a table that contains both historical and
current data. The predicate must call a deterministic inline table-valued function. For more info, see Enable
Stretch Database for a table and Select rows to migrate by using a filter function.

IMPORTANT
If you provide a filter predicate that performs poorly, data migration also performs poorly. Stretch Database applies
the filter predicate to the table by using the CROSS APPLY operator.

If you don't specify a filter predicate, the entire table is migrated.


When you specify a filter predicate, you also have to specify MIGRATION_STATE.
MIGRATION_STATE = { OUTBOUND | INBOUND | PAUSED }
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, and Azure SQL Database.
Specify OUTBOUND to migrate data from SQL Server to Azure.
Specify INBOUND to copy the remote data for the table from Azure back to SQL Server and to disable
Stretch for the table. For more info, see Disable Stretch Database and bring back remote data.
This operation incurs data transfer costs, and it can't be canceled.
Specify PAUSED to pause or postpone data migration. For more info, see Pause and resume data
migration (Stretch Database).
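For example, the following sketch (names are illustrative; Stretch must already be enabled on the server and the database) creates a table with migration postponed:

CREATE TABLE dbo.ColdOrderHistory (
    OrderID int NOT NULL,
    OrderDate datetime2 NOT NULL
)
WITH (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = PAUSED));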
MEMORY_OPTIMIZED
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
The value ON indicates that the table is memory optimized. Memory-optimized tables are part of the In-
Memory OLTP feature, which is used to optimize the performance of transaction processing. To get started
with In-Memory OLTP, see Quick Start 1: In-Memory OLTP Technologies for Faster Transact-SQL
Performance. For more in-depth information about memory-optimized tables see Memory-Optimized
Tables.
The default value OFF indicates that the table is disk-based.
DURABILITY
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
The value of SCHEMA_AND_DATA indicates that the table is durable, meaning that changes are persisted
on disk and survive restart or failover. SCHEMA_AND_DATA is the default value.
The value of SCHEMA_ONLY indicates that the table is non-durable. The table schema is persisted but any
data updates are not persisted upon a restart or failover of the database. DURABILITY=SCHEMA_ONLY is
only allowed with MEMORY_OPTIMIZED=ON.

WARNING
When a table is created with DURABILITY = SCHEMA_ONLY, and READ_COMMITTED_SNAPSHOT is subsequently
changed using ALTER DATABASE, data in the table will be lost.

BUCKET_COUNT
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates the number of buckets that should be created in the hash index. The maximum value for
BUCKET_COUNT in hash indexes is 1,073,741,824. For more information about bucket counts, see Indexes
for Memory-Optimized Tables.
BUCKET_COUNT is a required argument.
INDEX
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Column and table indexes can be specified as part of the CREATE TABLE statement. For details about
adding and removing indexes on memory-optimized tables see: Altering Memory-Optimized Tables
HASH
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates that a HASH index is created.
Hash indexes are supported only on memory-optimized tables.
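For example, the following sketch (names and the bucket count are illustrative; the database must have a MEMORY_OPTIMIZED_DATA filegroup) creates a durable memory-optimized table with a hash primary key:

CREATE TABLE dbo.UserSessions (
    SessionID int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserName nvarchar(50) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);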

Remarks
For information about the number of allowed tables, columns, constraints and indexes, see Maximum
Capacity Specifications for SQL Server.
Space is generally allocated to tables and indexes in increments of one extent at a time. When the SET
MIXED_PAGE_ALLOCATION option of ALTER DATABASE is set to TRUE, or always prior to SQL Server
2016 (13.x), when a table or index is created, it is allocated pages from mixed extents until it has enough
pages to fill a uniform extent. After it has enough pages to fill a uniform extent, another extent is allocated
every time the currently allocated extents become full. For a report about the amount of space allocated and
used by a table, execute sp_spaceused.
The Database Engine does not enforce an order in which DEFAULT, IDENTITY, ROWGUIDCOL, or column
constraints are specified in a column definition.
When a table is created, the QUOTED IDENTIFIER option is always stored as ON in the metadata for the
table, even if the option is set to OFF when the table is created.

Temporary Tables
You can create local and global temporary tables. Local temporary tables are visible only in the current
session, and global temporary tables are visible to all sessions. Temporary tables cannot be partitioned.
Prefix local temporary table names with single number sign (#table_name), and prefix global temporary
table names with a double number sign (##table_name).
SQL statements reference the temporary table by using the value specified for table_name in the CREATE
TABLE statement, for example:

CREATE TABLE #MyTempTable (cola INT PRIMARY KEY);

INSERT INTO #MyTempTable VALUES (1);

If more than one temporary table is created inside a single stored procedure or batch, they must have
different names.
If a local temporary table is created in a stored procedure or application that can be executed at the same
time by several users, the Database Engine must be able to distinguish the tables created by the different
users. The Database Engine does this by internally appending a numeric suffix to each local temporary table
name. The full name of a temporary table as stored in the sysobjects table in tempdb is made up of the
table name specified in the CREATE TABLE statement and the system-generated numeric suffix. To allow for
the suffix, table_name specified for a local temporary name cannot exceed 116 characters.
Temporary tables are automatically dropped when they go out of scope, unless explicitly dropped by using
DROP TABLE:
A local temporary table created in a stored procedure is dropped automatically when the stored
procedure is finished. The table can be referenced by any nested stored procedures executed by the
stored procedure that created the table. The table cannot be referenced by the process that called the
stored procedure that created the table.
All other local temporary tables are dropped automatically at the end of the current session.
Global temporary tables are automatically dropped when the session that created the table ends and
all other tasks have stopped referencing them. The association between a task and a table is
maintained only for the life of a single Transact-SQL statement. This means that a global temporary
table is dropped at the completion of the last Transact-SQL statement that was actively referencing
the table when the creating session ended.
A local temporary table created within a stored procedure or trigger can have the same name as a
temporary table that was created before the stored procedure or trigger is called. However, if a query
references a temporary table and two temporary tables with the same name exist at that time, it is not
defined which table the query is resolved against. Nested stored procedures can also create
temporary tables with the same name as a temporary table that was created by the stored procedure
that called it. However, for modifications to resolve to the table that was created in the nested
procedure, the table must have the same structure, with the same column names, as the table created
in the calling procedure. This is shown in the following example.
CREATE PROCEDURE dbo.Test2
AS
CREATE TABLE #t(x INT PRIMARY KEY);
INSERT INTO #t VALUES (2);
SELECT Test2Col = x FROM #t;
GO

CREATE PROCEDURE dbo.Test1


AS
CREATE TABLE #t(x INT PRIMARY KEY);
INSERT INTO #t VALUES (1);
SELECT Test1Col = x FROM #t;
EXEC Test2;
GO

CREATE TABLE #t(x INT PRIMARY KEY);


INSERT INTO #t VALUES (99);
GO

EXEC Test1;
GO

Here is the result set.

(1 row(s) affected)
Test1Col
-----------
1

(1 row(s) affected)
Test2Col
-----------
2

When you create local or global temporary tables, the CREATE TABLE syntax supports constraint definitions
except for FOREIGN KEY constraints. If a FOREIGN KEY constraint is specified in a temporary table, the
statement returns a warning message that states the constraint was skipped. The table is still created without
the FOREIGN KEY constraints. Temporary tables cannot be referenced in FOREIGN KEY constraints.
If a temporary table is created with a named constraint and the temporary table is created within the scope
of a user-defined transaction, only one user at a time can execute the statement that creates the temp table.
For example, if a stored procedure creates a temporary table with a named primary key constraint, the
stored procedure cannot be executed simultaneously by multiple users.

Database scoped global temporary tables (Azure SQL Database)


Global temporary tables for SQL Server (initiated with ## table name) are stored in tempdb and shared
among all users’ sessions across the whole SQL Server instance. For information on SQL table types, see
the above section on Create Tables.
Azure SQL Database supports global temporary tables that are also stored in tempdb and scoped to the
database level. This means that global temporary tables are shared for all users’ sessions within the same
Azure SQL database. User sessions from other Azure SQL databases cannot access global temporary tables.
Global temporary tables for Azure SQL DB follow the same syntax and semantics that SQL Server uses for
temporary tables. Similarly, global temporary stored procedures are also scoped to the database level in
Azure SQL DB. Local temporary tables (initiated with # table name) are also supported for Azure SQL
Database and follow the same syntax and semantics that SQL Server uses. See the above section on
Temporary Tables.
IMPORTANT
This feature is available for Azure SQL Database only.

Troubleshooting global temporary tables for Azure SQL DB


For troubleshooting tempdb, see Troubleshooting Insufficient Disk Space in tempdb. To access the
troubleshooting DMVs in Azure SQL Database, you must be a server admin.
Permissions
Any user can create global temporary objects. Users can only access their own objects, unless they receive
additional permissions.
Examples
Session A creates a global temp table ##test in Azure SQL Database testdb1 and adds 1 row

CREATE TABLE ##test ( a int, b int);


INSERT INTO ##test values (1,1);

--Obtain object ID for temp table ##test


SELECT OBJECT_ID('tempdb.dbo.##test') AS 'Object ID';

---Result
1253579504

---Obtain global temp table name for a given object ID 1253579504 in tempdb
SELECT name FROM tempdb.sys.objects WHERE object_id = 1253579504

---Result
##test

Session B connects to Azure SQL Database testdb1 and can access table ##test created by session A

SELECT * FROM ##test


---Results
1,1

Session C connects to another database in Azure SQL Database testdb2 and wants to access ##test
created in testdb1. This select fails due to the database scope for the global temp tables

SELECT * FROM ##test


---Results
Msg 208, Level 16, State 0, Line 1
Invalid object name '##test'

Addressing system object in Azure SQL Database tempdb from current user database testdb1

SELECT * FROM tempdb.sys.objects


SELECT * FROM tempdb.sys.columns
SELECT * FROM tempdb.sys.database_files

Partitioned Tables
Before creating a partitioned table by using CREATE TABLE, you must first create a partition function to
specify how the table becomes partitioned. A partition function is created by using CREATE PARTITION
FUNCTION. Second, you must create a partition scheme to specify the filegroups that will hold the
partitions indicated by the partition function. A partition scheme is created by using CREATE PARTITION
SCHEME. Placement of PRIMARY KEY or UNIQUE constraints to separate filegroups cannot be specified
for partitioned tables. For more information, see Partitioned Tables and Indexes.
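The following sketch shows the full sequence with illustrative names and boundary values; all partitions are mapped to the PRIMARY filegroup for brevity:

CREATE PARTITION FUNCTION pfOrderYears (int)
    AS RANGE RIGHT FOR VALUES (2016, 2017);
GO
CREATE PARTITION SCHEME psOrderYears
    AS PARTITION pfOrderYears ALL TO ([PRIMARY]);
GO
CREATE TABLE dbo.OrdersByYear (
    OrderYear int NOT NULL,
    OrderID int NOT NULL
) ON psOrderYears (OrderYear);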

PRIMARY KEY Constraints


A table can contain only one PRIMARY KEY constraint.
The index generated by a PRIMARY KEY constraint cannot cause the number of indexes on the table
to exceed 999 nonclustered indexes and 1 clustered index.
If CLUSTERED or NONCLUSTERED is not specified for a PRIMARY KEY constraint, CLUSTERED is
used if there are no clustered indexes specified for UNIQUE constraints.
All columns defined within a PRIMARY KEY constraint must be defined as NOT NULL. If nullability is
not specified, all columns participating in a PRIMARY KEY constraint have their nullability set to NOT
NULL.

NOTE
For memory-optimized tables, nullable key columns are allowed.

If a primary key is defined on a CLR user-defined type column, the implementation of the type must
support binary ordering. For more information, see CLR User-Defined Types.

UNIQUE Constraints
If CLUSTERED or NONCLUSTERED is not specified for a UNIQUE constraint, NONCLUSTERED is
used by default.
Each UNIQUE constraint generates an index. The number of UNIQUE constraints cannot cause the
number of indexes on the table to exceed 999 nonclustered indexes and 1 clustered index.
If a unique constraint is defined on a CLR user-defined type column, the implementation of the type
must support binary or operator-based ordering. For more information, see CLR User-Defined Types.

FOREIGN KEY Constraints


When a value other than NULL is entered into the column of a FOREIGN KEY constraint, the value
must exist in the referenced column; otherwise, a foreign key violation error message is returned.
FOREIGN KEY constraints are applied to the preceding column, unless source columns are specified.
FOREIGN KEY constraints can reference only tables within the same database on the same server.
Cross-database referential integrity must be implemented through triggers. For more information,
see CREATE TRIGGER (Transact-SQL).
FOREIGN KEY constraints can reference another column in the same table. This is referred to as a
self-reference.
The REFERENCES clause of a column-level FOREIGN KEY constraint can list only one reference
column. This column must have the same data type as the column on which the constraint is defined.
The REFERENCES clause of a table-level FOREIGN KEY constraint must have the same number of
reference columns as the number of columns in the constraint column list. The data type of each
reference column must also be the same as the corresponding column in the column list.
CASCADE, SET NULL or SET DEFAULT cannot be specified if a column of type timestamp is part of
either the foreign key or the referenced key.
CASCADE, SET NULL, SET DEFAULT and NO ACTION can be combined on tables that have
referential relationships with each other. If the Database Engine encounters NO ACTION, it stops and
rolls back related CASCADE, SET NULL and SET DEFAULT actions. When a DELETE statement
causes a combination of CASCADE, SET NULL, SET DEFAULT and NO ACTION actions, all the
CASCADE, SET NULL and SET DEFAULT actions are applied before the Database Engine checks for
any NO ACTION.
The Database Engine does not have a predefined limit on either the number of FOREIGN KEY
constraints a table can contain that reference other tables, or the number of FOREIGN KEY
constraints that are owned by other tables that reference a specific table.
Nevertheless, the actual number of FOREIGN KEY constraints that can be used is limited by the
hardware configuration and by the design of the database and application. We recommend that a
table contain no more than 253 FOREIGN KEY constraints, and that it be referenced by no more than
253 FOREIGN KEY constraints. The effective limit for you may be more or less depending on the
application and hardware. Consider the cost of enforcing FOREIGN KEY constraints when you design
your database and applications.
FOREIGN KEY constraints are not enforced on temporary tables.
FOREIGN KEY constraints can reference only columns in PRIMARY KEY or UNIQUE constraints in
the referenced table or in a UNIQUE INDEX on the referenced table.
If a foreign key is defined on a CLR user-defined type column, the implementation of the type must
support binary ordering. For more information, see CLR User-Defined Types.
Columns participating in a foreign key relationship must be defined with the same length and scale.

DEFAULT Definitions
A column can have only one DEFAULT definition.
A DEFAULT definition can contain constant values, functions, SQL standard niladic functions, or
NULL. The following table shows the niladic functions and the values they return for the default
during an INSERT statement.

SQL-92 NILADIC FUNCTION VALUE RETURNED

CURRENT_TIMESTAMP Current date and time.

CURRENT_USER Name of user performing an insert.

SESSION_USER Name of user performing an insert.

SYSTEM_USER Name of user performing an insert.

USER Name of user performing an insert.

constant_expression in a DEFAULT definition cannot refer to another column in the table, or to other
tables, views, or stored procedures.
DEFAULT definitions cannot be created on columns with a timestamp data type or columns with an
IDENTITY property.
DEFAULT definitions cannot be created for columns with alias data types if the alias data type is
bound to a default object.
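For example, the following sketch (names are illustrative) uses niladic functions as column defaults:

CREATE TABLE dbo.AuditLog (
    EntryID int IDENTITY PRIMARY KEY,
    CreatedAt datetime2 NOT NULL DEFAULT CURRENT_TIMESTAMP,
    CreatedBy sysname NOT NULL DEFAULT CURRENT_USER
);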

CHECK Constraints
A column can have any number of CHECK constraints, and the condition can include multiple logical
expressions combined with AND and OR. Multiple CHECK constraints for a column are validated in
the order they are created.
The search condition must evaluate to a Boolean expression and cannot reference another table.
A column-level CHECK constraint can reference only the constrained column, and a table-level
CHECK constraint can reference only columns in the same table.
CHECK CONSTRAINTS and rules serve the same function of validating the data during INSERT and
UPDATE statements.
When a rule and one or more CHECK constraints exist for a column or columns, all restrictions are
evaluated.
CHECK constraints cannot be defined on text, ntext, or image columns.

Additional Constraint Information


An index created for a constraint cannot be dropped by using DROP INDEX; the constraint must be
dropped by using ALTER TABLE. An index created for and used by a constraint can be rebuilt by
using ALTER INDEX...REBUILD. For more information, see Reorganize and Rebuild Indexes.
Constraint names must follow the rules for identifiers, except that the name cannot start with a
number sign (#). If constraint_name is not supplied, a system-generated name is assigned to the
constraint. The constraint name appears in any error message about constraint violations.
When a constraint is violated in an INSERT, UPDATE, or DELETE statement, the statement is ended.
However, when SET XACT_ABORT is set to OFF, the transaction, if the statement is part of an explicit
transaction, continues to be processed. When SET XACT_ABORT is set to ON, the whole transaction
is rolled back. You can also use the ROLLBACK TRANSACTION statement with the transaction
definition by checking the @@ERROR system function.
When ALLOW_ROW_LOCKS = ON and ALLOW_PAGE_LOCKS = ON, row-, page-, and table-level
locks are allowed when you access the index. The Database Engine chooses the appropriate lock and
can escalate the lock from a row or page lock to a table lock. When ALLOW_ROW_LOCKS = OFF
and ALLOW_PAGE_LOCKS = OFF, only a table-level lock is allowed when you access the index.
If a table has FOREIGN KEY or CHECK CONSTRAINTS and triggers, the constraint conditions are
evaluated before the trigger is executed.
For a report on a table and its columns, use sp_help or sp_helpconstraint. To rename a table, use
sp_rename. For a report on the views and stored procedures that depend on a table, use
sys.dm_sql_referenced_entities and sys.dm_sql_referencing_entities.

Nullability Rules Within a Table Definition


The nullability of a column determines whether that column can allow a null value (NULL) as the data in that
column. NULL is not zero or blank: NULL means no entry was made or an explicit NULL was supplied, and
it typically implies that the value is either unknown or not applicable.
When you use CREATE TABLE or ALTER TABLE to create or alter a table, database and session settings
influence and possibly override the nullability of the data type that is used in a column definition. We
recommend that you always explicitly define a column as NULL or NOT NULL for noncomputed columns
or, if you use a user-defined data type, that you allow the column to use the default nullability of the data
type. Sparse columns must always allow NULL.
When column nullability is not explicitly specified, column nullability follows the rules shown in the
following table.

COLUMN DATA TYPE RULE

Alias data type: The Database Engine uses the nullability that is specified when the data type was
created. To determine the default nullability of the data type, use sp_help.

CLR user-defined type: Nullability is determined according to the column definition.

System-supplied data type: If the system-supplied data type has only one option, it takes precedence.
timestamp data types must be NOT NULL. When any session settings are set ON by using SET:
ANSI_NULL_DFLT_ON = ON, NULL is assigned; ANSI_NULL_DFLT_OFF = ON, NOT NULL is assigned.
When any database settings are configured by using ALTER DATABASE:
ANSI_NULL_DEFAULT_ON = ON, NULL is assigned; ANSI_NULL_DEFAULT_OFF = ON, NOT NULL is assigned.
To view the database setting for ANSI_NULL_DEFAULT, use the sys.databases catalog view.

When neither of the ANSI_NULL_DFLT options is set for the session and the database is set to the default
(ANSI_NULL_DEFAULT is OFF), the default of NOT NULL is assigned.
If the column is a computed column, its nullability is always automatically determined by the Database
Engine. To find out the nullability of this type of column, use the COLUMNPROPERTY function with the
AllowsNull property.
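A minimal sketch of that check, assuming a table dbo.mytable with a computed column myavg (as in Example J later in this topic):

-- Returns 1 if the computed column allows NULL, 0 if it does not.
SELECT COLUMNPROPERTY(OBJECT_ID(N'dbo.mytable'), N'myavg', 'AllowsNull') AS AllowsNull;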

NOTE
The SQL Server ODBC driver and Microsoft OLE DB Provider for SQL Server both default to having
ANSI_NULL_DFLT_ON set to ON. ODBC and OLE DB users can configure this in ODBC data sources, or with
connection attributes or properties set by the application.

Data Compression
System tables cannot be enabled for compression. When you are creating a table, data compression is set to
NONE, unless specified otherwise. If you specify a list of partitions or a partition that is out of range, an
error will be generated. For more information about data compression, see Data Compression.
To evaluate how changing the compression state will affect a table, an index, or a partition, use the
sp_estimate_data_compression_savings stored procedure.
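For instance, assuming the dbo.T1 table from the compression example later in this topic, the following sketch estimates the savings of switching to ROW compression:

EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'T1',
    @index_id = NULL,          -- NULL = all indexes
    @partition_number = NULL,  -- NULL = all partitions
    @data_compression = 'ROW';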

Permissions
Requires CREATE TABLE permission in the database and ALTER permission on the schema in which the
table is being created.
If any columns in the CREATE TABLE statement are defined to be of a user-defined type, REFERENCES
permission on the user-defined type is required.
If any columns in the CREATE TABLE statement are defined to be of a CLR user-defined type, either
ownership of the type or REFERENCES permission on it is required.
If any columns in the CREATE TABLE statement have an XML schema collection associated with them, either
ownership of the XML schema collection or REFERENCES permission on it is required.
Any user can create temporary tables in tempdb.

Examples
A. Create a PRIMARY KEY constraint on a column
The following example shows the column definition for a PRIMARY KEY constraint with a clustered index
on the EmployeeID column of the Employee table. Because a constraint name is not specified, the system
supplies the constraint name.

CREATE TABLE dbo.Employee (EmployeeID int
PRIMARY KEY CLUSTERED);

B. Using FOREIGN KEY constraints


A FOREIGN KEY constraint is used to reference another table. Foreign keys can be single-column keys or
multicolumn keys. The following example shows a single-column FOREIGN KEY constraint on the
SalesOrderHeader table that references the SalesPerson table. Only the REFERENCES clause is required for
a single-column FOREIGN KEY constraint.

SalesPersonID int NULL
REFERENCES SalesPerson(SalesPersonID)

You can also explicitly use the FOREIGN KEY clause and restate the column attribute. Note that the column
name does not have to be the same in both tables.

FOREIGN KEY (SalesPersonID) REFERENCES SalesPerson(SalesPersonID)

Multicolumn key constraints are created as table constraints. In the AdventureWorks2012 database, the
SpecialOfferProduct table includes a multicolumn PRIMARY KEY. The following example shows how to
reference this key from another table; an explicit constraint name is optional.

CONSTRAINT FK_SpecialOfferProduct_SalesOrderDetail FOREIGN KEY
(ProductID, SpecialOfferID)
REFERENCES SpecialOfferProduct (ProductID, SpecialOfferID)

C. Using UNIQUE constraints


UNIQUE constraints are used to enforce uniqueness on nonprimary key columns. The following example
enforces a restriction that the Name column of the Product table must be unique.

Name nvarchar(100) NOT NULL
UNIQUE NONCLUSTERED

D. Using DEFAULT definitions


Defaults supply a value (with the INSERT and UPDATE statements) when no value is supplied. For example,
the AdventureWorks2012 database could include a lookup table listing the different jobs employees can fill
in the company. Under a column that describes each job, a character string default could supply a description
when an actual description is not entered explicitly.

DEFAULT 'New Position - title not formalized yet'

In addition to constants, DEFAULT definitions can include functions. Use the following example to get the
current date for an entry.

DEFAULT (getdate())

A niladic-function scan can also improve data integrity. To keep track of the user that inserted a row, use the
niladic-function for USER. Do not enclose the niladic-functions with parentheses.

DEFAULT USER

E. Using CHECK constraints


The following example shows a restriction made to values that are entered into the CreditRating column of
the Vendor table. The constraint is unnamed.

CHECK (CreditRating >= 1 and CreditRating <= 5)

This example shows a named constraint with a pattern restriction on the character data entered into a
column of a table.

CONSTRAINT CK_emp_id CHECK (emp_id LIKE
'[A-Z][A-Z][A-Z][1-9][0-9][0-9][0-9][0-9][FM]'
OR emp_id LIKE '[A-Z]-[A-Z][1-9][0-9][0-9][0-9][0-9][FM]')

This example specifies that the values must be within a specific list or follow a specified pattern.

CHECK (emp_id IN ('1389', '0736', '0877', '1622', '1756')
OR emp_id LIKE '99[0-9][0-9]')

F. Showing the complete table definition


The following example shows the complete table definition with all constraint definitions for the
PurchaseOrderDetail table created in the AdventureWorks2012 database. Note that to run the sample, the
table schema is changed to dbo.
CREATE TABLE dbo.PurchaseOrderDetail
(
PurchaseOrderID int NOT NULL
REFERENCES Purchasing.PurchaseOrderHeader(PurchaseOrderID),
LineNumber smallint NOT NULL,
ProductID int NULL
REFERENCES Production.Product(ProductID),
UnitPrice money NULL,
OrderQty smallint NULL,
ReceivedQty float NULL,
RejectedQty float NULL,
DueDate datetime NULL,
rowguid uniqueidentifier ROWGUIDCOL NOT NULL
CONSTRAINT DF_PurchaseOrderDetail_rowguid DEFAULT (newid()),
ModifiedDate datetime NOT NULL
CONSTRAINT DF_PurchaseOrderDetail_ModifiedDate DEFAULT (getdate()),
LineTotal AS ((UnitPrice*OrderQty)),
StockedQty AS ((ReceivedQty-RejectedQty)),
CONSTRAINT PK_PurchaseOrderDetail_PurchaseOrderID_LineNumber
PRIMARY KEY CLUSTERED (PurchaseOrderID, LineNumber)
WITH (IGNORE_DUP_KEY = OFF)
)
ON [PRIMARY];

G. Creating a table with an xml column typed to an XML schema collection


The following example creates a table with an xml column that is typed to XML schema collection
HRResumeSchemaCollection. The DOCUMENT keyword specifies that each instance of the xml data type in
column_name can contain only one top-level element.

CREATE TABLE HumanResources.EmployeeResumes
(LName nvarchar(25), FName nvarchar(25),
Resume xml( DOCUMENT HumanResources.HRResumeSchemaCollection) );

H. Creating a partitioned table


The following example creates a partition function to partition a table or index into four partitions. Then, the
example creates a partition scheme that specifies the filegroups in which to hold each of the four partitions.
Finally, the example creates a table that uses the partition scheme. This example assumes the filegroups
already exist in the database.

CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000) ;
GO

CREATE PARTITION SCHEME myRangePS1
AS PARTITION myRangePF1
TO (test1fg, test2fg, test3fg, test4fg) ;
GO

CREATE TABLE PartitionTable (col1 int, col2 char(10))
ON myRangePS1 (col1) ;
GO

Based on the values of column col1 of PartitionTable, the partitions are assigned in the following ways.

Partition 1 (filegroup test1fg): col1 <= 1
Partition 2 (filegroup test2fg): col1 > 1 AND col1 <= 100
Partition 3 (filegroup test3fg): col1 > 100 AND col1 <= 1000
Partition 4 (filegroup test4fg): col1 > 1000

I. Using the uniqueidentifier data type in a column


The following example creates a table with a uniqueidentifier column. The example uses a PRIMARY KEY
constraint to protect the table against users inserting duplicated values, and it uses the NEWSEQUENTIALID()
function in the DEFAULT constraint to provide values for new rows. The ROWGUIDCOL property is applied
to the uniqueidentifier column so that it can be referenced using the $ROWGUID keyword.

CREATE TABLE dbo.Globally_Unique_Data
(guid uniqueidentifier
CONSTRAINT Guid_Default DEFAULT
NEWSEQUENTIALID() ROWGUIDCOL,
Employee_Name varchar(60)
CONSTRAINT Guid_PK PRIMARY KEY (guid) );
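As a quick illustration (not part of the original example), the ROWGUIDCOL column can then be selected through the $ROWGUID keyword:

-- $ROWGUID resolves to the column marked ROWGUIDCOL, here the guid column.
SELECT $ROWGUID AS RowGuid, Employee_Name
FROM dbo.Globally_Unique_Data;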

J. Using an expression for a computed column


The following example shows the use of an expression ( (low + high)/2 ) for calculating the myavg
computed column.

CREATE TABLE dbo.mytable
( low int, high int, myavg AS (low + high)/2 ) ;

K. Creating a computed column based on a user-defined type column


The following example creates a table with one column defined as user-defined type utf8string , assuming
that the type's assembly, and the type itself, have already been created in the current database. A second
column is defined based on utf8string, and uses the ToString() method of type (class) utf8string to
compute a value for the column.

CREATE TABLE UDTypeTable
( u utf8string, ustr AS u.ToString() PERSISTED ) ;

L. Using the USER_NAME function for a computed column


The following example uses the USER_NAME() function in the myuser_name column.

CREATE TABLE dbo.mylogintable
( date_in datetime, user_id int, myuser_name AS USER_NAME() ) ;

M. Creating a table that has a FILESTREAM column


The following example creates a table that has a FILESTREAM column Photo . If a table has one or more
FILESTREAM columns, the table must have one ROWGUIDCOL column.

CREATE TABLE dbo.EmployeePhoto
(
EmployeeId int NOT NULL PRIMARY KEY
,Photo varbinary(max) FILESTREAM NULL
,MyRowGuidColumn uniqueidentifier NOT NULL ROWGUIDCOL
UNIQUE DEFAULT NEWID()
);
N. Creating a table that uses row compression
The following example creates a table that uses row compression.

CREATE TABLE dbo.T1
(c1 int, c2 nvarchar(200) )
WITH (DATA_COMPRESSION = ROW);

For additional data compression examples, see Data Compression.


O. Creating a table that has sparse columns and a column set
The following examples show how to create a table that has a sparse column, and a table that has two
sparse columns and a column set. The examples use the basic syntax. For more complex examples, see Use
Sparse Columns and Use Column Sets.
This example creates a table that has a sparse column.

CREATE TABLE dbo.T1
(c1 int PRIMARY KEY,
c2 varchar(50) SPARSE NULL ) ;

This example creates a table that has two sparse columns and a column set named CSet.

CREATE TABLE T1
(c1 int PRIMARY KEY,
c2 varchar(50) SPARSE NULL,
c3 int SPARSE NULL,
CSet XML COLUMN_SET FOR ALL_SPARSE_COLUMNS ) ;
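A brief usage sketch, assuming the table above: insert through the individual sparse columns, and selecting the column set returns the non-NULL sparse values as XML.

INSERT T1 (c1, c2) VALUES (1, 'a');
INSERT T1 (c1, c3) VALUES (2, 3);

-- CSet returns XML such as <c2>a</c2> for row 1 and <c3>3</c3> for row 2.
SELECT c1, CSet FROM T1;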

P. Creating a system-versioned disk-based temporal table


Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
The following examples show how to create a temporal table linked to a new history table, and how to create
a temporal table linked to an existing history table. Note that the temporal table must have a primary key
defined for the table to be enabled for system versioning. For examples showing how to add
or remove system versioning on an existing table, see System Versioning in Examples. For use cases, see
Temporal Tables.
This example creates a new temporal table linked to a new history table.

CREATE TABLE Department
(
DepartmentNumber char(10) NOT NULL PRIMARY KEY CLUSTERED,
DepartmentName varchar(50) NOT NULL,
ManagerID int NULL,
ParentDepartmentNumber char(10) NULL,
SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON);

This example creates a new temporal table linked to an existing history table.
--Existing table
CREATE TABLE Department_History
(
DepartmentNumber char(10) NOT NULL,
DepartmentName varchar(50) NOT NULL,
ManagerID int NULL,
ParentDepartmentNumber char(10) NULL,
SysStartTime datetime2 NOT NULL,
SysEndTime datetime2 NOT NULL
);
--Temporal table
CREATE TABLE Department
(
DepartmentNumber char(10) NOT NULL PRIMARY KEY CLUSTERED,
DepartmentName varchar(50) NOT NULL,
ManagerID INT NULL,
ParentDepartmentNumber char(10) NULL,
SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime)
)
WITH
(SYSTEM_VERSIONING = ON
(HISTORY_TABLE = dbo.Department_History, DATA_CONSISTENCY_CHECK = ON )
);
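Once system versioning is enabled, point-in-time queries can use the FOR SYSTEM_TIME clause. A minimal sketch (the timestamp is illustrative):

-- Returns the rows as they existed at the specified UTC point in time.
SELECT DepartmentNumber, DepartmentName, ManagerID
FROM Department
FOR SYSTEM_TIME AS OF '2021-01-01T00:00:00';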

Q. Creating a system-versioned memory-optimized temporal table


Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
The following example shows how to create a system-versioned memory-optimized temporal table linked to
a new disk-based history table.
This example creates a new temporal table linked to a new history table.

CREATE SCHEMA History
GO
CREATE TABLE dbo.Department
(
DepartmentNumber char(10) NOT NULL PRIMARY KEY NONCLUSTERED,
DepartmentName varchar(50) NOT NULL,
ManagerID int NULL,
ParentDepartmentNumber char(10) NULL,
SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime)
)
WITH
(
MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA,
SYSTEM_VERSIONING = ON ( HISTORY_TABLE = History.DepartmentHistory )
);

This example creates a new temporal table linked to an existing history table.
--Existing table
CREATE TABLE Department_History
(
DepartmentNumber char(10) NOT NULL,
DepartmentName varchar(50) NOT NULL,
ManagerID int NULL,
ParentDepartmentNumber char(10) NULL,
SysStartTime datetime2 NOT NULL,
SysEndTime datetime2 NOT NULL
);
--Temporal table
CREATE TABLE Department
(
DepartmentNumber char(10) NOT NULL PRIMARY KEY CLUSTERED,
DepartmentName varchar(50) NOT NULL,
ManagerID INT NULL,
ParentDepartmentNumber char(10) NULL,
SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime)
)
WITH
(SYSTEM_VERSIONING = ON
(HISTORY_TABLE = dbo.Department_History, DATA_CONSISTENCY_CHECK = ON )
);

R. Creating a table with encrypted columns


The following example creates a table with two encrypted columns. For more information, see Always
Encrypted (Database Engine).

CREATE TABLE Customers (
CustName nvarchar(60)
ENCRYPTED WITH
(
COLUMN_ENCRYPTION_KEY = MyCEK,
ENCRYPTION_TYPE = RANDOMIZED,
ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
),
SSN varchar(11) COLLATE Latin1_General_BIN2
ENCRYPTED WITH
(
COLUMN_ENCRYPTION_KEY = MyCEK,
ENCRYPTION_TYPE = DETERMINISTIC ,
ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
),
Age int NULL
);

S. Create an inline filtered index


Creates a table with an inline filtered index.

CREATE TABLE t1
(
c1 int,
index IX1 (c1) WHERE c1 > 0
)
GO

See Also
ALTER TABLE (Transact-SQL)
COLUMNPROPERTY (Transact-SQL)
CREATE INDEX (Transact-SQL)
CREATE VIEW (Transact-SQL)
Data Types (Transact-SQL)
DROP INDEX (Transact-SQL)
sys.dm_sql_referenced_entities (Transact-SQL)
sys.dm_sql_referencing_entities (Transact-SQL)
DROP TABLE (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
CREATE TYPE (Transact-SQL)
EVENTDATA (Transact-SQL)
sp_help (Transact-SQL)
sp_helpconstraint (Transact-SQL)
sp_rename (Transact-SQL)
sp_spaceused (Transact-SQL)
CREATE TABLE (Azure SQL Data Warehouse)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates a new table in SQL Data Warehouse or Parallel Data Warehouse.
To understand tables and how to use them, see Tables in SQL Data Warehouse.
NOTE: Discussions about SQL Data Warehouse in this article apply to both SQL Data Warehouse and Parallel
Data Warehouse unless otherwise noted.
Transact-SQL Syntax Conventions

Syntax
-- Create a new table.
CREATE TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
(
{ column_name <data_type> [ <column_options> ] } [ ,...n ]
)
[ WITH ( <table_option> [ ,...n ] ) ]
[;]

<column_options> ::=
[ COLLATE Windows_collation_name ]
[ NULL | NOT NULL ] -- default is NULL
[ [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]

<table_option> ::=
{
CLUSTERED COLUMNSTORE INDEX --default for SQL Data Warehouse
| HEAP --default for Parallel Data Warehouse
| CLUSTERED INDEX ( { index_column_name [ ASC | DESC ] } [ ,...n ] ) -- default is ASC
}
{
DISTRIBUTION = HASH ( distribution_column_name )
| DISTRIBUTION = ROUND_ROBIN -- default for SQL Data Warehouse
| DISTRIBUTION = REPLICATE -- default for Parallel Data Warehouse
}
| PARTITION ( partition_column_name RANGE [ LEFT | RIGHT ] -- default is LEFT
FOR VALUES ( [ boundary_value [,...n] ] ) )

<data type> ::=
datetimeoffset [ ( n ) ]
| datetime2 [ ( n ) ]
| datetime
| smalldatetime
| date
| time [ ( n ) ]
| float [ ( n ) ]
| real [ ( n ) ]
| decimal [ ( precision [ , scale ] ) ]
| numeric [ ( precision [ , scale ] ) ]
| money
| smallmoney
| bigint
| int
| smallint
| tinyint
| bit
| nvarchar [ ( n | max ) ] -- max applies only to SQL Data Warehouse
| nchar [ ( n ) ]
| varchar [ ( n | max ) ] -- max applies only to SQL Data Warehouse
| char [ ( n ) ]
| varbinary [ ( n | max ) ] -- max applies only to SQL Data Warehouse
| binary [ ( n ) ]
| uniqueidentifier

Arguments
database_name
The name of the database that will contain the new table. The default is the current database.
schema_name
The schema for the table. Specifying schema is optional. If blank, the default schema will be used.
table_name
The name of the new table. To create a local temporary table, precede the table name with #. For explanations and
guidance on temporary tables, see Temporary tables in Azure SQL Data Warehouse.
column_name
The name of a table column.
Column options
COLLATE Windows_collation_name
Specifies the collation for the expression. The collation must be one of the Windows collations supported by SQL
Server. For a list of Windows collations supported by SQL Server, see Windows Collation Name (Transact-SQL ).
NULL | NOT NULL
Specifies whether NULL values are allowed in the column. The default is NULL .
[ CONSTRAINT constraint_name ] DEFAULT constant_expression
Specifies the default column value.

ARGUMENT | EXPLANATION

constraint_name
The optional name for the constraint. The constraint name is unique within the database. The name can be re-used in other databases.

constant_expression
The default value for the column. The expression must be a literal value or a constant. For example, these constant expressions are allowed: 'CA', 4. These are not allowed: 2+3, CURRENT_TIMESTAMP.

Table structure options


For guidance on choosing the type of table, see Indexing tables in Azure SQL Data Warehouse.
CLUSTERED COLUMNSTORE INDEX
Stores the table as a clustered columnstore index. The clustered columnstore index applies to all of the table data.
This is the default for SQL Data Warehouse.
HEAP
Stores the table as a heap. This is the default for Parallel Data Warehouse.
CLUSTERED INDEX ( index_column_name [ ,...n ] )
Stores the table as a clustered index with one or more key columns. This stores the data by row. Use
index_column_name to specify the name of one or more key columns in the index. For more information, see
Rowstore Tables in the General Remarks.
LOCATION = USER_DB
This option is deprecated. It is syntactically accepted, but no longer required and no longer affects behavior.
Table distribution options
To understand how to choose the best distribution method and use distributed tables, see Distributing tables in
Azure SQL Data Warehouse.
DISTRIBUTION = HASH ( distribution_column_name )
Assigns each row to one distribution by hashing the value stored in distribution_column_name. The algorithm is
deterministic which means it always hashes the same value to the same distribution. The distribution column
should be defined as NOT NULL since all rows that have NULL will be assigned to the same distribution.
DISTRIBUTION = ROUND_ROBIN
Distributes the rows evenly across all the distributions in a round-robin fashion. This is the default for SQL Data
Warehouse.
DISTRIBUTION = REPLICATE
Stores one copy of the table on each Compute node. For SQL Data Warehouse the table is stored on a distribution
database on each Compute node. For Parallel Data Warehouse, the table is stored in a SQL Server filegroup that
spans the Compute node. This is the default for Parallel Data Warehouse.
Table partition options
For guidance on using table partitions, see Partitioning tables in SQL Data Warehouse.
PARTITION ( partition_column_name RANGE [ LEFT | RIGHT ] FOR VALUES ( [ boundary_value [,...n] ] ))
Creates one or more table partitions. These are horizontal table slices that allow you to perform operations on
subsets of rows regardless of whether the table is stored as a heap, clustered index, or clustered columnstore
index. Unlike the distribution column, table partitions do not determine the distribution where each row is stored.
Instead, table partitions determine how the rows are grouped and stored within each distribution.

ARGUMENT | EXPLANATION

partition_column_name
Specifies the column that SQL Data Warehouse will use to partition the rows. This column can be any data type. SQL Data Warehouse sorts the partition column values in ascending order. The low-to-high ordering goes from LEFT to RIGHT for the purpose of the RANGE specification.

RANGE LEFT
Specifies that the boundary value belongs to the partition on the left (lower values). The default is LEFT.

RANGE RIGHT
Specifies that the boundary value belongs to the partition on the right (higher values).

FOR VALUES ( boundary_value [,...n] )
Specifies the boundary values for the partition. boundary_value is a constant expression. It cannot be NULL. It must either match or be implicitly convertible to the data type of partition_column_name. It cannot be truncated during implicit conversion such that the size and scale of the value do not match the data type of partition_column_name.

If you specify the PARTITION clause but do not specify a boundary value, SQL Data Warehouse will create a partitioned table with one partition. If applicable, you can split the table into two partitions at a later time (see the sketch after this table).

If you specify one boundary value, the resulting table has two partitions; one for the values lower than the boundary value and one for the values higher than the boundary value. Note that if you move a partition into a non-partitioned table, the non-partitioned table will receive the data, but will not have the partition boundaries in its metadata.

See Create a partitioned table in the Examples section.
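A minimal sketch of splitting such a single-partition table later, assuming a table myTable partitioned on an int column (the boundary value is illustrative):

-- Adds a boundary value, turning one partition into two.
ALTER TABLE myTable SPLIT RANGE (10);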


Data types
SQL Data Warehouse supports the most commonly used data types. Below is a list of the supported data types
along with their details and storage bytes. To better understand data types and how to use them, see Data types
for tables in SQL Data Warehouse.
For a table of data type conversions, see the Implicit Conversions section of CAST and CONVERT (Transact-SQL).
datetimeoffset [(n)]
The default value for n is 7.
datetime2 [(n)]
Same as datetime , except that you can specify the number of fractional seconds. The default value for n is 7 .

N VALUE PRECISION SCALE

0 19 0

1 21 1

2 22 2

3 23 3

4 24 4

5 25 5

6 26 6

7 27 7

datetime
Stores date and time of day with 19 to 23 characters according to the Gregorian calendar. The date can contain
year, month, and day. The time contains hours, minutes, and seconds. As an option, you can display three digits for
fractional seconds. The storage size is 8 bytes.
smalldatetime
Stores a date and a time. Storage size is 4 bytes.
date
Stores a date using a maximum of 10 characters for year, month, and day according to the Gregorian calendar. The
storage size is 3 bytes. Date is stored as an integer.
time [(n)]
The default value for n is 7 .
float [(n)]
Approximate number data type for use with floating point numeric data. Floating point data is approximate, which
means that not all values in the data type range can be represented exactly. n specifies the number of bits used to
store the mantissa of the float in scientific notation. Therefore, n dictates the precision and storage size. If n is
specified, it must be a value between 1 and 53 . The default value of n is 53 .

N VALUE PRECISION STORAGE SIZE

1-24 7 digits 4 bytes

25-53 15 digits 8 bytes

SQL Data Warehouse treats n as one of two possible values. If 1 <= n <= 24, n is treated as 24. If 25 <= n <= 53, n is treated as 53.

The SQL Data Warehouse float data type complies with the ISO standard for all values of n from 1 through 53. The synonym for double precision is float(53).

real [(n)]
The definition of real is the same as float. The ISO synonym for real is float(24) .
decimal [ ( precision [ , scale ] ) ] | numeric [ ( precision [ , scale ] ) ]
Stores fixed precision and scale numbers.
precision
The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal
point. The precision must be a value from 1 through the maximum precision of 38 . The default precision is 18 .
scale
The maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value
from 0 through precision. You can only specify scale if precision is specified. The default scale is 0 ; therefore, 0
<= scale <= precision. Maximum storage sizes vary, based on the precision.

PRECISION STORAGE BYTES

1-9 5

10-19 9

20-28 13

29-38 17

money | smallmoney
Data types that represent currency values.

DATA TYPE STORAGE BYTES

money 8

smallmoney 4

bigint | int | smallint | tinyint


Exact-number data types that use integer data. The storage is shown in the following table.

DATA TYPE STORAGE BYTES

bigint 8

int 4

smallint 2

tinyint 1

bit
An integer data type that can take the value of 1, 0, or NULL. SQL Data Warehouse optimizes storage of bit
columns. If there are 8 or fewer bit columns in a table, the columns are stored as 1 byte. If there are from 9-16 bit
columns, the columns are stored as 2 bytes, and so on.
nvarchar [ ( n | max ) ] -- max applies only to SQL Data Warehouse.
Variable-length Unicode character data. n can be a value from 1 through 4000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). Storage size in bytes is two times the number of characters entered + 2 bytes.
The data entered can be 0 characters in length.
nchar [(n)]
Fixed-length Unicode character data with a length of n characters. n must be a value from 1 through 4000 . The
storage size is two times n bytes.
varchar [ ( n | max ) ] -- max applies only to SQL Data Warehouse.
Variable-length, non-Unicode character data with a length of n bytes. n must be a value from 1 to 8000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of data entered + 2 bytes.
char [(n)]
Fixed-length, non-Unicode character data with a length of n bytes. n must be a value from 1 to 8000 . The
storage size is n bytes. The default for n is 1 .
varbinary [ ( n | max ) ] -- max applies only to SQL Data Warehouse.
Variable-length binary data. n can be a value from 1 to 8000 . max indicates that the maximum storage size is
2^31-1 bytes (2 GB ). The storage size is the actual length of data entered + 2 bytes. The default value for n is 7.
binary [(n)]
Fixed-length binary data with a length of n bytes. n can be a value from 1 to 8000 . The storage size is n bytes.
The default value for n is 7 .
uniqueidentifier
Is a 16-byte GUID.

Permissions
Creating a table requires permission in the db_ddladmin fixed database role, or:
CREATE TABLE permission on the database
ALTER SCHEMA permission on the schema that will contain the table.

Creating a partitioned table requires permission in the db_ddladmin fixed database role, or
ALTER ANY DATASPACE permission
The login that creates a local temporary table receives CONTROL , INSERT , SELECT , and UPDATE permissions
on the table.

General Remarks
For minimum and maximum limits, see SQL Data Warehouse capacity limits.
Determining the number of table partitions
Each user-defined table is divided into multiple smaller tables which are stored in separate locations called
distributions. SQL Data Warehouse uses 60 distributions. In Parallel Data Warehouse, the number of distributions
depends on the number of Compute nodes.
Each distribution contains all table partitions. For example, if there are 60 distributions and four table partitions,
there will be 240 partitions. If the table is a clustered columnstore index, there will be one columnstore index per
partition, which means you will have 240 columnstore indexes.
We recommend using fewer table partitions to ensure each columnstore index has enough rows to take
advantage of the benefits of columnstore indexes. For further guidance, see Partitioning tables in SQL Data
Warehouse and Indexing tables in SQL Data Warehouse
Rowstore table (heap or clustered index)
A rowstore table is a table stored in row-by-row order. It is a heap or clustered index. SQL Data Warehouse
creates all rowstore tables with page compression; this is not user-configurable.
Columnstore table (columnstore index)
A columnstore table is a table stored in column-by-column order. The columnstore index is the technology that
manages data stored in a columnstore table. The clustered columnstore index does not affect how data are
distributed; it affects how the data are stored within each distribution.
To change a rowstore table to a columnstore table, drop all existing indexes on the table and create a clustered
columnstore index. For an example, see CREATE COLUMNSTORE INDEX (Transact-SQL ).
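A minimal sketch of that conversion, assuming a rowstore table myTable whose only index is a clustered index named IX_myTable (both names hypothetical):

-- Drop the existing rowstore clustered index, then store the table as columnstore.
DROP INDEX IX_myTable ON myTable;
CREATE CLUSTERED COLUMNSTORE INDEX cci_myTable ON myTable;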
For more information, see these articles:
Columnstore indexes versioned feature summary
Indexing tables in SQL Data Warehouse
Columnstore Indexes Guide

Limitations and Restrictions


You cannot define a DEFAULT constraint on a distribution column.
Partitions
When using partitions, the partition column cannot have a Unicode-only collation. For example, the following
statement fails.
CREATE TABLE t1 ( c1 varchar(20) COLLATE Divehi_90_CI_AS_KS_WS) WITH (PARTITION (c1 RANGE FOR VALUES (N'')))

If boundary_value is a literal value that must be implicitly converted to the data type in partition_column_name, a
discrepancy will occur. The literal value is displayed through the SQL Data Warehouse system views, but the
converted value is used for Transact-SQL operations.
Temporary tables
Global temporary tables that begin with ## are not supported.
Local temporary tables have the following limitations and restrictions:
They are visible only to the current session. SQL Data Warehouse drops them automatically at the end of the
session. To drop them explicitly, use the DROP TABLE statement.
They cannot be renamed.
They cannot have partitions or views.
Their permissions cannot be changed. GRANT , DENY , and REVOKE statements cannot be used with local
temporary tables.
Database console commands are blocked for temporary tables.
If more than one local temporary table is used within a batch, each must have a unique name. If multiple
sessions are running the same batch and creating the same local temporary table, SQL Data Warehouse
internally appends a numeric suffix to the local temporary table name to maintain a unique name for each local
temporary table.

Locking behavior
Takes an exclusive lock on the table. Takes a shared lock on the DATABASE, SCHEMA, and
SCHEMARESOLUTION objects.

Examples for columns


A. Specify a column collation
In the following example, the table MyTable is created with two different column collations. By default, the column
mycolumn1 has the default collation Latin1_General_100_CI_AS_KS_WS. The column mycolumn2 has the collation
Frisian_100_CS_AS.

CREATE TABLE MyTable
(
mycolumn1 nvarchar,
mycolumn2 nvarchar COLLATE Frisian_100_CS_AS )
WITH ( CLUSTERED COLUMNSTORE INDEX )
;

B. Specify a DEFAULT constraint for a column


The following example shows the syntax to specify a default value for a column. The colA column has a default
constraint named constraint_colA and a default value of 0.

CREATE TABLE MyTable
(
colA int CONSTRAINT constraint_colA DEFAULT 0,
colB nvarchar COLLATE Frisian_100_CS_AS
)
WITH ( CLUSTERED COLUMNSTORE INDEX )
;

Examples for temporary tables


C. Create a local temporary table
The following creates a local temporary table named #myTable. The table is specified with a 3-part name. The
temporary table name starts with a #.

CREATE TABLE AdventureWorks.dbo.#myTable
(
id int NOT NULL,
lastName varchar(20),
zipCode varchar(6)
)
WITH
(
DISTRIBUTION = HASH (id),
CLUSTERED COLUMNSTORE INDEX
)
;

Examples for table structure


D. Create a table with a clustered columnstore index
The following example creates a distributed table with a clustered columnstore index. Each distribution will be
stored as a columnstore.
The clustered columnstore index does not affect how the data is distributed; data is always distributed by row. The
clustered columnstore index affects how the data is stored within each distribution.
CREATE TABLE MyTable
(
colA int CONSTRAINT constraint_colA DEFAULT 0,
colB nvarchar COLLATE Frisian_100_CS_AS
)
WITH
(
DISTRIBUTION = HASH ( colB ),
CLUSTERED COLUMNSTORE INDEX
)
;

Examples for table distribution


E. Create a ROUND_ROBIN table
The following example creates a ROUND_ROBIN table with three columns and without partitions. The data is
spread across all distributions. The table is created with a CLUSTERED COLUMNSTORE INDEX, which gives
better performance and data compression than a heap or rowstore clustered index.

CREATE TABLE myTable
(
id int NOT NULL,
lastName varchar(20),
zipCode varchar(6)
)
WITH ( CLUSTERED COLUMNSTORE INDEX );

F. Create a hash-distributed table


The following example creates the same table as the previous example. However, for this table, rows are
distributed (on the id column) instead of randomly spread like a ROUND_ROBIN table. The table is created with
a CLUSTERED COLUMNSTORE INDEX, which gives better performance and data compression than a heap or
rowstore clustered index.

CREATE TABLE myTable
(
id int NOT NULL,
lastName varchar(20),
zipCode varchar(6)
)
WITH
(
DISTRIBUTION = HASH (id),
CLUSTERED COLUMNSTORE INDEX
);

G. Create a replicated table


The following example creates a replicated table similar to the previous examples. Replicated tables are copied in
full to each Compute node. With this copy on each Compute node, data movement is reduced for queries. This
example is created with a CLUSTERED INDEX, which gives better data compression than a heap; a small table like
this one may not contain enough rows to achieve good CLUSTERED COLUMNSTORE INDEX compression.
CREATE TABLE myTable
(
id int NOT NULL,
lastName varchar(20),
zipCode varchar(6)
)
WITH
(
DISTRIBUTION = REPLICATE,
CLUSTERED INDEX (lastName)
);

Examples for table partitions


H. Create a partitioned table
The following example creates a table similar to the one in example E, with the addition of RANGE LEFT
partitioning on the id column. It specifies four partition boundary values, which results in five partitions.

CREATE TABLE myTable
(
id int NOT NULL,
lastName varchar(20),
zipCode int)
WITH
(
PARTITION ( id RANGE LEFT FOR VALUES (10, 20, 30, 40 )),
CLUSTERED COLUMNSTORE INDEX
)
;

In this example, data will be sorted into the following partitions:


Partition 1: col <= 10
Partition 2: 10 < col <= 20
Partition 3: 20 < col <= 30
Partition 4: 30 < col <= 40
Partition 5: 40 < col
If this same table were partitioned RANGE RIGHT instead of RANGE LEFT (the default), the data would be sorted
into the following partitions:
Partition 1: col < 10
Partition 2: 10 <= col < 20
Partition 3: 20 <= col < 30
Partition 4: 30 <= col < 40
Partition 5: 40 <= col
I. Create a partitioned table with one partition
The following example creates a partitioned table with one partition. It does not specify any boundary values,
which results in one partition.
CREATE TABLE myTable (
id int NOT NULL,
lastName varchar(20),
zipCode int)
WITH
(
PARTITION ( id RANGE LEFT FOR VALUES ( )),
CLUSTERED COLUMNSTORE INDEX
)
;

J. Create a table with date partitioning


The following example creates a new table named myTable , with partitioning on a date column. By using
RANGE RIGHT and dates for the boundary values, it puts a month of data in each partition.

CREATE TABLE myTable (
l_orderkey bigint,
l_partkey bigint,
l_suppkey bigint,
l_linenumber bigint,
l_quantity decimal(15,2),
l_extendedprice decimal(15,2),
l_discount decimal(15,2),
l_tax decimal(15,2),
l_returnflag char(1),
l_linestatus char(1),
l_shipdate date,
l_commitdate date,
l_receiptdate date,
l_shipinstruct char(25),
l_shipmode char(10),
l_comment varchar(44))
WITH
(
DISTRIBUTION = HASH (l_orderkey),
CLUSTERED COLUMNSTORE INDEX,
PARTITION ( l_shipdate RANGE RIGHT FOR VALUES
(
'1992-01-01','1992-02-01','1992-03-01','1992-04-01','1992-05-01',
'1992-06-01','1992-07-01','1992-08-01','1992-09-01','1992-10-01',
'1992-11-01','1992-12-01','1993-01-01','1993-02-01','1993-03-01',
'1993-04-01','1993-05-01','1993-06-01','1993-07-01','1993-08-01',
'1993-09-01','1993-10-01','1993-11-01','1993-12-01','1994-01-01',
'1994-02-01','1994-03-01','1994-04-01','1994-05-01','1994-06-01',
'1994-07-01','1994-08-01','1994-09-01','1994-10-01','1994-11-01',
'1994-12-01'
))
);

See also
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
DROP TABLE (Transact-SQL)
ALTER TABLE (Transact-SQL)
CREATE TABLE (SQL Graph)

THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new SQL graph table as either a NODE or an EDGE table.

NOTE
For standard Transact-SQL statements, see CREATE TABLE (Transact-SQL).

Transact-SQL Syntax Conventions

Syntax
CREATE TABLE
[ database_name . [ schema_name ] . | schema_name . ] table_name
( { <column_definition> } [ ,...n ] )
AS [ NODE | EDGE ]
[ ; ]

Arguments
This document lists only arguments pertaining to SQL graph. For a full list and description of supported
arguments, see CREATE TABLE (Transact-SQL).
database_name
Is the name of the database in which the table is created. database_name must specify the name of an existing
database. If not specified, database_name defaults to the current database. The login for the current connection
must be associated with an existing user ID in the database specified by database_name, and that user ID must
have CREATE TABLE permissions.
schema_name
Is the name of the schema to which the new table belongs.
table_name
Is the name of the node or edge table. Table names must follow the rules for identifiers. table_name can be a
maximum of 128 characters, except for local temporary table names (names prefixed with a single number sign (#))
that cannot exceed 116 characters.
NODE
Creates a node table.
EDGE
Creates an edge table.

Remarks
Creating a temporary table as node or edge table is not supported.
Creating a node or edge table as a temporal table is not supported.
Stretch database is not supported for node or edge table.
Node or edge tables cannot be external tables (no polybase support for graph tables).

Examples
A. Create a NODE table
The following example shows how to create a NODE table.

CREATE TABLE Person (
ID INTEGER PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100)
) AS NODE;

B. Create an EDGE table


The following examples show how to create EDGE tables.

CREATE TABLE friends (
id integer PRIMARY KEY,
start_date date
) AS EDGE;

-- Create a likes edge table; this table does not have any user-defined attributes
CREATE TABLE likes AS EDGE;

See Also
ALTER TABLE (Transact-SQL)
INSERT (SQL Graph)
Graph processing with SQL Server 2017
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
CREATE TABLE AS SELECT (CTAS) is one of the most important T-SQL features available. It is a fully parallelized
operation that creates a new table based on the output of a SELECT statement. CTAS is the simplest and fastest
way to create a copy of a table.
For example, use CTAS to:
Re-create a table with a different hash distribution column.
Re-create a table as replicated.
Create a columnstore index on just some of the columns in the table.
Query or import external data.

NOTE
Since CTAS adds to the capabilities of creating a table, this topic tries not to repeat the CREATE TABLE topic. Instead, it
describes the differences between the CTAS and CREATE TABLE statements. For the CREATE TABLE details, see CREATE TABLE
(Azure SQL Data Warehouse) statement.

Transact-SQL Syntax Conventions

Syntax
CREATE TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
[ ( column_name [ ,...n ] ) ]
WITH (
<distribution_option> -- required
[ , <table_option> [ ,...n ] ]
)
AS <select_statement>
[;]

<distribution_option> ::=
{
DISTRIBUTION = HASH ( distribution_column_name )
| DISTRIBUTION = ROUND_ROBIN
| DISTRIBUTION = REPLICATE
}

<table_option> ::=
{
CLUSTERED COLUMNSTORE INDEX --default for SQL Data Warehouse
| HEAP --default for Parallel Data Warehouse
| CLUSTERED INDEX ( { index_column_name [ ASC | DESC ] } [ ,...n ] ) --default is ASC
}
| PARTITION ( partition_column_name RANGE [ LEFT | RIGHT ] --default is LEFT
FOR VALUES ( [ boundary_value [,...n] ] ) )

<select_statement> ::=
[ WITH <common_table_expression> [ ,...n ] ]
SELECT select_criteria

Arguments
For details, see the Arguments section in CREATE TABLE.
Column options
column_name [ ,... n ]
Column names do not allow the column options mentioned in CREATE TABLE. Instead, you can provide an
optional list of one or more column names for the new table. The columns in the new table will use the names you
specify. When you specify column names, the number of columns in the column list must match the number of
columns in the select results. If you don't specify any column names, the new target table will use the column
names in the select statement results.
You cannot specify any other column options such as data types, collation, or nullability. Each of these attributes is
derived from the results of the SELECT statement. However, you can use the SELECT statement to change the
attributes. For an example, see Use CTAS to change column attributes.
Table distribution options
DISTRIBUTION = HASH ( distribution_column_name ) | ROUND_ROBIN | REPLICATE
The CTAS statement requires a distribution option and does not have default values. This is different from
CREATE TABLE which has defaults.
For details and to understand how to choose the best distribution column, see the Table distribution options
section in CREATE TABLE.
Table partition options
The CTAS statement creates a non-partitioned table by default, even if the source table is partitioned. To create a
partitioned table with the CTAS statement, you must specify the partition option.
For details, see the Table partition options section in CREATE TABLE.
Select options
The select statement is the fundamental difference between CTAS and CREATE TABLE.
WITH common_table_expression
Specifies a temporary named result set, known as a common table expression (CTE ). For more information, see
WITH common_table_expression (Transact-SQL ).
SELECT select_criteria
Populates the new table with the results from a SELECT statement. select_criteria is the body of the SELECT
statement that determines which data to copy to the new table. For information about SELECT statements, see
SELECT (Transact-SQL ).

Permissions
CTAS requires SELECT permission on any objects referenced in the select_criteria.
For permissions to create a table, see Permissions in CREATE TABLE.

General Remarks
For details, see General Remarks in CREATE TABLE.

Limitations and Restrictions


Azure SQL Data Warehouse does not yet support auto create or auto update statistics. In order to get the best
performance from your queries, it's important to create statistics on all columns of all tables after you run CTAS
and after any substantial changes occur in the data. For more information, see CREATE STATISTICS (Transact-SQL).
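For example, a single-column statistics object can be created like this (the table and column are taken from the examples below):

-- Create statistics on a column used in joins or filters.
CREATE STATISTICS stats_ProductKey ON dbo.FactInternetSales (ProductKey);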
SET ROWCOUNT (Transact-SQL) has no effect on CTAS. To achieve a similar behavior, use TOP (Transact-SQL).
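A minimal sketch of the TOP-based equivalent (the source table reuses the examples below; the new table name is hypothetical):

-- SET ROWCOUNT 10 would be ignored by CTAS; use TOP in the SELECT instead.
CREATE TABLE dbo.FactInternetSales_sample
WITH (DISTRIBUTION = ROUND_ROBIN)
AS SELECT TOP 10 * FROM dbo.FactInternetSales;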
For details, see Limitations and Restrictions in CREATE TABLE.

Locking Behavior
For details, see Locking Behavior in CREATE TABLE.

Performance
For a hash-distributed table, you can use CTAS to choose a different distribution column to achieve better
performance for joins and aggregations. If choosing a different distribution column is not your goal, you will have
the best CTAS performance if you specify the same distribution column since this will avoid re-distributing the
rows.
If you are using CTAS to create a table and performance is not a factor, you can specify ROUND_ROBIN to avoid
having to decide on a distribution column.
To avoid data movement in subsequent queries, you can specify REPLICATE at the cost of increased storage for
loading a full copy of the table on each Compute node.

Examples for copying a table


A. Use CTAS to copy a table
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
Perhaps one of the most common uses of CTAS is creating a copy of a table so that you can change the DDL. If,
for example, you originally created your table as ROUND_ROBIN and now want to change it to a table distributed on a
column, CTAS is how you would change the distribution column. CTAS can also be used to change partitioning,
indexing, or column types.
Let's say you created this table with the default distribution type of ROUND_ROBIN, since no distribution
column was specified in the CREATE TABLE.

CREATE TABLE FactInternetSales
(
ProductKey int NOT NULL,
OrderDateKey int NOT NULL,
DueDateKey int NOT NULL,
ShipDateKey int NOT NULL,
CustomerKey int NOT NULL,
PromotionKey int NOT NULL,
CurrencyKey int NOT NULL,
SalesTerritoryKey int NOT NULL,
SalesOrderNumber nvarchar(20) NOT NULL,
SalesOrderLineNumber tinyint NOT NULL,
RevisionNumber tinyint NOT NULL,
OrderQuantity smallint NOT NULL,
UnitPrice money NOT NULL,
ExtendedAmount money NOT NULL,
UnitPriceDiscountPct float NOT NULL,
DiscountAmount float NOT NULL,
ProductStandardCost money NOT NULL,
TotalProductCost money NOT NULL,
SalesAmount money NOT NULL,
TaxAmt money NOT NULL,
Freight money NOT NULL,
CarrierTrackingNumber nvarchar(25),
CustomerPONumber nvarchar(25)
);

Now you want to create a new copy of this table with a clustered columnstore index so that you can take
advantage of the performance of clustered columnstore tables. You also want to distribute this table on
ProductKey since you are anticipating joins on this column and want to avoid data movement during joins on
ProductKey. Lastly you also want to add partitioning on OrderDateKey so that you can quickly delete old data by
dropping old partitions. Here is the CTAS statement which would copy your old table into a new table.

CREATE TABLE FactInternetSales_new
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = HASH(ProductKey),
PARTITION
(
OrderDateKey RANGE RIGHT FOR VALUES
(
20000101,20010101,20020101,20030101,20040101,20050101,20060101,20070101,20080101,20090101,
20100101,20110101,20120101,20130101,20140101,20150101,20160101,20170101,20180101,20190101,
20200101,20210101,20220101,20230101,20240101,20250101,20260101,20270101,20280101,20290101
)
)
)
AS SELECT * FROM FactInternetSales;

Finally you can rename your tables to swap in your new table and then drop your old table.
RENAME OBJECT FactInternetSales TO FactInternetSales_old;
RENAME OBJECT FactInternetSales_new TO FactInternetSales;

DROP TABLE FactInternetSales_old;

Examples for column options


B. Use CTAS to change column attributes
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
This example uses CTAS to change data types, nullability, and collation for several columns in the DimCustomer2
table.

-- Original table
CREATE TABLE [dbo].[DimCustomer2] (
[CustomerKey] int NOT NULL,
[GeographyKey] int NULL,
[CustomerAlternateKey] nvarchar(15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
)
WITH (CLUSTERED COLUMNSTORE INDEX, DISTRIBUTION = HASH([CustomerKey]));

-- CTAS example to change data types, nullability, and column collations
CREATE TABLE test
WITH (HEAP, DISTRIBUTION = ROUND_ROBIN)
AS
SELECT
CustomerKey AS CustomerKeyNoChange,
CustomerKey*1 AS CustomerKeyChangeNullable,
CAST(CustomerKey AS DECIMAL(10,2)) AS CustomerKeyChangeDataTypeNullable,
ISNULL(CAST(CustomerKey AS DECIMAL(10,2)),0) AS CustomerKeyChangeDataTypeNotNullable,
GeographyKey AS GeographyKeyNoChange,
ISNULL(GeographyKey,0) AS GeographyKeyChangeNotNullable,
CustomerAlternateKey AS CustomerAlternateKeyNoChange,
CASE WHEN CustomerAlternateKey = CustomerAlternateKey
THEN CustomerAlternateKey END AS CustomerAlternateKeyNullable,
CustomerAlternateKey COLLATE Latin1_General_CS_AS_KS_WS AS CustomerAlternateKeyChangeCollation
FROM [dbo].[DimCustomer2]

-- Resulting table
CREATE TABLE [dbo].[test] (
[CustomerKeyNoChange] int NOT NULL,
[CustomerKeyChangeNullable] int NULL,
[CustomerKeyChangeDataTypeNullable] decimal(10, 2) NULL,
[CustomerKeyChangeDataTypeNotNullable] decimal(10, 2) NOT NULL,
[GeographyKeyNoChange] int NULL,
[GeographyKeyChangeNotNullable] int NOT NULL,
[CustomerAlternateKeyNoChange] nvarchar(15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[CustomerAlternateKeyNullable] nvarchar(15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[CustomerAlternateKeyChangeCollation] nvarchar(15) COLLATE Latin1_General_CS_AS_KS_WS NOT NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN);

As a final step, you can use RENAME (Transact-SQL) to switch the table names. This makes DimCustomer2
the new table.

RENAME OBJECT DimCustomer2 TO DimCustomer2_old;
RENAME OBJECT test TO DimCustomer2;

DROP TABLE DimCustomer2_old;


Examples for table distribution
C. Use CTAS to change the distribution method for a table
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
This simple example shows how to change the distribution method for a table. To show the mechanics of how to
do this, it changes a hash-distributed table to round-robin and then changes the round-robin table back to hash
distributed. The final table matches the original table.
In most cases you won't need to change a hash-distributed table to a round-robin table. More often, you might
need to change a round-robin table to a hash distributed table. For example, you might initially load a new table
as round-robin and then later move it to a hash-distributed table to get better join performance.
This example uses the AdventureWorksDW sample database. To load the SQL Data Warehouse version, see Load
sample data into SQL Data Warehouse

-- DimSalesTerritory is hash-distributed.
-- Copy it to a round-robin table.
CREATE TABLE [dbo].[myTable]
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = ROUND_ROBIN
)
AS SELECT * FROM [dbo].[DimSalesTerritory];

-- Switch table names
RENAME OBJECT [dbo].[DimSalesTerritory] to [DimSalesTerritory_old];
RENAME OBJECT [dbo].[myTable] TO [DimSalesTerritory];

DROP TABLE [dbo].[DimSalesTerritory_old];

Next, change it back to a hash distributed table.

-- You just made DimSalesTerritory a round-robin table.
-- Change it back to the original hash-distributed table.
CREATE TABLE [dbo].[myTable]
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = HASH(SalesTerritoryKey)
)
AS SELECT * FROM [dbo].[DimSalesTerritory];

-- Switch table names
RENAME OBJECT [dbo].[DimSalesTerritory] to [DimSalesTerritory_old];
RENAME OBJECT [dbo].[myTable] TO [DimSalesTerritory];

DROP TABLE [dbo].[DimSalesTerritory_old];

D. Use CTAS to convert a table to a replicated table


Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
This example shows how to convert round-robin or hash-distributed tables to a replicated table. This particular
example takes the previous method of changing the distribution type one step further. Since DimSalesTerritory is
a dimension and likely a smaller table, you can choose to re-create the table as replicated to avoid data movement
when joining to other tables.
-- DimSalesTerritory is hash-distributed.
-- Copy it to a replicated table.
CREATE TABLE [dbo].[myTable]
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = REPLICATE
)
AS SELECT * FROM [dbo].[DimSalesTerritory];

-- Switch table names
RENAME OBJECT [dbo].[DimSalesTerritory] to [DimSalesTerritory_old];
RENAME OBJECT [dbo].[myTable] TO [DimSalesTerritory];

DROP TABLE [dbo].[DimSalesTerritory_old];

E. Use CTAS to create a table with fewer columns


Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
The following example creates a round-robin distributed table named myTable (c, ln) . The new table only has
two columns. It uses the column aliases in the SELECT statement for the names of the columns.

CREATE TABLE myTable
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = ROUND_ROBIN
)
AS SELECT CustomerKey AS c, LastName AS ln
FROM dimCustomer;

Examples for query hints


F. Use a Query Hint with CREATE TABLE AS SELECT (CTAS )
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
This query shows the basic syntax for using a query join hint with the CTAS statement. After the query is
submitted, SQL Data Warehouse applies the hash join strategy when it generates the query plan for each
individual distribution. For more information on the hash join query hint, see OPTION Clause (Transact-SQL ).

CREATE TABLE dbo.FactInternetSalesNew
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = ROUND_ROBIN
)
AS SELECT T1.* FROM dbo.FactInternetSales T1 JOIN dbo.DimCustomer T2
ON ( T1.CustomerKey = T2.CustomerKey )
OPTION ( HASH JOIN );

Examples for external tables


G. Use CTAS to import data from Azure Blob storage
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
To import data from an external table, simply use CREATE TABLE AS SELECT to select from the external table.
The syntax to select data from an external table into SQL Data Warehouse is the same as the syntax for selecting
data from a regular table.
The following example defines an external table on data in an Azure blob storage account. It then uses CREATE
TABLE AS SELECT to select from the external table. This imports the data from Azure blob storage text-delimited
files and stores the data into a new SQL Data Warehouse table.

--Use your own processes to create the text-delimited files on Azure blob storage.
--Create the external table called ClickStream.
CREATE EXTERNAL TABLE ClickStreamExt (
url varchar(50),
event_date date,
user_IP varchar(50)
)
WITH (
LOCATION='/logs/clickstream/2015/',
DATA_SOURCE = MyAzureStorage,
FILE_FORMAT = TextFileFormat)
;

--Use CREATE TABLE AS SELECT to import the Azure blob storage data into a new
--SQL Data Warehouse table called ClickStreamData
CREATE TABLE ClickStreamData
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = HASH (user_IP)
)
AS SELECT * FROM ClickStreamExt
;

H. Use CTAS to import Hadoop data from an external table


Applies to: Parallel Data Warehouse
To import data from an external table, simply use CREATE TABLE AS SELECT to select from the external table.
The syntax to select data from an external table into Parallel Data Warehouse is the same as the syntax for
selecting data from a regular table.
The following example defines an external table on a Hadoop cluster. It then uses CREATE TABLE AS SELECT to
select from the external table. This imports the data from Hadoop text-delimited files and stores the data into a
new Parallel Data Warehouse table.
-- Create the external table called ClickStream.
CREATE EXTERNAL TABLE ClickStreamExt (
url varchar(50),
event_date date,
user_IP varchar(50)
)
WITH (
LOCATION = 'hdfs://MyHadoop:5000/tpch1GB/employee.tbl',
FORMAT_OPTIONS ( FIELD_TERMINATOR = '|')
)
;

-- Use your own processes to create the Hadoop text-delimited files
-- on the Hadoop Cluster.

-- Use CREATE TABLE AS SELECT to import the Hadoop data into a new
-- table called ClickStreamPDW
CREATE TABLE ClickStreamPDW
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = HASH (user_IP)
)
AS SELECT * FROM ClickStreamExt
;

Examples using CTAS to replace SQL Server code


Use CTAS to work around some unsupported features. Besides being able to run your code on the data
warehouse, rewriting existing code to use CTAS will usually improve performance. This is a result of its fully
parallelized design.

NOTE
Try to think "CTAS first". If you think you can solve a problem using CTAS, then that is generally the best way to approach it,
even if you are writing more data as a result.

I. Use CTAS instead of SELECT..INTO
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
SQL Server code typically uses SELECT..INTO to populate a table with the results of a SELECT statement. This is
an example of a SQL Server SELECT..INTO statement.

SELECT *
INTO #tmp_fct
FROM [dbo].[FactInternetSales]

This syntax is not supported in SQL Data Warehouse and Parallel Data Warehouse. This example shows how to
rewrite the previous SELECT..INTO statement as a CTAS statement. You can choose any of the DISTRIBUTION
options described in the CTAS syntax. This example uses the ROUND_ROBIN distribution method.
CREATE TABLE #tmp_fct
WITH
(
DISTRIBUTION = ROUND_ROBIN
)
AS
SELECT *
FROM [dbo].[FactInternetSales]
;

J. Use CTAS and implicit joins to replace ANSI joins in the FROM clause of an UPDATE statement
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
You may find you have a complex update that joins more than two tables together using ANSI joining syntax to
perform the UPDATE or DELETE.
Imagine you had to update this table:

CREATE TABLE [dbo].[AnnualCategorySales]
( [EnglishProductCategoryName] NVARCHAR(50) NOT NULL
, [CalendarYear] SMALLINT NOT NULL
, [TotalSalesAmount] MONEY NOT NULL
)
WITH
(
DISTRIBUTION = ROUND_ROBIN
)
;

The original query might have looked something like this:

UPDATE acs
SET [TotalSalesAmount] = [fis].[TotalSalesAmount]
FROM [dbo].[AnnualCategorySales] AS acs
JOIN (
SELECT [EnglishProductCategoryName]
, [CalendarYear]
, SUM([SalesAmount]) AS [TotalSalesAmount]
FROM [dbo].[FactInternetSales] AS s
JOIN [dbo].[DimDate] AS d ON s.[OrderDateKey] = d.[DateKey]
JOIN [dbo].[DimProduct] AS p ON s.[ProductKey] = p.[ProductKey]
JOIN [dbo].[DimProductSubCategory] AS u ON p.[ProductSubcategoryKey] = u.[ProductSubcategoryKey]
JOIN [dbo].[DimProductCategory] AS c ON u.[ProductCategoryKey] = c.[ProductCategoryKey]
WHERE [CalendarYear] = 2004
GROUP BY
[EnglishProductCategoryName]
, [CalendarYear]
) AS fis
ON [acs].[EnglishProductCategoryName] = [fis].[EnglishProductCategoryName]
AND [acs].[CalendarYear] = [fis].[CalendarYear]
;

Since SQL Data Warehouse does not support ANSI joins in the FROM clause of an UPDATE statement, you cannot
port this SQL Server code over without changing it slightly.
You can use a combination of a CTAS and an implicit join to replace this code:
-- Create an interim table
CREATE TABLE CTAS_acs
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT ISNULL(CAST([EnglishProductCategoryName] AS NVARCHAR(50)),0) AS [EnglishProductCategoryName]
, ISNULL(CAST([CalendarYear] AS SMALLINT),0) AS [CalendarYear]
, ISNULL(CAST(SUM([SalesAmount]) AS MONEY),0) AS [TotalSalesAmount]
FROM [dbo].[FactInternetSales] AS s
JOIN [dbo].[DimDate] AS d ON s.[OrderDateKey] = d.[DateKey]
JOIN [dbo].[DimProduct] AS p ON s.[ProductKey] = p.[ProductKey]
JOIN [dbo].[DimProductSubCategory] AS u ON p.[ProductSubcategoryKey] = u.[ProductSubcategoryKey]
JOIN [dbo].[DimProductCategory] AS c ON u.[ProductCategoryKey] = c.[ProductCategoryKey]
WHERE [CalendarYear] = 2004
GROUP BY
[EnglishProductCategoryName]
, [CalendarYear]
;

-- Use an implicit join to perform the update
UPDATE AnnualCategorySales
SET AnnualCategorySales.TotalSalesAmount = CTAS_ACS.TotalSalesAmount
FROM CTAS_acs
WHERE CTAS_acs.[EnglishProductCategoryName] = AnnualCategorySales.[EnglishProductCategoryName]
AND CTAS_acs.[CalendarYear] = AnnualCategorySales.[CalendarYear]
;

--Drop the interim table
DROP TABLE CTAS_acs
;

K. Use CTAS to specify which data to keep instead of using ANSI joins in the FROM clause of a DELETE
statement
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
Sometimes the best approach for deleting data is to use CTAS. Rather than deleting the data, simply select the
data you want to keep. This is especially true for DELETE statements that use ANSI joining syntax, since SQL Data
Warehouse does not support ANSI joins in the FROM clause of a DELETE statement.
An example of a converted DELETE statement is available below:

CREATE TABLE dbo.DimProduct_upsert
WITH
( DISTRIBUTION = HASH (ProductKey)
, CLUSTERED INDEX (ProductKey)
)
AS -- Select Data you wish to keep
SELECT p.ProductKey
, p.EnglishProductName
, p.Color
FROM dbo.DimProduct p
RIGHT JOIN dbo.stg_DimProduct s
ON p.ProductKey = s.ProductKey
;

RENAME OBJECT dbo.DimProduct TO DimProduct_old;
RENAME OBJECT dbo.DimProduct_upsert TO DimProduct;

L. Use CTAS to simplify merge statements
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
Merge statements can be replaced, at least in part, by using CTAS. You can consolidate the INSERT and the
UPDATE into a single statement. Any deleted records would need to be closed off in a second statement.
An example of an UPSERT is available below:

CREATE TABLE dbo.[DimProduct_upsert]
WITH
( DISTRIBUTION = HASH([ProductKey])
, CLUSTERED INDEX ([ProductKey])
)
AS
-- New rows and new versions of rows
SELECT s.[ProductKey]
, s.[EnglishProductName]
, s.[Color]
FROM dbo.[stg_DimProduct] AS s
UNION ALL
-- Keep rows that are not being touched
SELECT p.[ProductKey]
, p.[EnglishProductName]
, p.[Color]
FROM dbo.[DimProduct] AS p
WHERE NOT EXISTS
( SELECT *
FROM [dbo].[stg_DimProduct] s
WHERE s.[ProductKey] = p.[ProductKey]
)
;

RENAME OBJECT dbo.[DimProduct] TO [DimProduct_old];
RENAME OBJECT dbo.[DimProduct_upsert] TO [DimProduct];

M. Explicitly state data type and nullability of output
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
When migrating SQL Server code to SQL Data Warehouse, you might find you run across this type of coding
pattern:

DECLARE @d decimal(7,2) = 85.455
      , @f float(24) = 85.455

CREATE TABLE result
(result DECIMAL(7,2) NOT NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN)

INSERT INTO result
SELECT @d*@f
;

Instinctively you might think you should migrate this code to a CTAS and you would be correct. However, there is
a hidden issue here.
The following code does NOT yield the same result:

DECLARE @d decimal(7,2) = 85.455
      , @f float(24) = 85.455
;

CREATE TABLE ctas_r
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT @d*@f as result
;
Notice that the column "result" carries forward the data type and nullability values of the expression. This can lead
to subtle variances in values if you aren't careful.
Try the following as an example:

SELECT result,result*@d
from result
;

SELECT result,result*@d
from ctas_r
;

The value stored for result is different. As the persisted value in the result column is used in other expressions,
the error becomes even more significant.

This is particularly important for data migrations. Even though the second query is arguably more accurate, there
is a problem. The data would be different compared to the source system, and that leads to questions of integrity
in the migration. This is one of those rare cases where the "wrong" answer is actually the right one!
The reason we see this disparity between the two results is implicit type casting. In the first example, the table
defines the column's data type, so an implicit type conversion occurs when the row is inserted. In the second
example, there is no implicit type conversion because the expression defines the data type of the column. Notice
also that the column in the second example has been defined as a NULLable column, whereas in the first example
it has not: in the first example column nullability was explicitly defined when the table was created, while in the
second example it was left to the expression, which by default results in a NULL definition.
To resolve these issues you must explicitly set the type conversion and nullability in the SELECT portion of the
CTAS statement. You cannot set these properties in the create table part.

The example below demonstrates how to fix the code:

DECLARE @d decimal(7,2) = 85.455
      , @f float(24) = 85.455

CREATE TABLE ctas_r
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT ISNULL(CAST(@d*@f AS DECIMAL(7,2)),0) AS result
;

Note the following:
Either CAST or CONVERT could have been used.
ISNULL, not COALESCE, is used to force nullability.
ISNULL is the outermost function.
The second part of the ISNULL is a constant, i.e. 0.
NOTE
For the nullability to be correctly set, it is vital to use ISNULL and not COALESCE. COALESCE is not a deterministic
function, and so the result of the expression will always be NULLable. ISNULL is different: it is deterministic.
Therefore, when the second part of the ISNULL function is a constant or a literal, the resulting value will be NOT NULL.

This tip is not just useful for ensuring the integrity of your calculations. It is also important for table partition
switching. Imagine you have this table defined as your fact:

CREATE TABLE [dbo].[Sales]
(
[date] INT NOT NULL
, [product] INT NOT NULL
, [store] INT NOT NULL
, [quantity] INT NOT NULL
, [price] MONEY NOT NULL
, [amount] MONEY NOT NULL
)
WITH
( DISTRIBUTION = HASH([product])
, PARTITION ( [date] RANGE RIGHT FOR VALUES
(20000101,20010101,20020101
,20030101,20040101,20050101
)
)
)
;

However, the amount field is a calculated expression; it is not part of the source data.
To create your partitioned dataset you might want to do this:

CREATE TABLE [dbo].[Sales_in]
WITH
( DISTRIBUTION = HASH([product])
, PARTITION ( [date] RANGE RIGHT FOR VALUES
(20000101,20010101
)
)
)
AS
SELECT
[date]
, [product]
, [store]
, [quantity]
, [price]
, [quantity]*[price] AS [amount]
FROM [stg].[source]
OPTION (LABEL = 'CTAS : Partition IN table : Create')
;

The query would run perfectly fine. The problem comes when you try to perform the partition switch: the table
definitions do not match. To make the table definitions match, the CTAS needs to be modified.
CREATE TABLE [dbo].[Sales_in]
WITH
( DISTRIBUTION = HASH([product])
, PARTITION ( [date] RANGE RIGHT FOR VALUES
(20000101,20010101
)
)
)
AS
SELECT
[date]
, [product]
, [store]
, [quantity]
, [price]
, ISNULL(CAST([quantity]*[price] AS MONEY),0) AS [amount]
FROM [stg].[source]
OPTION (LABEL = 'CTAS : Partition IN table : Create');

You can see therefore that type consistency and maintaining nullability properties on a CTAS is a good
engineering best practice. It helps to maintain integrity in your calculations and also ensures that partition
switching is possible.

See Also
CREATE EXTERNAL DATA SOURCE (Transact-SQL)
CREATE EXTERNAL FILE FORMAT (Transact-SQL)
CREATE EXTERNAL TABLE (Transact-SQL)
CREATE EXTERNAL TABLE AS SELECT (Transact-SQL)
CREATE TABLE (Azure SQL Data Warehouse)
DROP TABLE (Transact-SQL)
DROP EXTERNAL TABLE (Transact-SQL)
ALTER TABLE (Transact-SQL)
ALTER EXTERNAL TABLE (Transact-SQL)
CREATE TABLE (Transact-SQL) IDENTITY (Property)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an identity column in a table. This property is used with the CREATE TABLE and ALTER TABLE Transact-
SQL statements.

NOTE
The IDENTITY property is different from the SQL-DMO Identity property that exposes the row identity property of a
column.

Transact-SQL Syntax Conventions

Syntax
IDENTITY [ (seed , increment) ]

Arguments
seed
Is the value that is used for the very first row loaded into the table.
increment
Is the incremental value that is added to the identity value of the previous row that was loaded.
You must specify both the seed and increment or neither. If neither is specified, the default is (1,1).
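
The seed and increment need not be (1,1). A minimal sketch with a non-default seed and increment (the table and
column names are hypothetical): the first row loaded receives 1000, and each subsequent row adds 10.

CREATE TABLE dbo.SupportTickets
(
id_num int IDENTITY (1000, 10) NOT NULL,
subject nvarchar(100) NOT NULL
);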

Remarks
Identity columns can be used for generating key values. The identity property on a column guarantees the
following:
Each new value is generated based on the current seed & increment.
Each new value for a particular transaction is different from other concurrent transactions on the table.
The identity property on a column does not guarantee the following:
Uniqueness of the value – Uniqueness must be enforced by using a PRIMARY KEY or UNIQUE
constraint or UNIQUE index.
Consecutive values within a transaction – A transaction inserting multiple rows is not guaranteed to get
consecutive values for the rows because other concurrent inserts might occur on the table. If values must
be consecutive then the transaction should use an exclusive lock on the table or use the SERIALIZABLE
isolation level.
Consecutive values after server restart or other failures – SQL Server might cache identity values for
performance reasons and some of the assigned values can be lost during a database failure or server
restart. This can result in gaps in the identity value upon insert. If gaps are not acceptable, then the
application should use its own mechanism to generate key values. Using a sequence generator with the
NOCACHE option can limit the gaps to transactions that are never committed (a sketch follows below).
Reuse of values – For a given identity property with specific seed/increment, the identity values are not
reused by the engine. If a particular insert statement fails or if the insert statement is rolled back then the
consumed identity values are lost and will not be generated again. This can result in gaps when the
subsequent identity values are generated.
These restrictions are part of the design in order to improve performance, and because they are acceptable
in many common situations. If you cannot use identity values because of these restrictions, create a
separate table holding a current value and manage access to the table and number assignment with your
application.
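
As a minimal sketch of the sequence-based workaround mentioned above (object names are hypothetical):

-- NO CACHE prevents unconsumed cached values from being lost on a
-- restart, limiting gaps to transactions that are never committed.
CREATE SEQUENCE dbo.OrderNumbers
AS int
START WITH 1
INCREMENT BY 1
NO CACHE;

-- Draw key values from the sequence instead of an identity column.
SELECT NEXT VALUE FOR dbo.OrderNumbers AS OrderNumber;
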
If a table with an identity column is published for replication, the identity column must be managed in a
way that is appropriate for the type of replication used. For more information, see Replicate Identity
Columns.
Only one identity column can be created per table.
In memory-optimized tables the seed and increment must be set to 1,1. Setting the seed or increment to a
value other than 1 results in the following error: The use of seed and increment values other than 1 is not
supported with memory optimized tables.

Examples
A. Using the IDENTITY property with CREATE TABLE
The following example creates a new table using the IDENTITY property for an automatically incrementing
identification number.

USE AdventureWorks2012;

IF OBJECT_ID ('dbo.new_employees', 'U') IS NOT NULL
DROP TABLE new_employees;
GO
CREATE TABLE new_employees
(
id_num int IDENTITY(1,1),
fname varchar (20),
minit char(1),
lname varchar(30)
);

INSERT new_employees
(fname, minit, lname)
VALUES
('Karin', 'F', 'Josephs');

INSERT new_employees
(fname, minit, lname)
VALUES
('Pirkko', 'O', 'Koskitalo');

B. Using generic syntax for finding gaps in identity values
The following example shows generic syntax for finding gaps in identity values when data is removed.
NOTE
The first part of the following Transact-SQL script is designed for illustration only. You can run the Transact-SQL
script that starts with the comment: -- Create the img table.

-- Here is the generic syntax for finding identity value gaps in data.
-- The illustrative example starts here.
SET IDENTITY_INSERT tablename ON;
DECLARE @minidentval column_type;
DECLARE @maxidentval column_type;
DECLARE @nextidentval column_type;
SELECT @minidentval = MIN($IDENTITY), @maxidentval = MAX($IDENTITY)
FROM tablename
IF @minidentval = IDENT_SEED('tablename')
SELECT @nextidentval = MIN($IDENTITY) + IDENT_INCR('tablename')
FROM tablename t1
WHERE $IDENTITY BETWEEN IDENT_SEED('tablename') AND
@maxidentval AND
NOT EXISTS (SELECT * FROM tablename t2
WHERE t2.$IDENTITY = t1.$IDENTITY +
IDENT_INCR('tablename'))
ELSE
SELECT @nextidentval = IDENT_SEED('tablename');
SET IDENTITY_INSERT tablename OFF;
-- Here is an example to find gaps in the actual data.
-- The table is called img and has two columns: the first column
-- called id_num, which is an increasing identification number, and the
-- second column called company_name.
-- This is the end of the illustration example.

-- If the img table already exists, drop it.
-- Create the img table.
IF OBJECT_ID ('dbo.img', 'U') IS NOT NULL
DROP TABLE img;
GO
CREATE TABLE img (id_num int IDENTITY(1,1), company_name sysname);
INSERT img(company_name) VALUES ('New Moon Books');
INSERT img(company_name) VALUES ('Lucerne Publishing');
-- SET IDENTITY_INSERT ON and use in img table.
SET IDENTITY_INSERT img ON;

DECLARE @minidentval smallint;
DECLARE @nextidentval smallint;
SELECT @minidentval = MIN($IDENTITY) FROM img
IF @minidentval = IDENT_SEED('img')
SELECT @nextidentval = MIN($IDENTITY) + IDENT_INCR('img')
FROM img t1
WHERE $IDENTITY BETWEEN IDENT_SEED('img') AND 32766 AND
NOT EXISTS (SELECT * FROM img t2
WHERE t2.$IDENTITY = t1.$IDENTITY + IDENT_INCR('img'))
ELSE
SELECT @nextidentval = IDENT_SEED('img');
SET IDENTITY_INSERT img OFF;

See Also
ALTER TABLE (Transact-SQL)
CREATE TABLE (Transact-SQL)
DBCC CHECKIDENT (Transact-SQL)
IDENT_INCR (Transact-SQL)
@@IDENTITY (Transact-SQL)
IDENTITY (Function) (Transact-SQL)
IDENT_SEED (Transact-SQL)
SELECT (Transact-SQL)
SET IDENTITY_INSERT (Transact-SQL)
Replicate Identity Columns
CREATE TRIGGER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a DML, DDL, or logon trigger. A trigger is a special type of stored procedure that automatically executes
when an event occurs in the database server. DML triggers execute when a user tries to modify data through a
data manipulation language (DML) event. DML events are INSERT, UPDATE, or DELETE statements on a table
or view. These triggers fire when any valid event is fired, regardless of whether or not any table rows are affected.
For more information, see DML Triggers.
DDL triggers execute in response to a variety of data definition language (DDL) events. These events primarily
correspond to Transact-SQL CREATE, ALTER, and DROP statements, and certain system stored procedures that
perform DDL-like operations. Logon triggers fire in response to the LOGON event that is raised when a user's
session is being established. Triggers can be created directly from Transact-SQL statements or from methods of
assemblies that are created in the Microsoft .NET Framework common language runtime (CLR) and uploaded to
an instance of SQL Server. SQL Server allows for creating multiple triggers for any specific statement.

IMPORTANT
Malicious code inside triggers can run under escalated privileges. For more information on how to mitigate this threat, see
Manage Trigger Security.

NOTE
The integration of .NET Framework CLR into SQL Server is discussed in this topic. CLR integration does not apply to Azure
SQL Database.

Transact-SQL Syntax Conventions

Syntax
-- SQL Server Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger)

CREATE [ OR ALTER ] TRIGGER [ schema_name . ]trigger_name
ON { table | view }
[ WITH <dml_trigger_option> [ ,...n ] ]
{ FOR | AFTER | INSTEAD OF }
{ [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
[ WITH APPEND ]
[ NOT FOR REPLICATION ]
AS { sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME <method specifier [ ; ] > }

<dml_trigger_option> ::=
[ ENCRYPTION ]
[ EXECUTE AS Clause ]

<method_specifier> ::=
assembly_name.class_name.method_name
-- SQL Server Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a
-- table (DML Trigger on memory-optimized tables)

CREATE [ OR ALTER ] TRIGGER [ schema_name . ]trigger_name
ON { table }
[ WITH <dml_trigger_option> [ ,...n ] ]
{ FOR | AFTER }
{ [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
AS { sql_statement [ ; ] [ ,...n ] }

<dml_trigger_option> ::=
[ NATIVE_COMPILATION ]
[ SCHEMABINDING ]
[ EXECUTE AS Clause ]

-- Trigger on a CREATE, ALTER, DROP, GRANT, DENY,
-- REVOKE or UPDATE statement (DDL Trigger)

CREATE [ OR ALTER ] TRIGGER trigger_name
ON { ALL SERVER | DATABASE }
[ WITH <ddl_trigger_option> [ ,...n ] ]
{ FOR | AFTER } { event_type | event_group } [ ,...n ]
AS { sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME < method specifier > [ ; ] }

<ddl_trigger_option> ::=
[ ENCRYPTION ]
[ EXECUTE AS Clause ]

-- Trigger on a LOGON event (Logon Trigger)

CREATE [ OR ALTER ] TRIGGER trigger_name
ON ALL SERVER
[ WITH <logon_trigger_option> [ ,...n ] ]
{ FOR| AFTER } LOGON
AS { sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME < method specifier > [ ; ] }

<logon_trigger_option> ::=
[ ENCRYPTION ]
[ EXECUTE AS Clause ]

Syntax
-- Windows Azure SQL Database Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger)

CREATE [ OR ALTER ] TRIGGER [ schema_name . ]trigger_name
ON { table | view }
[ WITH <dml_trigger_option> [ ,...n ] ]
{ FOR | AFTER | INSTEAD OF }
{ [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
AS { sql_statement [ ; ] [ ,...n ] }

<dml_trigger_option> ::=
[ EXECUTE AS Clause ]
-- Windows Azure SQL Database Syntax
-- Trigger on a CREATE, ALTER, DROP, GRANT, DENY,
-- REVOKE, or UPDATE STATISTICS statement (DDL Trigger)

CREATE [ OR ALTER ] TRIGGER trigger_name
ON { DATABASE }
[ WITH <ddl_trigger_option> [ ,...n ] ]
{ FOR | AFTER } { event_type | event_group } [ ,...n ]
AS { sql_statement [ ; ] [ ,...n ] [ ; ] }

<ddl_trigger_option> ::=
[ EXECUTE AS Clause ]

Arguments
OR ALTER
Applies to: Azure SQL Database, SQL Server (starting with SQL Server 2016 (13.x) SP1).
Conditionally alters the trigger only if it already exists.
schema_name
Is the name of the schema to which a DML trigger belongs. DML triggers are scoped to the schema of the table
or view on which they are created. schema_name cannot be specified for DDL or logon triggers.
trigger_name
Is the name of the trigger. A trigger_name must comply with the rules for identifiers, except that trigger_name
cannot start with # or ##.
table | view
Is the table or view on which the DML trigger is executed and is sometimes referred to as the trigger table or
trigger view. Specifying the fully qualified name of the table or view is optional. A view can be referenced only by
an INSTEAD OF trigger. DML triggers cannot be defined on local or global temporary tables.
DATABASE
Applies the scope of a DDL trigger to the current database. If specified, the trigger fires whenever event_type or
event_group occurs in the current database.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
Applies the scope of a DDL or logon trigger to the current server. If specified, the trigger fires whenever
event_type or event_group occurs anywhere in the current server.
WITH ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017.
Obfuscates the text of the CREATE TRIGGER statement. Using WITH ENCRYPTION prevents the trigger from
being published as part of SQL Server replication. WITH ENCRYPTION cannot be specified for CLR triggers.
EXECUTE AS
Specifies the security context under which the trigger is executed. Enables you to control which user account the
instance of SQL Server uses to validate permissions on any database objects that are referenced by the trigger.
This option is required for triggers on memory-optimized tables.
For more information, see EXECUTE AS Clause (Transact-SQL).
NATIVE_COMPILATION
Indicates that the trigger is natively compiled.
This option is required for triggers on memory-optimized tables.
SCHEMABINDING
Ensures that tables that are referenced by a trigger cannot be dropped or altered.
This option is required for triggers on memory-optimized tables and is not supported for triggers on traditional
tables.
FOR | AFTER
AFTER specifies that the DML trigger is fired only when all operations specified in the triggering SQL statement
have executed successfully. All referential cascade actions and constraint checks also must succeed before this
trigger fires.
AFTER is the default when FOR is the only keyword specified.
AFTER triggers cannot be defined on views.
INSTEAD OF
Specifies that the DML trigger is executed instead of the triggering SQL statement, therefore, overriding the
actions of the triggering statements. INSTEAD OF cannot be specified for DDL or logon triggers.
At most, one INSTEAD OF trigger per INSERT, UPDATE, or DELETE statement can be defined on a table or
view. However, you can define views on views where each view has its own INSTEAD OF trigger.
INSTEAD OF triggers are not allowed on updatable views that use WITH CHECK OPTION. SQL Server raises
an error when an INSTEAD OF trigger is added to an updatable view WITH CHECK OPTION specified. The user
must remove that option by using ALTER VIEW before defining the INSTEAD OF trigger.
{ [ DELETE ] [ , ] [ INSERT ] [ , ] [ UPDATE ] }
Specifies the data modification statements that activate the DML trigger when it is tried against this table or view.
At least one option must be specified. Any combination of these options in any order is allowed in the trigger
definition.
For INSTEAD OF triggers, the DELETE option is not allowed on tables that have a referential relationship
specifying a cascade action ON DELETE. Similarly, the UPDATE option is not allowed on tables that have a
referential relationship specifying a cascade action ON UPDATE.
WITH APPEND
Applies to: SQL Server 2008 through SQL Server 2008 R2.
Specifies that an additional trigger of an existing type should be added. WITH APPEND cannot be used with
INSTEAD OF triggers or if AFTER trigger is explicitly stated. WITH APPEND can be used only when FOR is
specified, without INSTEAD OF or AFTER, for backward compatibility reasons. WITH APPEND cannot be
specified if EXTERNAL NAME is specified (that is, if the trigger is a CLR trigger).
event_type
Is the name of a Transact-SQL language event that, after execution, causes a DDL trigger to fire. Valid events for
DDL triggers are listed in DDL Events.
event_group
Is the name of a predefined grouping of Transact-SQL language events. The DDL trigger fires after execution of
any Transact-SQL language event that belongs to event_group. Valid event groups for DDL triggers are listed in
DDL Event Groups.
After the CREATE TRIGGER has finished running, event_group also acts as a macro by adding the event types it
covers to the sys.trigger_events catalog view.
NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates that the trigger should not be executed when a replication agent modifies the table that is involved in
the trigger.
sql_statement
Is the trigger conditions and actions. Trigger conditions specify additional criteria that determine whether the
tried DML, DDL, or logon events cause the trigger actions to be performed.
The trigger actions specified in the Transact-SQL statements go into effect when the operation is tried.
Triggers can include any number and type of Transact-SQL statements, with exceptions. For more information,
see Remarks. A trigger is designed to check or change data based on a data modification or definition statement;
it should not return data to the user. The Transact-SQL statements in a trigger frequently include control-of-flow
language.
DML triggers use the deleted and inserted logical (conceptual) tables. They are structurally similar to the table on
which the trigger is defined, that is, the table on which the user action is tried. The deleted and inserted tables
hold the old values or new values of the rows that may be changed by the user action. For example, to retrieve all
values in the deleted table, use:

SELECT * FROM deleted;

For more information, see Use the inserted and deleted Tables.
DDL and logon triggers capture information about the triggering event by using the EVENTDATA (Transact-SQL)
function. For more information, see Use the EVENTDATA Function.
SQL Server allows for the update of text, ntext, or image columns through the INSTEAD OF trigger on tables
or views.

IMPORTANT
ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data
types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max),
and varbinary(max) instead. Both AFTER and INSTEAD OF triggers support varchar(MAX), nvarchar(MAX), and
varbinary(MAX) data in the inserted and deleted tables.

For triggers on memory-optimized tables, the only sql_statement allowed at the top level is an ATOMIC block.
The T-SQL allowed inside the ATOMIC block is limited by the T-SQL allowed inside natively compiled stored procedures.
< method_specifier >
Applies to: SQL Server 2008 through SQL Server 2017.
For a CLR trigger, specifies the method of an assembly to bind with the trigger. The method must take no
arguments and return void. class_name must be a valid SQL Server identifier and must exist as a class in the
assembly with assembly visibility. If the class has a namespace-qualified name that uses '.' to separate
namespace parts, the class name must be delimited by using [ ] or " " delimiters. The class cannot be a nested
class.

NOTE
By default, the ability of SQL Server to run CLR code is off. You can create, modify, and drop database objects that reference
managed code modules, but these references will not execute in an instance of SQL Server unless the clr enabled Option is
enabled by using sp_configure.

Remarks for DML Triggers


DML triggers are frequently used for enforcing business rules and data integrity. SQL Server provides
declarative referential integrity (DRI) through the ALTER TABLE and CREATE TABLE statements. However, DRI
does not provide cross-database referential integrity. Referential integrity refers to the rules about the
relationships between the primary and foreign keys of tables. To enforce referential integrity, use the PRIMARY
KEY and FOREIGN KEY constraints in ALTER TABLE and CREATE TABLE. If constraints exist on the trigger table,
they are checked after the INSTEAD OF trigger execution and before the AFTER trigger execution. If the
constraints are violated, the INSTEAD OF trigger actions are rolled back and the AFTER trigger is not fired.
The first and last AFTER triggers to be executed on a table can be specified by using sp_settriggerorder. Only one
first and one last AFTER trigger for each INSERT, UPDATE, and DELETE operation can be specified on a table. If
there are other AFTER triggers on the same table, they are randomly executed.
If an ALTER TRIGGER statement changes a first or last trigger, the first or last attribute set on the modified
trigger is dropped, and the order value must be reset by using sp_settriggerorder.
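
A minimal sketch of setting trigger order (the trigger name is hypothetical): make a trigger the first AFTER
UPDATE trigger to run on its table.

EXEC sp_settriggerorder
@triggername = N'Sales.uValidateCustomer',
@order = N'First',
@stmttype = N'UPDATE';
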
An AFTER trigger is executed only after the triggering SQL statement has executed successfully. This successful
execution includes all referential cascade actions and constraint checks associated with the object updated or
deleted. An AFTER trigger will not recursively fire an INSTEAD OF trigger on the same table.
If an INSTEAD OF trigger defined on a table executes a statement against the table that would ordinarily fire the
INSTEAD OF trigger again, the trigger is not called recursively. Instead, the statement is processed as if the table
had no INSTEAD OF trigger and starts the chain of constraint operations and AFTER trigger executions. For
example, if a trigger is defined as an INSTEAD OF INSERT trigger for a table, and the trigger executes an
INSERT statement on the same table, the INSERT statement executed by the INSTEAD OF trigger does not call
the trigger again. The INSERT executed by the trigger starts the process of performing constraint actions and
firing any AFTER INSERT triggers defined for the table.
If an INSTEAD OF trigger defined on a view executes a statement against the view that would ordinarily fire the
INSTEAD OF trigger again, it is not called recursively. Instead, the statement is resolved as modifications against
the base tables underlying the view. In this case, the view definition must meet all the restrictions for an
updatable view. For a definition of updatable views, see Modify Data Through a View.
For example, if a trigger is defined as an INSTEAD OF UPDATE trigger for a view, and the trigger executes an
UPDATE statement referencing the same view, the UPDATE statement executed by the INSTEAD OF trigger
does not call the trigger again. The UPDATE executed by the trigger is processed against the view as if the view
did not have an INSTEAD OF trigger. The columns changed by the UPDATE must be resolved to a single base
table. Each modification to an underlying base table starts the chain of applying constraints and firing AFTER
triggers defined for the table.
Testing for UPDATE or INSERT Actions to Specific Columns
You can design a Transact-SQL trigger to perform certain actions based on UPDATE or INSERT modifications to
specific columns. Use UPDATE () or COLUMNS_UPDATED in the body of the trigger for this purpose. UPDATE ()
tests for UPDATE or INSERT attempts on one column. COLUMNS_UPDATED tests for UPDATE or INSERT
actions that are performed on multiple columns and returns a bit pattern that indicates which columns were
inserted or updated.
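
A minimal sketch of the UPDATE() function (the CreditLimit column is hypothetical): the trigger acts only when
that specific column is included in the INSERT or UPDATE.

CREATE TRIGGER Sales.uCustomerCreditLimit
ON Sales.Customer
AFTER UPDATE
AS
-- Fires the action only when CreditLimit was among the updated columns.
IF UPDATE (CreditLimit)
BEGIN
PRINT 'CreditLimit was modified.';
END;
GO
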
Trigger Limitations
CREATE TRIGGER must be the first statement in the batch and can apply to only one table.
A trigger is created only in the current database; however, a trigger can reference objects outside the current
database.
If the trigger schema name is specified to qualify the trigger, qualify the table name in the same way.
The same trigger action can be defined for more than one user action (for example, INSERT and UPDATE ) in the
same CREATE TRIGGER statement.
INSTEAD OF DELETE/UPDATE triggers cannot be defined on a table that has a foreign key with a cascade on
DELETE/UPDATE action defined.
Any SET statement can be specified inside a trigger. The SET option selected remains in effect during the
execution of the trigger and then reverts to its former setting.
When a trigger fires, results are returned to the calling application, just like with stored procedures. To prevent
having results returned to an application because of a trigger firing, do not include either SELECT statements
that return results or statements that perform variable assignment in a trigger. A trigger that includes either
SELECT statements that return results to the user or statements that perform variable assignment requires
special handling; these returned results would have to be written into every application in which modifications to
the trigger table are allowed. If variable assignment must occur in a trigger, use a SET NOCOUNT statement at
the start of the trigger to prevent the return of any result sets.
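
A minimal sketch of that advice (object names are hypothetical): starting the trigger body with SET NOCOUNT ON
suppresses the extra rows-affected messages its statements would otherwise return to the caller.

CREATE TRIGGER dbo.uTrackChanges
ON dbo.SomeTable
AFTER UPDATE
AS
SET NOCOUNT ON; -- suppress "rows affected" messages from the trigger body
UPDATE dbo.ChangeCounts
SET UpdateCount = UpdateCount + 1;
GO
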
Although a TRUNCATE TABLE statement is in effect a DELETE statement, it does not activate a trigger because
the operation does not log individual row deletions. However, only those users with permissions to execute a
TRUNCATE TABLE statement need be concerned about inadvertently circumventing a DELETE trigger this way.
The WRITETEXT statement, whether logged or unlogged, does not activate a trigger.
The following Transact-SQL statements are not allowed in a DML trigger:

ALTER DATABASE
CREATE DATABASE
DROP DATABASE
RESTORE DATABASE
RESTORE LOG
RECONFIGURE

Additionally, the following Transact-SQL statements are not allowed inside the body of a DML trigger when it is
used against the table or view that is the target of the triggering action.

CREATE INDEX (including CREATE SPATIAL INDEX and CREATE XML INDEX)
ALTER INDEX
DROP INDEX
DBCC DBREINDEX
ALTER PARTITION FUNCTION
DROP TABLE
ALTER TABLE when used to do the following:
Add, modify, or drop columns.
Switch partitions.
Add or drop PRIMARY KEY or UNIQUE constraints.

NOTE
Because SQL Server does not support user-defined triggers on system tables, we recommend that you do not create user-
defined triggers on system tables.

Optimizing DML Triggers


Triggers work in transactions (implied, or otherwise) and while they are open, they lock resources. The lock will
remain in place until the transaction is confirmed (with COMMIT) or rejected (with a ROLLBACK). The longer a
trigger runs, the higher the probability that another process will be blocked. Therefore, triggers should be written
in a way to decrease their duration whenever possible. One way to achieve this is to exit a trigger when a
DML statement changes no rows.
To exit the trigger early for a command that does not change any rows, use the ROWCOUNT_BIG() system
function.
The following T-SQL code snippet will achieve this, and should be present at the beginning of each DML trigger:

IF (ROWCOUNT_BIG() = 0)
RETURN;

Remarks for DDL Triggers


DDL triggers, like standard triggers, execute stored procedures in response to an event. But unlike standard
triggers, they do not execute in response to UPDATE, INSERT, or DELETE statements on a table or view. Instead,
they primarily execute in response to data definition language (DDL ) statements. These include CREATE, ALTER,
DROP, GRANT, DENY, REVOKE, and UPDATE STATISTICS statements. Certain system stored procedures that
perform DDL -like operations can also fire DDL triggers.

IMPORTANT
Test your DDL triggers to determine their responses to system stored procedure execution. For example, the CREATE TYPE
statement and the sp_addtype and sp_rename stored procedures will fire a DDL trigger that is created on a CREATE_TYPE
event.

For more information about DDL triggers, see DDL Triggers.


DDL triggers do not fire in response to events that affect local or global temporary tables and stored procedures.
Unlike DML triggers, DDL triggers are not scoped to schemas. Therefore, functions such as OBJECT_ID,
OBJECT_NAME, OBJECTPROPERTY, and OBJECTPROPERTYEX cannot be used for querying metadata about
DDL triggers. Use the catalog views instead. For more information, see Get Information About DDL Triggers.

NOTE
Server-scoped DDL triggers appear in the SQL Server Management Studio Object Explorer in the Triggers folder. This
folder is located under the Server Objects folder. Database-scoped DDL Triggers appear in the Database Triggers folder.
This folder is located under the Programmability folder of the corresponding database.

Logon Triggers
Logon triggers execute stored procedures in response to a LOGON event. This event is raised when a user
session is established with an instance of SQL Server. Logon triggers fire after the authentication phase of
logging in finishes, but before the user session is actually established. Therefore, all messages originating inside
the trigger that would typically reach the user, such as error messages and messages from the PRINT statement,
are diverted to the SQL Server error log. For more information, see Logon Triggers.
Logon triggers do not fire if authentication fails.
Distributed transactions are not supported in a logon trigger. Error 3969 is returned when a logon trigger
containing a distributed transaction is fired.
Disabling a Logon Trigger
A logon trigger can effectively prevent successful connections to the Database Engine for all users, including
members of the sysadmin fixed server role. When a logon trigger is preventing connections, members of the
sysadmin fixed server role can connect by using the dedicated administrator connection, or by starting the
Database Engine in minimal configuration mode (-f). For more information, see Database Engine Service
Startup Options.
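
Once connected in this way, a member of the sysadmin role might disable the offending trigger; a minimal
sketch, using the trigger name from Example F later in this topic:

-- Run from the dedicated administrator connection (DAC).
DISABLE TRIGGER connection_limit_trigger ON ALL SERVER;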

General Trigger Considerations


Returning Results
The ability to return results from triggers will be removed in a future version of SQL Server. Triggers that return
result sets may cause unexpected behavior in applications that are not designed to work with them. Avoid
returning result sets from triggers in new development work, and plan to modify applications that currently do
this. To prevent triggers from returning result sets, set the disallow results from triggers option to 1.
Logon triggers always disallow results sets to be returned and this behavior is not configurable. If a logon trigger
does generate a result set, the trigger fails to execute and the login attempt that fired the trigger is denied.
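
A minimal sketch of setting that option (disallow results from triggers is an advanced option, so advanced
options must be made visible first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'disallow results from triggers', 1;
RECONFIGURE;
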
Multiple Triggers
SQL Server allows for multiple triggers to be created for each DML, DDL, or LOGON event. For example, if
CREATE TRIGGER FOR UPDATE is executed for a table that already has an UPDATE trigger, an additional
update trigger is created. In earlier versions of SQL Server, only one trigger for each INSERT, UPDATE, or
DELETE data modification event is allowed for each table.
Recursive Triggers
SQL Server also allows for recursive invocation of triggers when the RECURSIVE_TRIGGERS setting is enabled
using ALTER DATABASE.
Recursive triggers enable the following types of recursion to occur:
Indirect recursion
With indirect recursion, an application updates table T1. This fires trigger TR1, updating table T2. In this
scenario, a trigger on table T2 then fires and updates table T1.
Direct recursion
With direct recursion, the application updates table T1. This fires trigger TR1, updating table T1. Because
table T1 was updated, trigger TR1 fires again, and so on.
The following example uses both indirect and direct trigger recursion. Assume that two update triggers,
TR1 and TR2, are defined on table T1. Trigger TR1 updates table T1 recursively. An UPDATE statement
executes each TR1 and TR2 one time. Additionally, the execution of TR1 triggers the execution of TR1
(recursively) and TR2. The inserted and deleted tables for a specific trigger contain rows that correspond
only to the UPDATE statement that invoked the trigger.

NOTE
The previous behavior occurs only if the RECURSIVE_TRIGGERS setting is enabled by using ALTER DATABASE. There is no
defined order in which multiple triggers defined for a specific event are executed. Each trigger should be self-contained.

Disabling the RECURSIVE_TRIGGERS setting only prevents direct recursions. To disable indirect recursion also,
set the nested triggers server option to 0 by using sp_configure.
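
A minimal sketch of both settings (the database name is illustrative):

-- Prevent direct trigger recursion in a database.
ALTER DATABASE AdventureWorks2012 SET RECURSIVE_TRIGGERS OFF;

-- Prevent indirect recursion by disabling trigger nesting server-wide.
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
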
If any one of the triggers performs a ROLLBACK TRANSACTION, regardless of the nesting level, no more
triggers are executed.
Nested Triggers
Triggers can be nested to a maximum of 32 levels. If a trigger changes a table on which there is another trigger,
the second trigger is activated and can then call a third trigger, and so on. If any trigger in the chain sets off an
infinite loop, the nesting level is exceeded and the trigger is canceled. When a Transact-SQL trigger executes
managed code by referencing a CLR routine, type, or aggregate, this reference counts as one level against the
32-level nesting limit. Methods invoked from within managed code do not count against this limit.
To disable nested triggers, set the nested triggers option of sp_configure to 0 (off). The default configuration
allows for nested triggers. If nested triggers are off, recursive triggers are also disabled, regardless of the
RECURSIVE_TRIGGERS setting set by using ALTER DATABASE.
The first AFTER trigger nested inside an INSTEAD OF trigger fires even if the nested triggers server
configuration option is set to 0. However, under this setting, later AFTER triggers do not fire. We recommend
that you review your applications for nested triggers to determine whether the applications comply with your
business rules with regard to this behavior when the nested triggers server configuration option is set to 0, and
then make appropriate modifications.
Deferred Name Resolution
SQL Server allows for Transact-SQL stored procedures, triggers, and batches to refer to tables that do not exist
at compile time. This ability is called deferred name resolution.
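
A minimal sketch of deferred name resolution (all object names are hypothetical): the trigger is created even if
dbo.AuditLog does not yet exist, because the reference is resolved when the trigger fires. The trigger table itself,
however, must already exist.

CREATE TRIGGER dbo.uAuditSomeTable
ON dbo.SomeTable
AFTER UPDATE
AS
-- dbo.AuditLog is resolved at run time, not at CREATE TRIGGER time.
INSERT dbo.AuditLog (ChangedAt)
VALUES (SYSDATETIME());
GO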

Permissions
To create a DML trigger requires ALTER permission on the table or view on which the trigger is being created.
To create a DDL trigger with server scope (ON ALL SERVER ) or a logon trigger requires CONTROL SERVER
permission on the server. To create a DDL trigger with database scope (ON DATABASE ) requires ALTER ANY
DATABASE DDL TRIGGER permission in the current database.

Examples
A. Using a DML trigger with a reminder message
The following DML trigger prints a message to the client when anyone tries to add or change data in the
Customer table in the AdventureWorks2012 database.

CREATE TRIGGER reminder1
ON Sales.Customer
AFTER INSERT, UPDATE
AS RAISERROR ('Notify Customer Relations', 16, 10);
GO

B. Using a DML trigger with a reminder e-mail message
The following example sends an e-mail message to a specified person (MaryM) when the Customer table
changes.

CREATE TRIGGER reminder2
ON Sales.Customer
AFTER INSERT, UPDATE, DELETE
AS
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'AdventureWorks2012 Administrator',
@recipients = 'danw@Adventure-Works.com',
@body = 'Don''t forget to print a report for the sales force.',
@subject = 'Reminder';
GO
C. Using a DML AFTER trigger to enforce a business rule between the PurchaseOrderHeader and Vendor
tables
Because CHECK constraints can reference only the columns on which the column-level or table-level constraint
is defined, any cross-table constraints (in this case, business rules) must be defined as triggers.
The following example creates a DML trigger in the AdventureWorks2012 database. This trigger checks to make
sure the credit rating for the vendor is good (not 5) when an attempt is made to insert a new purchase order into
the PurchaseOrderHeader table. To obtain the credit rating of the vendor, the Vendor table must be referenced. If
the credit rating is too low, a message is displayed and the insertion does not execute.

-- This trigger prevents a row from being inserted in the Purchasing.PurchaseOrderHeader
-- table when the credit rating of the specified vendor is set to 5 (below average).

CREATE TRIGGER Purchasing.LowCredit ON Purchasing.PurchaseOrderHeader
AFTER INSERT
AS
IF (ROWCOUNT_BIG() = 0)
RETURN;
IF EXISTS (SELECT *
FROM Purchasing.PurchaseOrderHeader AS p
JOIN inserted AS i
ON p.PurchaseOrderID = i.PurchaseOrderID
JOIN Purchasing.Vendor AS v
ON v.BusinessEntityID = p.VendorID
WHERE v.CreditRating = 5
)
BEGIN
RAISERROR ('A vendor''s credit rating is too low to accept new
purchase orders.', 16, 1);
ROLLBACK TRANSACTION;
RETURN
END;
GO

-- This statement attempts to insert a row into the PurchaseOrderHeader table
-- for a vendor that has a below average credit rating.
-- The AFTER INSERT trigger is fired and the INSERT transaction is rolled back.

INSERT INTO Purchasing.PurchaseOrderHeader (RevisionNumber, Status, EmployeeID,
VendorID, ShipMethodID, OrderDate, ShipDate, SubTotal, TaxAmt, Freight)
VALUES (
2
,3
,261
,1652
,4
,GETDATE()
,GETDATE()
,44594.55
,3567.564
,1114.8638 );
GO

D. Using a database-scoped DDL trigger
The following example uses a DDL trigger to prevent any synonym in a database from being dropped.
CREATE TRIGGER safety
ON DATABASE
FOR DROP_SYNONYM
AS
IF (@@ROWCOUNT = 0)
RETURN;
RAISERROR ('You must disable Trigger "safety" to drop synonyms!',10, 1)
ROLLBACK
GO
DROP TRIGGER safety
ON DATABASE;
GO

E. Using a server-scoped DDL trigger
The following example uses a DDL trigger to print a message if any CREATE DATABASE event occurs on the
current server instance, and uses the EVENTDATA function to retrieve the text of the corresponding Transact-SQL
statement. For more examples that use EVENTDATA in DDL triggers, see Use the EVENTDATA Function.
Applies to: SQL Server 2008 through SQL Server 2017.

CREATE TRIGGER ddl_trig_database
ON ALL SERVER
FOR CREATE_DATABASE
AS
PRINT 'Database Created.'
SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)')
GO
DROP TRIGGER ddl_trig_database
ON ALL SERVER;
GO

F. Using a logon trigger
The following logon trigger example denies an attempt to log in to SQL Server as a member of the login_test
login if there are already three user sessions running under that login.
Applies to: SQL Server 2008 through SQL Server 2017.

USE master;
GO
CREATE LOGIN login_test WITH PASSWORD = '3KHJ6dhx(0xVYsdf' MUST_CHANGE,
CHECK_EXPIRATION = ON;
GO
GRANT VIEW SERVER STATE TO login_test;
GO
CREATE TRIGGER connection_limit_trigger
ON ALL SERVER WITH EXECUTE AS 'login_test'
FOR LOGON
AS
BEGIN
IF ORIGINAL_LOGIN()= 'login_test' AND
(SELECT COUNT(*) FROM sys.dm_exec_sessions
WHERE is_user_process = 1 AND
original_login_name = 'login_test') > 3
ROLLBACK;
END;

G. Viewing the events that cause a trigger to fire
The following example queries the sys.triggers and sys.trigger_events catalog views to determine which
Transact-SQL language events cause trigger safety to fire. safety is created in the previous example.
SELECT TE.*
FROM sys.trigger_events AS TE
JOIN sys.triggers AS T ON T.object_id = TE.object_id
WHERE T.parent_class = 0 AND T.name = 'safety';
GO

See Also
ALTER TABLE (Transact-SQL)
ALTER TRIGGER (Transact-SQL)
COLUMNS_UPDATED (Transact-SQL)
CREATE TABLE (Transact-SQL)
DROP TRIGGER (Transact-SQL)
ENABLE TRIGGER (Transact-SQL)
DISABLE TRIGGER (Transact-SQL)
TRIGGER_NESTLEVEL (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.dm_sql_referenced_entities (Transact-SQL)
sys.dm_sql_referencing_entities (Transact-SQL)
sys.sql_expression_dependencies (Transact-SQL)
sp_help (Transact-SQL)
sp_helptrigger (Transact-SQL)
sp_helptext (Transact-SQL)
sp_rename (Transact-SQL)
sp_settriggerorder (Transact-SQL)
UPDATE() (Transact-SQL)
Get Information About DML Triggers
Get Information About DDL Triggers
sys.triggers (Transact-SQL)
sys.trigger_events (Transact-SQL)
sys.sql_modules (Transact-SQL)
sys.assembly_modules (Transact-SQL)
sys.server_triggers (Transact-SQL)
sys.server_trigger_events (Transact-SQL)
sys.server_sql_modules (Transact-SQL)
sys.server_assembly_modules (Transact-SQL)
CREATE TYPE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an alias data type or a user-defined type in the current database in SQL Server or Azure SQL Database.
The implementation of an alias data type is based on a SQL Server native system type. A user-defined type is
implemented through a class of an assembly in the Microsoft .NET Framework common language runtime (CLR).
To bind a user-defined type to its implementation, the CLR assembly that contains the implementation of the type
must first be registered in SQL Server by using CREATE ASSEMBLY.
The ability to run CLR code is off by default in SQL Server. You can create, modify and drop database objects that
reference managed code modules, but these references will not execute in SQL Server unless the clr enabled
Option is enabled by using sp_configure.

NOTE
The integration of .NET Framework CLR into SQL Server is discussed in this topic. CLR integration does not apply to Azure
SQL Database.

Transact-SQL Syntax Conventions

Syntax
-- Disk-Based Type Syntax
CREATE TYPE [ schema_name. ] type_name
{
FROM base_type
[ ( precision [ , scale ] ) ]
[ NULL | NOT NULL ]
| EXTERNAL NAME assembly_name [ .class_name ]
| AS TABLE ( { <column_definition> | <computed_column_definition> }
[ <table_constraint> ] [ ,...n ] )
} [ ; ]

<column_definition> ::=
column_name <data_type>
[ COLLATE collation_name ]
[ NULL | NOT NULL ]
[
DEFAULT constant_expression ]
| [ IDENTITY [ ( seed ,increment ) ]
]
[ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]

<data type> ::=
[ type_schema_name . ] type_name
[ ( precision [ , scale ] | max |
[ { CONTENT | DOCUMENT } ] xml_schema_collection ) ]

<column_constraint> ::=
{ { PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH ( <index_option> [ ,...n ] )
]
| CHECK ( logical_expression )
}

<computed_column_definition> ::=

column_name AS computed_column_expression
[ PERSISTED [ NOT NULL ] ]
[
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH ( <index_option> [ ,...n ] )
]
| CHECK ( logical_expression )
]

<table_constraint> ::=
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
( column [ ASC | DESC ] [ ,...n ] )
[
WITH ( <index_option> [ ,...n ] )
]
| CHECK ( logical_expression )
}

<index_option> ::=
{
IGNORE_DUP_KEY = { ON | OFF }
}
-- Memory-Optimized Table Type Syntax
CREATE TYPE [schema_name. ] type_name
AS TABLE ( { <column_definition> }
| [ <table_constraint> ] [ ,... n ]
| [ <table_index> ] [ ,... n ] } )
[ WITH ( <table_option> [ ,... n ] ) ]
[ ; ]

<column_definition> ::=
column_name <data_type>
[ COLLATE collation_name ] [ NULL | NOT NULL ] [
[ IDENTITY [ (1 , 1) ]
]
[ <column_constraint> [ ... n ] ] [ <column_index> ]

<data type> ::=
[type_schema_name . ] type_name [ (precision [ , scale ]) ]

<column_constraint> ::=
{ PRIMARY KEY { NONCLUSTERED HASH WITH (BUCKET_COUNT = bucket_count)
| NONCLUSTERED } }

< table_constraint > ::=
{ PRIMARY KEY { NONCLUSTERED HASH (column [ ,... n ] )
WITH (BUCKET_COUNT = bucket_count)
| NONCLUSTERED (column [ ASC | DESC ] [ ,... n ] ) } }

<column_index> ::=
INDEX index_name
{ { [ NONCLUSTERED ] HASH WITH (BUCKET_COUNT = bucket_count)
| NONCLUSTERED } }

< table_index > ::=
INDEX constraint_name
{ { [ NONCLUSTERED ] HASH (column [ ,... n ] ) WITH (BUCKET_COUNT = bucket_count)
| [NONCLUSTERED] (column [ ASC | DESC ] [ ,... n ] )} }

<table_option> ::=
{
[MEMORY_OPTIMIZED = {ON | OFF}]
}

Arguments
schema_name
Is the name of the schema to which the alias data type or user-defined type belongs.
type_name
Is the name of the alias data type or user-defined type. Type names must comply with the rules for identifiers.
base_type
Is the SQL Server supplied data type on which the alias data type is based. base_type is sysname, with no default,
and can be one of the following values:

bigint, binary( n ), bit, char( n ), date, datetime, datetime2, datetimeoffset,
decimal, float, image, int, money, nchar( n ), ntext, numeric,
nvarchar( n | max), real, smalldatetime, smallint, smallmoney, sql_variant, text, time,
tinyint, uniqueidentifier, varbinary( n | max), varchar( n | max)

base_type can also be any data type synonym that maps to one of these system data types.
precision
For decimal or numeric, is a non-negative integer that indicates the maximum total number of decimal digits
that can be stored, both to the left and to the right of the decimal point. For more information, see decimal and
numeric (Transact-SQL ).
scale
For decimal or numeric, is a non-negative integer that indicates the maximum number of decimal digits that can
be stored to the right of the decimal point, and it must be less than or equal to the precision. For more
information, see decimal and numeric (Transact-SQL ).
NULL | NOT NULL
Specifies whether the type can hold a null value. If not specified, NULL is the default.
assembly_name
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the SQL Server assembly that references the implementation of the user-defined type in the common
language runtime. assembly_name should match an existing assembly in SQL Server in the current database.

NOTE
EXTERNAL_NAME is not available in a contained database.

[. class_name ]
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the class within the assembly that implements the user-defined type. class_name must be a valid
identifier and must exist as a class in the assembly with assembly visibility. class_name is case-sensitive, regardless
of the database collation, and must exactly match the class name in the corresponding assembly. The class name
can be a namespace-qualified name enclosed in square brackets ([ ]) if the programming language that is used to
write the class uses the concept of namespaces, such as C#. If class_name is not specified, SQL Server assumes it
is the same as type_name.
<column_definition>
Defines the columns for a user-defined table type.
<data type>
Defines the data type in a column for a user-defined table type. For more information about data types, see Data
Types (Transact-SQL ). For more information about tables, see CREATE TABLE (Transact-SQL ).
<column_constraint>
Defines the column constraints for a user-defined table type. Supported constraints include PRIMARY KEY,
UNIQUE, and CHECK. For more information about tables, see CREATE TABLE (Transact-SQL ).
<computed_column_definition>
Defines a computed column expression as a column in a user-defined table type. For more information about
tables, see CREATE TABLE (Transact-SQL ).
<table_constraint>
Defines a table constraint on a user-defined table type. Supported constraints include PRIMARY KEY, UNIQUE,
and CHECK.
<index_option>
Specifies the error response to duplicate key values in a multiple-row insert operation on a unique clustered or
unique nonclustered index. For more information about index options, see CREATE INDEX (Transact-SQL ).
INDEX
You must specify column and table indexes as part of the CREATE TABLE statement. CREATE INDEX and DROP
INDEX are not supported for memory-optimized tables.
MEMORY_OPTIMIZED
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates whether the table type is memory optimized. This option is off by default; the table (type) is not a
memory optimized table (type). Memory optimized table types are memory-optimized user tables, the schema of
which is persisted on disk similar to other user tables.
BUCKET_COUNT
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates the number of buckets that should be created in the hash index. The maximum value for
BUCKET_COUNT in hash indexes is 1,073,741,824. For more information about bucket counts, see Indexes for
Memory-Optimized Tables. bucket_count is a required argument.
HASH
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates that a HASH index is created. Hash indexes are supported only on memory optimized tables.

Remarks
The class of the assembly that is referenced in assembly_name, together with its methods, should satisfy all the
requirements for implementing a user-defined type in SQL Server. For more information about these
requirements, see CLR User-Defined Types.
Additional considerations include the following:
The class can have overloaded methods, but these methods can be called only from within managed code,
not from Transact-SQL.
Any static members must be declared as const or readonly if the assembly's permission set is SAFE or
EXTERNAL_ACCESS.
Within a database, there can be only one user-defined type registered against any specified type that has
been uploaded in SQL Server from the CLR. If a user-defined type is created on a CLR type for which a
user-defined type already exists in the database, CREATE TYPE fails with an error. This restriction is
required to avoid ambiguity during SQL Type resolution if a CLR type can be mapped to more than one
user-defined type.
If any mutator method in the type does not return void, the CREATE TYPE statement does not execute.
To modify a user-defined type, you must drop the type by using a DROP TYPE statement and then re-
create it.
Unlike user-defined types that are created by using sp_addtype, the public database role is not
automatically granted REFERENCES permission on types that are created by using CREATE TYPE. This
permission must be granted separately.
In user-defined table types, structured user-defined types that are used in column_name <data type> are
part of the database schema scope in which the table type is defined. To access structured user-defined
types in a different scope within the database, use two-part names.
In user-defined table types, the primary key on computed columns must be PERSISTED and NOT NULL.

Memory-Optimized Table Types


Beginning in SQL Server 2014 (12.x), processing data in a table type can be done in primary memory, and not on
disk. For more information, see In-Memory OLTP (In-Memory Optimization). For code samples showing how to
create memory-optimized table types, see Creating a Memory-Optimized Table and a Natively Compiled Stored
Procedure.

Permissions
Requires CREATE TYPE permission in the current database and ALTER permission on schema_name. If
schema_name is not specified, the default name resolution rules for determining the schema for the current user
apply. If assembly_name is specified, a user must either own the assembly or have REFERENCES permission on it.
If any columns in the CREATE TABLE statement are defined to be of a user-defined type, REFERENCES
permission on the user-defined type is required.

NOTE
A user creating a table with a column that uses a user-defined type needs the REFERENCES permission on the user-defined
type. If this table must be created in TempDB, then either the REFERENCES permission needs to be granted explicitly each
time before the table is created, or this data type and REFERENCES permissions need to be added to the Model database. If
this is done, then this data type and permissions will be available in TempDB permanently. Otherwise, the user-defined data
type and permissions will disappear when SQL Server is restarted. For more information, see CREATE TABLE.

Examples
A. Creating an alias type based on the varchar data type
The following example creates an alias type based on the system-supplied varchar data type.

CREATE TYPE SSN
FROM varchar(11) NOT NULL ;

B. Creating a user-defined type


The following example creates a type Utf8String that references class utf8string in the assembly utf8string .
Before creating the type, assembly utf8string is registered in the local database. Replace the binary portion of
the CREATE ASSEMBLY statement with a valid description.
Applies to: SQL Server 2008 through SQL Server 2017.
CREATE ASSEMBLY utf8string
AUTHORIZATION [dbi]
FROM 0x4D... ;
GO
CREATE TYPE Utf8String
EXTERNAL NAME utf8string.[Microsoft.Samples.SqlServer.utf8string] ;
GO

C. Creating a user-defined table type


The following example creates a user-defined table type that has two columns. For more information about how
to create and use table-valued parameters, see Use Table-Valued Parameters (Database Engine).

CREATE TYPE LocationTableType AS TABLE
( LocationName VARCHAR(50)
, CostRate INT );
GO
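
D. Creating a memory-optimized table type

The following is a minimal sketch of the MEMORY_OPTIMIZED, HASH, and BUCKET_COUNT arguments described earlier; the type, column, and index names are hypothetical, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup.

-- Hypothetical names; requires a database with a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TYPE dbo.SampleIdList AS TABLE
( SampleID INT NOT NULL
  INDEX ix_SampleID NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024) )
WITH (MEMORY_OPTIMIZED = ON);
GO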

See Also
CREATE ASSEMBLY (Transact-SQL)
DROP TYPE (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE USER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds a user to the current database. The eleven types of users are listed below with a sample of the most basic
syntax:
Users based on logins in master. This is the most common type of user.
User based on a login based on a Windows Active Directory account. CREATE USER [Contoso\Fritz];
User based on a login based on a Windows group. CREATE USER [Contoso\Sales];
User based on a login using SQL Server authentication. CREATE USER Mary;
Users that authenticate at the database. Recommended to help make your database more portable.
Always allowed in SQL Database. Only allowed in a contained database in SQL Server.
User based on a Windows user that has no login. CREATE USER [Contoso\Fritz];
User based on a Windows group that has no login. CREATE USER [Contoso\Sales];
User in SQL Database or SQL Data Warehouse based on an Azure Active Directory user.
CREATE USER [Fritz@contoso.com] FROM EXTERNAL PROVIDER;

Contained database user with password. (Not available in SQL Data Warehouse.)
CREATE USER Mary WITH PASSWORD = '********';

Users based on Windows principals that connect through Windows group logins
User based on a Windows user that has no login, but can connect to the Database Engine through
membership in a Windows group. CREATE USER [Contoso\Fritz];
User based on a Windows group that has no login, but can connect to the Database Engine through
membership in a different Windows group. CREATE USER [Contoso\Fritz];
Users that cannot authenticate. These users cannot log in to SQL Server or SQL Database.
User without a login. Cannot log in but can be granted permissions. CREATE USER CustomApp WITHOUT LOGIN;
User based on a certificate. Cannot log in but can be granted permissions and can sign modules.
CREATE USER TestProcess FOR CERTIFICATE CarnationProduction50;
User based on an asymmetric key. Cannot log in but can be granted permissions and can sign modules.
CREATE USER TestProcess FROM ASYMMETRIC KEY PacificSales09;

Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

-- Syntax Users based on logins in master


CREATE USER user_name
[
{ FOR | FROM } LOGIN login_name
]
[ WITH <limited_options_list> [ ,... ] ]
[ ; ]

--Users that authenticate at the database


CREATE USER
{
windows_principal [ WITH <options_list> [ ,... ] ]
| user_name WITH PASSWORD = 'password' [ , <options_list> [ ,... ] ]
| Azure_Active_Directory_principal FROM EXTERNAL PROVIDER
}
[ ; ]

--Users based on Windows principals that connect through Windows group logins
CREATE USER
{
windows_principal [ { FOR | FROM } LOGIN windows_principal ]
| user_name { FOR | FROM } LOGIN windows_principal
}
[ WITH <limited_options_list> [ ,... ] ]
[ ; ]

--Users that cannot authenticate


CREATE USER user_name
{
WITHOUT LOGIN [ WITH <limited_options_list> [ ,... ] ]
| { FOR | FROM } CERTIFICATE cert_name
| { FOR | FROM } ASYMMETRIC KEY asym_key_name
}
[ ; ]

<options_list> ::=
DEFAULT_SCHEMA = schema_name
| DEFAULT_LANGUAGE = { NONE | lcid | language name | language alias }
| SID = sid
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]

<limited_options_list> ::=
DEFAULT_SCHEMA = schema_name
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]

-- SQL Database syntax when connected to a federation member


CREATE USER user_name
[;]

-- Syntax for Azure SQL Data Warehouse

CREATE USER user_name
[ { { FOR | FROM } LOGIN login_name | WITHOUT LOGIN } ]
[ WITH DEFAULT_SCHEMA = schema_name ]
[;]

CREATE USER Azure_Active_Directory_principal FROM EXTERNAL PROVIDER
[ WITH DEFAULT_SCHEMA = schema_name ]
[;]
-- Syntax for Parallel Data Warehouse

CREATE USER user_name
[ { { FOR | FROM } LOGIN login_name | WITHOUT LOGIN } ]
[ WITH DEFAULT_SCHEMA = schema_name ]
[;]

Arguments
user_name
Specifies the name by which the user is identified inside this database. user_name is a sysname. It can be up to
128 characters long. When creating a user based on a Windows principal, the Windows principal name becomes
the user name unless another user name is specified.
LOGIN login_name
Specifies the login for which the database user is being created. login_name must be a valid login in the server.
Can be a login based on a Windows principal (user or group), or a login using SQL Server authentication. When
this SQL Server login enters the database, it acquires the name and ID of the database user that is being created.
When creating a login mapped from a Windows principal, use the format [<domainName>\<loginName>]. For
examples, see Syntax Summary.
If the CREATE USER statement is the only statement in a SQL batch, Azure SQL Database supports
the WITH LOGIN clause. If the CREATE USER statement is not the only statement in a SQL batch or is executed
in dynamic SQL, the WITH LOGIN clause is not supported.
WITH DEFAULT_SCHEMA = schema_name
Specifies the first schema that will be searched by the server when it resolves the names of objects for this
database user.
'windows_principal'
Specifies the Windows principal for which the database user is being created. The windows_principal can be a
Windows user, or a Windows group. The user will be created even if the windows_principal does not have a login.
When connecting to SQL Server, if the windows_principal does not have a login, the Windows principal must
authenticate at the Database Engine through membership in a Windows group that has a login, or the
connection string must specify the contained database as the initial catalog. When creating a user from a
Windows principal, use the format [<domainName>\<loginName>]. For examples, see Syntax Summary.
Users based on Active Directory users are limited to names of fewer than 21 characters.
'Azure_Active_Directory_principal'
Applies to: SQL Database, SQL Data Warehouse.
Specifies the Azure Active Directory principal for which the database user is being created. The
Azure_Active_Directory_principal can be an Azure Active Directory user, or an Azure Active Directory group.
(Azure Active Directory users cannot have Windows Authentication logins in SQL Database; only database
users.) The connection string must specify the contained database as the initial catalog.
For users, you use the full alias of their domain principal.
CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER;

CREATE USER [alice@fabrikam.onmicrosoft.com] FROM EXTERNAL PROVIDER;


For security groups, you use the Display Name of the security group. For the Nurses security group, you
would use:
CREATE USER [Nurses] FROM EXTERNAL PROVIDER;

For more information, see Connecting to SQL Database By Using Azure Active Directory Authentication.
WITH PASSWORD = 'password'
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Can only be used in a contained database. Specifies the password for the user that is being created. Beginning
with SQL Server 2012 (11.x), stored password information is calculated using SHA-512 of the salted password.
WITHOUT LOGIN
Specifies that the user should not be mapped to an existing login.
CERTIFICATE cert_name
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies the certificate for which the database user is being created.
ASYMMETRIC KEY asym_key_name
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies the asymmetric key for which the database user is being created.
DEFAULT_LANGUAGE = { NONE | <lcid> | <language name> | <language alias> }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Specifies the default language for the new user. If a default language is specified for the user and the default
language of the database is later changed, the user's default language remains as specified. If no default language
is specified, the default language for the user will be the default language of the database. If the default language
for the user is not specified and the default language of the database is later changed, the default language of the
user will change to the new default language for the database.

IMPORTANT
DEFAULT_LANGUAGE is used only for a contained database user.

SID = sid
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Applies only to users with passwords ( SQL Server authentication) in a contained database. Specifies the SID of
the new database user. If this option is not selected, SQL Server automatically assigns a SID. Use the SID
parameter to create users in multiple databases that have the same identity (SID). This is useful when creating
users in multiple databases to prepare for Always On failover. To determine the SID of a user, query
sys.database_principals.
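For example, a minimal lookup of an existing user's SID (the user name is illustrative):

SELECT name, sid
FROM sys.database_principals
WHERE name = 'Mary';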
ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.
Suppresses cryptographic metadata checks on the server in bulk copy operations. This enables the user to bulk
copy encrypted data between tables or databases, without decrypting the data. The default is OFF.
WARNING
Improper use of this option can lead to data corruption. For more information, see Migrate Sensitive Data Protected by
Always Encrypted.

Remarks
If FOR LOGIN is omitted, the new database user will be mapped to the SQL Server login with the same name.
The default schema will be the first schema that will be searched by the server when it resolves the names of
objects for this database user. Unless otherwise specified, the default schema will be the owner of objects created
by this database user.
If the user has a default schema, that default schema will be used. If the user does not have a default schema, but the
user is a member of a group that has a default schema, the default schema of the group will be used. If the user
does not have a default schema, and is a member of more than one group, the default schema for the user will be
that of the Windows group with the lowest principal_id and an explicitly set default schema. (It is not possible to
explicitly select one of the available default schemas as the preferred schema.) If no default schema can be
determined for a user, the dbo schema will be used.
DEFAULT_SCHEMA can be set before the schema that it points to is created.
DEFAULT_SCHEMA cannot be specified when you are creating a user mapped to a certificate, or an asymmetric
key.
The value of DEFAULT_SCHEMA is ignored if the user is a member of the sysadmin fixed server role. All
members of the sysadmin fixed server role have a default schema of dbo .
The WITHOUT LOGIN clause creates a user that is not mapped to a SQL Server login. It can connect to other
databases as guest. Permissions can be assigned to this user without login and when the security context is
changed to a user without login, the original user receives the permissions of the user without login. See
example D. Creating and using a user without a login.
Only users that are mapped to Windows principals can contain the backslash character (\).
CREATE USER cannot be used to create a guest user because the guest user already exists inside every database.
You can enable the guest user by granting it CONNECT permission, as shown:

GRANT CONNECT TO guest;
GO

Information about database users is visible in the sys.database_principals catalog view.

Syntax Summary
Users based on logins in master
The following list shows possible syntax for users based on logins. The default schema options are not listed.
CREATE USER [Domain1\WindowsUserBarry]
CREATE USER [Domain1\WindowsUserBarry] FOR LOGIN Domain1\WindowsUserBarry
CREATE USER [Domain1\WindowsUserBarry] FROM LOGIN Domain1\WindowsUserBarry
CREATE USER [Domain1\WindowsGroupManagers]
CREATE USER [Domain1\WindowsGroupManagers] FOR LOGIN [Domain1\WindowsGroupManagers]
CREATE USER [Domain1\WindowsGroupManagers] FROM LOGIN [Domain1\WindowsGroupManagers]
CREATE USER SQLAUTHLOGIN
CREATE USER SQLAUTHLOGIN FOR LOGIN SQLAUTHLOGIN
CREATE USER SQLAUTHLOGIN FROM LOGIN SQLAUTHLOGIN

Users that authenticate at the database


The following list shows possible syntax for users that can only be used in a contained database. The users
created will not be related to any logins in the master database. The default schema and language options are
not listed.

IMPORTANT
This syntax grants users access to the database and also grants new access to the Database Engine.

CREATE USER [Domain1\WindowsUserBarry]
CREATE USER [Domain1\WindowsGroupManagers]
CREATE USER Barry WITH PASSWORD = 'sdjklalie8rew8337!$d'

Users based on Windows principals without logins in master


The following list shows possible syntax for users that have access to the Database Engine through a Windows
group but do not have a login in master. This syntax can be used in all types of databases. The default schema
and language options are not listed.
This syntax is similar to users based on logins in master, but this category of user does not have a login in master.
The user must have access to the Database Engine through a Windows group login.
This syntax is similar to contained database users based on Windows principals, but this category of user does
not get new access to the Database Engine.
CREATE USER [Domain1\WindowsUserBarry]
CREATE USER [Domain1\WindowsUserBarry] FOR LOGIN Domain1\WindowsUserBarry
CREATE USER [Domain1\WindowsUserBarry] FROM LOGIN Domain1\WindowsUserBarry
CREATE USER [Domain1\WindowsGroupManagers]
CREATE USER [Domain1\WindowsGroupManagers] FOR LOGIN [Domain1\WindowsGroupManagers]
CREATE USER [Domain1\WindowsGroupManagers] FROM LOGIN [Domain1\WindowsGroupManagers]

Users that cannot authenticate


The following list shows possible syntax for users that cannot log in to SQL Server.
CREATE USER RIGHTSHOLDER WITHOUT LOGIN
CREATE USER CERTUSER FOR CERTIFICATE SpecialCert
CREATE USER CERTUSER FROM CERTIFICATE SpecialCert
CREATE USER KEYUSER FOR ASYMMETRIC KEY SecureKey
CREATE USER KEYUSER FROM ASYMMETRIC KEY SecureKey

Security
Creating a user grants access to a database but does not automatically grant any access to the objects in a
database. After creating a user, common actions are to add users to database roles that have permission to
access database objects, or grant object permissions to the user. For information about designing a permissions
system, see Getting Started with Database Engine Permissions.
Special Considerations for Contained Databases
When connecting to a contained database, if the user does not have a login in the master database, the
connection string must include the contained database name as the initial catalog. The initial catalog parameter is
always required for a contained database user with password.
In a contained database, creating users helps separate the database from the instance of the Database Engine so
that the database can easily be moved to another instance of SQL Server. For more information, see Contained
Databases and Contained Database Users - Making Your Database Portable. To change a database user from a
user based on a SQL Server authentication login to a contained database user with password, see
sp_migrate_user_to_contained (Transact-SQL).
In a contained database, users do not have to have logins in the master database. Database Engine
administrators should understand that access to a contained database can be granted at the database level,
instead of the Database Engine level. For more information, see Security Best Practices with Contained
Databases.
When using contained database users on Azure SQL Database, configure access using a database-level firewall
rule, instead of a server-level firewall rule. For more information, see sp_set_database_firewall_rule (Azure SQL
Database).
For SQL Database and SQL Data Warehouse contained database users, SSMS can support Multi-Factor
Authentication. For more information, see SSMS support for Azure AD MFA with SQL Database and SQL Data
Warehouse.
Permissions
Requires ALTER ANY USER permission on the database.

Examples
A. Creating a database user based on a SQL Server login
The following example first creates a SQL Server login named AbolrousHazem , and then creates a corresponding
database user AbolrousHazem in AdventureWorks2012 .

CREATE LOGIN AbolrousHazem
WITH PASSWORD = '340$Uuxwp7Mcxo7Khy';

Change to a user database. For example, in SQL Server use the USE AdventureWorks2012 statement. In Azure SQL
Data Warehouse and Parallel Data Warehouse, you must make a new connection to the user database.

CREATE USER AbolrousHazem FOR LOGIN AbolrousHazem;
GO

B. Creating a database user with a default schema


The following example first creates a server login named WanidaBenshoof with a password, and then creates a
corresponding database user Wanida , with the default schema Marketing .

CREATE LOGIN WanidaBenshoof
WITH PASSWORD = '8fdKJl3$nlNv3049jsKK';
USE AdventureWorks2012;
CREATE USER Wanida FOR LOGIN WanidaBenshoof
WITH DEFAULT_SCHEMA = Marketing;
GO

C. Creating a database user from a certificate


The following example creates a database user JinghaoLiu from certificate CarnationProduction50 .
Applies to: SQL Server 2008 through SQL Server 2017.

USE AdventureWorks2012;
CREATE CERTIFICATE CarnationProduction50
WITH SUBJECT = 'Carnation Production Facility Supervisors',
EXPIRY_DATE = '11/11/2011';
GO
CREATE USER JinghaoLiu FOR CERTIFICATE CarnationProduction50;
GO

D. Creating and using a user without a login


The following example creates a database user CustomApp that does not map to a SQL Server login. The example
then grants a user adventure-works\tengiz0 permission to impersonate the CustomApp user.

USE AdventureWorks2012 ;
CREATE USER CustomApp WITHOUT LOGIN ;
GRANT IMPERSONATE ON USER::CustomApp TO [adventure-works\tengiz0] ;
GO

To use the CustomApp credentials, the user adventure-works\tengiz0 executes the following statement.

EXECUTE AS USER = 'CustomApp' ;
GO

To revert back to the adventure-works\tengiz0 credentials, the user executes the following statement.

REVERT ;
GO

E. Creating a contained database user with password


The following example creates a contained database user with password. This example can only be executed in a
contained database.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017. This example works in SQL Database if
DEFAULT_LANGUAGE is removed.

USE AdventureWorks2012 ;
GO
CREATE USER Carlo
WITH PASSWORD='RN92piTCh%$!~3K9844 Bl*'
, DEFAULT_LANGUAGE=[Brazilian]
, DEFAULT_SCHEMA=[dbo]
GO

F. Creating a contained database user for a domain login


The following example creates a contained database user for a login named Fritz in a domain named Contoso.
This example can only be executed in a contained database.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
USE AdventureWorks2012 ;
GO
CREATE USER [Contoso\Fritz] ;
GO

G. Creating a contained database user with a specific SID


The following example creates a SQL Server authenticated contained database user named CarmenW. This
example can only be executed in a contained database.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.

USE AdventureWorks2012 ;
GO
CREATE USER CarmenW WITH PASSWORD = 'a8ea v*(Rd##+'
, SID = 0x01050000000000090300000063FF0451A9E7664BA705B10E37DDC4B7;

H. Creating a user to copy encrypted data


The following example creates a user that can copy data that is protected by the Always Encrypted feature from
one set of tables, containing encrypted columns, to another set of tables with encrypted columns (in the same or
a different database). For more information, see Migrate Sensitive Data Protected by Always Encrypted.
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.

CREATE USER [Chin]
WITH
DEFAULT_SCHEMA = dbo
, ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = ON ;

Next steps
Once the user is created, consider adding the user to a database role using the ALTER ROLE statement.
You might also want to GRANT Object Permissions to the role so its members can access tables. For general information
about the SQL Server security model, see Permissions.
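For example, a minimal sketch of both steps; the role membership uses the fixed database role db_datareader, and the table name dbo.Orders is hypothetical.

ALTER ROLE db_datareader ADD MEMBER Mary;
GO
GRANT SELECT ON OBJECT::dbo.Orders TO Mary;
GO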

See Also
Create a Database User
sys.database_principals (Transact-SQL)
ALTER USER (Transact-SQL)
DROP USER (Transact-SQL)
CREATE LOGIN (Transact-SQL)
EVENTDATA (Transact-SQL)
Contained Databases
Connecting to SQL Database By Using Azure Active Directory Authentication
Getting Started with Database Engine Permissions
CREATE VIEW (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a virtual table whose contents (columns and rows) are defined by a query. Use this statement to create a
view of the data in one or more tables in the database. For example, a view can be used for the following
purposes:
To focus, simplify, and customize the perception each user has of the database.
As a security mechanism by allowing users to access data through the view, without granting the users
permissions to directly access the underlying base tables.
To provide a backward compatible interface to emulate a table whose schema has changed.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

CREATE [ OR ALTER ] VIEW [ schema_name . ] view_name [ ( column [ ,...n ] ) ]
[ WITH <view_attribute> [ ,...n ] ]
AS select_statement
[ WITH CHECK OPTION ]
[ ; ]

<view_attribute> ::=
{
[ ENCRYPTION ]
[ SCHEMABINDING ]
[ VIEW_METADATA ]
}

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

CREATE VIEW [ schema_name . ] view_name [ ( column_name [ ,...n ] ) ]
AS <select_statement>
[;]

<select_statement> ::=
[ WITH <common_table_expression> [ ,...n ] ]
SELECT <select_criteria>

Arguments
OR ALTER
Applies to: Azure SQL Database and SQL Server (starting with SQL Server 2016 (13.x) SP1).
Conditionally alters the view only if it already exists.
schema_name
Is the name of the schema to which the view belongs.
view_name
Is the name of the view. View names must follow the rules for identifiers. Specifying the view owner name is
optional.
column
Is the name to be used for a column in a view. A column name is required only when a column is derived from an
arithmetic expression, a function, or a constant; when two or more columns may otherwise have the same name,
typically because of a join; or when a column in a view is given a name different from that of the column from
which it is derived. Column names can also be assigned in the SELECT statement.
If column is not specified, the view columns acquire the same names as the columns in the SELECT statement.
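For example, the following sketch names a derived column in the view's column list; the view name is hypothetical, and the base table is from AdventureWorks2012.

CREATE VIEW dbo.EmployeeFullNames (BusinessEntityID, FullName)
AS
SELECT BusinessEntityID, FirstName + ' ' + LastName
FROM Person.Person;
GO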

NOTE
In the columns for the view, the permissions for a column name apply across a CREATE VIEW or ALTER VIEW statement,
regardless of the source of the underlying data. For example, if permissions are granted on the SalesOrderID column in a
CREATE VIEW statement, an ALTER VIEW statement can name the SalesOrderID column with a different column name,
such as OrderRef, and still have the permissions associated with the view using SalesOrderID.

AS
Specifies the actions the view is to perform.
select_statement
Is the SELECT statement that defines the view. The statement can use more than one table and other views.
Appropriate permissions are required to select from the objects referenced in the SELECT clause of the view that
is created.
A view does not have to be a simple subset of the rows and columns of one particular table. A view can be
created that uses more than one table or other views with a SELECT clause of any complexity.
In an indexed view definition, the SELECT statement must be a single table statement or a multitable JOIN with
optional aggregation.
The SELECT clauses in a view definition cannot include the following:
An ORDER BY clause, unless there is also a TOP clause in the select list of the SELECT statement

IMPORTANT
The ORDER BY clause is used only to determine the rows that are returned by the TOP or OFFSET clause in the view
definition. The ORDER BY clause does not guarantee ordered results when the view is queried, unless ORDER BY is
also specified in the query itself.

The INTO keyword
The OPTION clause
A reference to a temporary table or a table variable.
Because select_statement uses the SELECT statement, it is valid to use <join_hint> and <table_hint> hints
as specified in the FROM clause. For more information, see FROM (Transact-SQL) and SELECT (Transact-SQL).
Functions and multiple SELECT statements separated by UNION or UNION ALL can be used in
select_statement.
CHECK OPTION
Forces all data modification statements executed against the view to follow the criteria set within
select_statement. When a row is modified through a view, the WITH CHECK OPTION makes sure the
data remains visible through the view after the modification is committed.

NOTE
Any updates performed directly to a view's underlying tables are not verified against the view, even if CHECK OPTION is
specified.

ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Encrypts the entries in sys.syscomments that contain the text of the CREATE VIEW statement. Using WITH
ENCRYPTION prevents the view from being published as part of SQL Server replication.
SCHEMABINDING
Binds the view to the schema of the underlying table or tables. When SCHEMABINDING is specified, the base
table or tables cannot be modified in a way that would affect the view definition. The view definition itself must
first be modified or dropped to remove dependencies on the table that is to be modified. When you use
SCHEMABINDING, the select_statement must include the two-part names (schema.object) of tables, views, or
user-defined functions that are referenced. All referenced objects must be in the same database.
Views or tables that participate in a view created with the SCHEMABINDING clause cannot be dropped unless
that view is dropped or changed so that it no longer has schema binding. Otherwise, the Database Engine raises
an error. Also, executing ALTER TABLE statements on tables that participate in views that have schema binding
fail when these statements affect the view definition.
VIEW_METADATA
Specifies that the instance of SQL Server will return to the DB-Library, ODBC, and OLE DB APIs the metadata
information about the view, instead of the base table or tables, when browse-mode metadata is being requested
for a query that references the view. Browse-mode metadata is additional metadata that the instance of SQL
Server returns to these client-side APIs. This metadata enables the client-side APIs to implement updatable
client-side cursors. Browse-mode metadata includes information about the base table that the columns in the
result set belong to.
For views created with VIEW_METADATA, the browse-mode metadata returns the view name and not the base
table names when it describes columns from the view in the result set.
When a view is created by using WITH VIEW_METADATA, all its columns, except a timestamp column, are
updatable if the view has INSTEAD OF INSERT or INSTEAD OF UPDATE triggers. For more information about
updatable views, see Remarks.

Remarks
A view can be created only in the current database. The CREATE VIEW must be the first statement in a query
batch. A view can have a maximum of 1,024 columns.
When querying through a view, the Database Engine checks to make sure that all the database objects
referenced anywhere in the statement exist and that they are valid in the context of the statement, and that data
modification statements do not violate any data integrity rules. A check that fails returns an error message. A
successful check translates the action into an action against the underlying table or tables.
If a view depends on a table or view that was dropped, the Database Engine produces an error message when
anyone tries to use the view. If a new table or view is created and the table structure does not change from the
previous base table to replace the one dropped, the view again becomes usable. If the new table or view structure
changes, the view must be dropped and re-created.
If a view is not created with the SCHEMABINDING clause, sp_refreshview should be run when changes are
made to the objects underlying the view that affect the definition of the view. Otherwise, the view might produce
unexpected results when it is queried.
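For example, after altering a base table of a non-schema-bound view, a refresh might look like the following (the view name is illustrative):

EXEC sp_refreshview 'dbo.hiredate_view';
GO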
When a view is created, information about the view is stored in the following catalog views: sys.views,
sys.columns, and sys.sql_expression_dependencies. The text of the CREATE VIEW statement is stored in the
sys.sql_modules catalog view.
A query that uses an index on a view defined with numeric or float expressions may have a result that is
different from a similar query that does not use the index on the view. This difference may be caused by
rounding errors during INSERT, DELETE, or UPDATE actions on underlying tables.
The Database Engine saves the settings of SET QUOTED_IDENTIFIER and SET ANSI_NULLS when a view is
created. These original settings are used to parse the view when the view is used. Therefore, any client-session
settings for SET QUOTED_IDENTIFIER and SET ANSI_NULLS do not affect the view definition when the view is
accessed.

Updatable Views
You can modify the data of an underlying base table through a view, as long as the following conditions are true:
Any modifications, including UPDATE, INSERT, and DELETE statements, must reference columns from
only one base table.
The columns being modified in the view must directly reference the underlying data in the table columns.
The columns cannot be derived in any other way, such as through the following:
An aggregate function: AVG, COUNT, SUM, MIN, MAX, GROUPING, STDEV, STDEVP, VAR, and
VARP.
A computation. The column cannot be computed from an expression that uses other columns.
Columns that are formed by using the set operators UNION, UNION ALL, CROSS JOIN, EXCEPT,
and INTERSECT amount to a computation and are also not updatable.
The columns being modified are not affected by GROUP BY, HAVING, or DISTINCT clauses.
TOP is not used anywhere in the select_statement of the view together with the WITH CHECK OPTION
clause.
The previous restrictions apply to any subqueries in the FROM clause of the view, just as they apply to the
view itself. Generally, the Database Engine must be able to unambiguously trace modifications from the
view definition to one base table. For more information, see Modify Data Through a View.
If the previous restrictions prevent you from modifying data directly through a view, consider the
following options:
INSTEAD OF Triggers
INSTEAD OF triggers can be created on a view to make a view updatable. The INSTEAD OF trigger is
executed instead of the data modification statement on which the trigger is defined. This trigger lets the
user specify the set of actions that must happen to process the data modification statement. Therefore, if
an INSTEAD OF trigger exists for a view on a specific data modification statement (INSERT, UPDATE, or
DELETE), the corresponding view is updatable through that statement (a minimal sketch appears after this list).
For more information about INSTEAD OF triggers, see DML Triggers.
Partitioned Views
If the view is a partitioned view, the view is updatable, subject to certain restrictions. When it is needed,
the Database Engine distinguishes local partitioned views as the views in which all participating tables and
the view are on the same instance of SQL Server, and distributed partitioned views as the views in which
at least one of the tables in the view resides on a different or remote server.
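The following is a minimal sketch of the INSTEAD OF trigger option described above; the view, base tables, and columns are hypothetical.

CREATE TRIGGER dbo.CustomerView_Insert
ON dbo.CustomerView
INSTEAD OF INSERT
AS
BEGIN
    -- Route each column of the inserted rows to its own base table.
    INSERT INTO dbo.CustomerName (CustomerID, CustomerName)
    SELECT CustomerID, CustomerName FROM inserted;
    INSERT INTO dbo.CustomerAddress (CustomerID, City)
    SELECT CustomerID, City FROM inserted;
END;
GO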

Partitioned Views
A partitioned view is a view defined by a UNION ALL of member tables structured in the same way, but stored
separately as multiple tables in either the same instance of SQL Server or in a group of autonomous instances of
SQL Server servers, called federated database servers.

NOTE
The preferred method for partitioning data local to one server is through partitioned tables. For more information, see
Partitioned Tables and Indexes.

In designing a partitioning scheme, it must be clear what data belongs to each partition. For example, the data
for the Customers table is distributed in three member tables in three server locations: Customers_33 on
Server1 , Customers_66 on Server2 , and Customers_99 on Server3 .

A partitioned view on Server1 is defined in the following way:

--Partitioned view as defined on Server1


CREATE VIEW Customers
AS
--Select from local member table.
SELECT *
FROM CompanyData.dbo.Customers_33
UNION ALL
--Select from member table on Server2.
SELECT *
FROM Server2.CompanyData.dbo.Customers_66
UNION ALL
--Select from member table on Server3.
SELECT *
FROM Server3.CompanyData.dbo.Customers_99;

Generally, a view is said to be a partitioned view if it is of the following form:

SELECT <select_list1>
FROM T1
UNION ALL
SELECT <select_list2>
FROM T2
UNION ALL
...
SELECT <select_listn>
FROM Tn;

Conditions for Creating Partitioned Views


1. The select list

All columns in the member tables should be selected in the column list of the view definition.
The columns in the same ordinal position of each select list should be of the same type,
including collations. It is not sufficient for the columns to be implicitly convertible types, as is
generally the case for UNION.
Also, at least one column (for example <col> ) must appear in all the select lists in the same ordinal
position. This <col> should be defined in a way that the member tables T1, ..., Tn have CHECK
constraints C1, ..., Cn defined on <col> , respectively.
Constraint C1 defined on table T1 must be of the following form:

C1 ::= < simple_interval > [ OR < simple_interval > OR ...]

< simple_interval > ::=
< col > { < | > | <= | >= | = } < value >
| < col > BETWEEN < value1 > AND < value2 >
| < col > IN ( value_list )
| < col > { > | >= } < value1 > AND < col > { < | <= } < value2 >

The constraints should be in such a way that any specified value of <col> can satisfy, at most, one
of the constraints C1, ..., Cn so that the constraints should form a set of disjointed or
nonoverlapping intervals. The column <col> on which the disjointed constraints are defined is
called the partitioning column. Note that the partitioning column may have different names in the
underlying tables. The constraints should be in an enabled and trusted state for them to meet the
previously mentioned conditions of the partitioning column. If the constraints are disabled, re-
enable constraint checking by using the CHECK CONSTRAINT constraint_name option of ALTER
TABLE, and using the WITH CHECK option to validate them.
The following examples show valid sets of constraints:

{ [col < 10], [col between 11 and 20] , [col > 20] }
{ [col between 11 and 20], [col between 21 and 30], [col between 31 and 100] }

The same column cannot be used multiple times in the select list.
2. Partitioning column
The partitioning column is a part of the PRIMARY KEY of the table.
It cannot be a computed, identity, default, or timestamp column.
If there is more than one constraint on the same column in a member table, the Database Engine
ignores all the constraints and does not consider them when determining whether the view is a
partitioned view. To meet the conditions of the partitioned view, there should be only one
partitioning constraint on the partitioning column.
There are no restrictions on the updatability of the partitioning column.
3. Member tables, or underlying tables T1, ..., Tn

The tables can be either local tables or tables from other computers that are running SQL Server
that are referenced either through a four-part name or an OPENDATASOURCE- or
OPENROWSET-based name. The OPENDATASOURCE and OPENROWSET syntax can specify a
table name, but not a pass-through query. For more information, see OPENDATASOURCE
(Transact-SQL) and OPENROWSET (Transact-SQL).
If one or more of the member tables are remote, the view is called distributed partitioned view, and
additional conditions apply. They are described later in this section.
The same table cannot appear two times in the set of tables that are being combined with the
UNION ALL statement.
The member tables cannot have indexes created on computed columns in the table.
The member tables should have all PRIMARY KEY constraints on the same number of columns.
All member tables in the view should have the same ANSI padding setting. This can be set by using
either the user options option in sp_configure or the SET statement.

Conditions for Modifying Data in Partitioned Views


The following restrictions apply to statements that modify data in partitioned views:
The INSERT statement must supply values for all the columns in the view, even if the underlying member
tables have a DEFAULT constraint for those columns or if they allow for null values. For those member
table columns that have DEFAULT definitions, the statements cannot explicitly use the keyword DEFAULT.
The value being inserted into the partitioning column should satisfy at least one of the underlying
constraints; otherwise, the insert action will fail with a constraint violation.
UPDATE statements cannot specify the DEFAULT keyword as a value in the SET clause, even if the
column has a DEFAULT value defined in the corresponding member table.
Columns in the view that are an identity column in one or more of the member tables cannot be modified
by using an INSERT or UPDATE statement.
If one of the member tables contains a timestamp column, the data cannot be modified by using an
INSERT or UPDATE statement.
If one of the member tables contains a trigger or an ON UPDATE CASCADE/SET NULL/SET DEFAULT
or ON DELETE CASCADE/SET NULL/SET DEFAULT constraint, the view cannot be modified.
INSERT, UPDATE, and DELETE actions against a partitioned view are not allowed if there is a self-join
with the same view or with any of the member tables in the statement.
Bulk importing data into a partitioned view is unsupported by bcp or the BULK INSERT and INSERT ...
SELECT * FROM OPENROWSET(BULK...) statements. However, you can insert multiple rows into a
partitioned view by using the INSERT statement.

NOTE
To update a partitioned view, the user must have INSERT, UPDATE, and DELETE permissions on the member tables.

Additional Conditions for Distributed Partitioned Views


For distributed partitioned views (when one or more member tables are remote), the following additional
conditions apply:
A distributed transaction will be started to guarantee atomicity across all nodes affected by the update.
The XACT_ABORT SET option should be set to ON for INSERT, UPDATE, or DELETE statements to work.
Any columns in remote tables of type smallmoney that are referenced in a partitioned view are mapped
as money. Therefore, the corresponding columns (in the same ordinal position in the select list) in the
local tables must also be of type money.
Under database compatibility level 110 and higher, any columns in remote tables of type smalldatetime
that are referenced in a partitioned view are mapped as smalldatetime. Corresponding columns (in the
same ordinal position in the select list) in the local tables must be smalldatetime. This is a change in
behavior from earlier versions of SQL Server in which any columns in remote tables of type
smalldatetime that are referenced in a partitioned view are mapped as datetime and corresponding
columns in local tables must be of type datetime. For more information, see ALTER DATABASE
Compatibility Level (Transact-SQL).
Any linked server in the partitioned view cannot be a loopback linked server. This is a linked server that
points to the same instance of SQL Server.
The setting of the SET ROWCOUNT option is ignored for INSERT, UPDATE, and DELETE actions that
involve updatable partitioned views and remote tables.
When the member tables and partitioned view definition are in place, the SQL Server query optimizer
builds intelligent plans that use queries efficiently to access data from member tables. With the CHECK
constraint definitions, the query processor maps the distribution of key values across the member tables.
When a user issues a query, the query processor compares the map to the values specified in the WHERE
clause, and builds an execution plan with a minimal amount of data transfer between member servers.
Therefore, although some member tables may be located in remote servers, the instance of SQL Server
resolves distributed queries so that the amount of distributed data that has to be transferred is minimal.

Considerations for Replication


To create partitioned views on member tables that are involved in replication, the following considerations apply:
If the underlying tables are involved in merge replication or transactional replication with updating
subscriptions, the uniqueidentifier column should also be included in the select list.
Any INSERT actions into the partitioned view must provide a NEWID() value for the uniqueidentifier
column. Any UPDATE actions against the uniqueidentifier column must supply NEWID() as the value
because the DEFAULT keyword cannot be used.
The replication of updates made by using the view is the same as when tables are replicated in two
different databases: the tables are served by different replication agents and the order of the updates is
not guaranteed.

Permissions
Requires CREATE VIEW permission in the database and ALTER permission on the schema in which the view is
being created.

Examples
The following examples use the AdventureWorks 2012 or AdventureWorksDW database.
A. Using a simple CREATE VIEW
The following example creates a view by using a simple SELECT statement. A simple view is helpful when a
combination of columns is queried frequently. The data from this view comes from the HumanResources.Employee
and Person.Person tables of the AdventureWorks2012 database. The data provides name and hire date
information for the employees of Adventure Works Cycles. The view could be created for the person in charge of
tracking work anniversaries but without giving this person access to all the data in these tables.

CREATE VIEW hiredate_view
AS
SELECT p.FirstName, p.LastName, e.BusinessEntityID, e.HireDate
FROM HumanResources.Employee e
JOIN Person.Person AS p ON e.BusinessEntityID = p.BusinessEntityID ;
GO

B. Using WITH ENCRYPTION


The following example uses the WITH ENCRYPTION option and shows computed columns, renamed columns, and
multiple columns.
Applies to: SQL Server 2008 through SQL Server 2017 and SQL Database.

CREATE VIEW Purchasing.PurchaseOrderReject
WITH ENCRYPTION
AS
SELECT PurchaseOrderID, ReceivedQty, RejectedQty,
RejectedQty / ReceivedQty AS RejectRatio, DueDate
FROM Purchasing.PurchaseOrderDetail
WHERE RejectedQty / ReceivedQty > 0
AND DueDate > CONVERT(DATETIME,'20010630',101) ;
GO

C. Using WITH CHECK OPTION


The following example shows a view named SeattleOnly that references five tables and allows for data
modifications to apply only to employees who live in Seattle.

CREATE VIEW dbo.SeattleOnly
AS
SELECT p.LastName, p.FirstName, e.JobTitle, a.City, sp.StateProvinceCode
FROM HumanResources.Employee e
INNER JOIN Person.Person p
ON p.BusinessEntityID = e.BusinessEntityID
INNER JOIN Person.BusinessEntityAddress bea
ON bea.BusinessEntityID = e.BusinessEntityID
INNER JOIN Person.Address a
ON a.AddressID = bea.AddressID
INNER JOIN Person.StateProvince sp
ON sp.StateProvinceID = a.StateProvinceID
WHERE a.City = 'Seattle'
WITH CHECK OPTION ;
GO

D. Using built-in functions within a view


The following example shows a view definition that includes a built-in function. When you use functions, you
must specify a column name for the derived column.

CREATE VIEW Sales.SalesPersonPerform
AS
SELECT TOP (100) SalesPersonID, SUM(TotalDue) AS TotalSales
FROM Sales.SalesOrderHeader
WHERE OrderDate > CONVERT(DATETIME,'20001231',101)
GROUP BY SalesPersonID;
GO

E. Using partitioned data


The following example uses tables named SUPPLY1 , SUPPLY2 , SUPPLY3 , and SUPPLY4 . These tables correspond to
the supplier tables from four offices, located in different countries/regions.
--Create the tables and insert the values.
CREATE TABLE dbo.SUPPLY1 (
supplyID INT PRIMARY KEY CHECK (supplyID BETWEEN 1 and 150),
supplier CHAR(50)
);
CREATE TABLE dbo.SUPPLY2 (
supplyID INT PRIMARY KEY CHECK (supplyID BETWEEN 151 and 300),
supplier CHAR(50)
);
CREATE TABLE dbo.SUPPLY3 (
supplyID INT PRIMARY KEY CHECK (supplyID BETWEEN 301 and 450),
supplier CHAR(50)
);
CREATE TABLE dbo.SUPPLY4 (
supplyID INT PRIMARY KEY CHECK (supplyID BETWEEN 451 and 600),
supplier CHAR(50)
);
GO
INSERT dbo.SUPPLY1 VALUES ('1', 'CaliforniaCorp'), ('5', 'BraziliaLtd');
INSERT dbo.SUPPLY2 VALUES ('231', 'FarEast'), ('280', 'NZ');
INSERT dbo.SUPPLY3 VALUES ('321', 'EuroGroup'), ('442', 'UKArchip');
INSERT dbo.SUPPLY4 VALUES ('475', 'India'), ('521', 'Afrique');
GO
--Create the view that combines all supplier tables.
CREATE VIEW dbo.all_supplier_view
WITH SCHEMABINDING
AS
SELECT supplyID, supplier
FROM dbo.SUPPLY1
UNION ALL
SELECT supplyID, supplier
FROM dbo.SUPPLY2
UNION ALL
SELECT supplyID, supplier
FROM dbo.SUPPLY3
UNION ALL
SELECT supplyID, supplier
FROM dbo.SUPPLY4;

Examples: SQL Data Warehouse and Parallel Data Warehouse


F. Creating a simple view
The following example creates a view by selecting only some of the columns from the source table.

CREATE VIEW DimEmployeeBirthDates AS
SELECT FirstName, LastName, BirthDate
FROM DimEmployee;

G. Create a view by joining two tables


The following example creates a view by using a SELECT statement with an OUTER JOIN . The results of the join
query populate the view.

CREATE VIEW view1
AS
SELECT fis.CustomerKey, fis.ProductKey, fis.OrderDateKey,
fis.SalesTerritoryKey, dst.SalesTerritoryRegion
FROM FactInternetSales AS fis
LEFT OUTER JOIN DimSalesTerritory AS dst
ON (fis.SalesTerritoryKey=dst.SalesTerritoryKey);
See Also
ALTER TABLE (Transact-SQL)
ALTER VIEW (Transact-SQL)
DELETE (Transact-SQL)
DROP VIEW (Transact-SQL)
INSERT (Transact-SQL)
Create a Stored Procedure
sys.dm_sql_referenced_entities (Transact-SQL)
sys.dm_sql_referencing_entities (Transact-SQL)
sp_help (Transact-SQL)
sp_helptext (Transact-SQL)
sp_refreshview (Transact-SQL)
sp_rename (Transact-SQL)
sys.views (Transact-SQL)
UPDATE (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE WORKLOAD GROUP (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a Resource Governor workload group and associates the workload group with a Resource Governor
resource pool. Resource Governor is not available in every edition of Microsoft SQL Server. For a list of features
that are supported by the editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.
Transact-SQL Syntax Conventions.

Syntax
CREATE WORKLOAD GROUP group_name
[ WITH
( [ IMPORTANCE = { LOW | MEDIUM | HIGH } ]
[ [ , ] REQUEST_MAX_MEMORY_GRANT_PERCENT = value ]
[ [ , ] REQUEST_MAX_CPU_TIME_SEC = value ]
[ [ , ] REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value ]
[ [ , ] MAX_DOP = value ]
[ [ , ] GROUP_MAX_REQUESTS = value ] )
]
[ USING {
[ pool_name | "default" ]
[ [ , ] EXTERNAL external_pool_name | "default" ] ]
} ]
[ ; ]

Arguments
group_name
Is the user-defined name for the workload group. group_name is alphanumeric, can be up to 128 characters,
must be unique within an instance of SQL Server, and must comply with the rules for identifiers.
IMPORTANCE = { LOW | MEDIUM | HIGH }
Specifies the relative importance of a request in the workload group. Importance is one of the following, with
MEDIUM being the default:
LOW
MEDIUM (default)
HIGH

NOTE
Internally each importance setting is stored as a number that is used for calculations.

IMPORTANCE is local to the resource pool; workload groups of different importance inside the same resource
pool affect each other, but do not affect workload groups in another resource pool.
REQUEST_MAX_MEMORY_GRANT_PERCENT = value
Specifies the maximum amount of memory that a single request can take from the pool. This percentage is
relative to the resource pool size specified by MAX_MEMORY_PERCENT.
NOTE
The amount specified only refers to query execution grant memory.

value must be 0 or a positive integer. The allowed range for value is from 0 through 100. The default setting for
value is 25.
Note the following:
Setting value to 0 prevents queries with SORT and HASH JOIN operations in user-defined workload
groups from running.
We do not recommend setting value greater than 70 because the server may be unable to set aside
enough free memory if other concurrent queries are running. This may eventually lead to query time-out
error 8645.

NOTE
If the query memory requirements exceed the limit that is specified by this parameter, the server does the following:
For user-defined workload groups, the server tries to reduce the query degree of parallelism until the memory requirement
falls under the limit, or until the degree of parallelism equals 1. If the query memory requirement is still greater than the
limit, error 8657 occurs.
For internal and default workload groups, the server permits the query to obtain the required memory.
Be aware that both cases are subject to time-out error 8645 if the server has insufficient physical memory.

REQUEST_MAX_CPU_TIME_SEC = value
Specifies the maximum amount of CPU time, in seconds, that a request can use. value must be 0 or a positive
integer. The default setting for value is 0, which means unlimited.

NOTE
By default, Resource Governor will not prevent a request from continuing if the maximum time is exceeded. However, an
event will be generated. For more information, see CPU Threshold Exceeded Event Class.

IMPORTANT
Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x) CU3, and using trace flag 2422, Resource Governor
will abort a request when the maximum time is exceeded.

REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value
Specifies the maximum time, in seconds, that a query can wait for a memory grant (work buffer memory) to
become available.

NOTE
A query does not always fail when memory grant time-out is reached. A query will only fail if there are too many
concurrent queries running. Otherwise, the query may only get the minimum memory grant, resulting in reduced query
performance.

value must be 0 or a positive integer. The default setting for value, 0, uses an internal calculation based on query
cost to determine the maximum time.
MAX_DOP = value
Specifies the maximum degree of parallelism (DOP) for parallel requests. value must be 0 or a positive integer.
The allowed range for value is from 0 through 64. The default setting for value, 0, uses the global setting.
MAX_DOP is handled as follows:
MAX_DOP as a query hint is effective as long as it does not exceed workload group MAX_DOP. If the
MAXDOP query hint value exceeds the value that is configured by using the Resource Governor, the
Database Engine uses the Resource Governor MAXDOP value.
MAX_DOP as a query hint always overrides sp_configure 'max degree of parallelism'.
Workload group MAX_DOP overrides sp_configure 'max degree of parallelism'.
If the query is marked as serial at compile time, it cannot be changed back to parallel at run time
regardless of the workload group or sp_configure setting.
After DOP is configured, it can only be lowered on grant memory pressure. Workload group
reconfiguration is not visible while waiting in the grant memory queue.
GROUP_MAX_REQUESTS = value
Specifies the maximum number of simultaneous requests that are allowed to execute in the workload
group. value must be a 0 or a positive integer. The default setting for value, 0, allows unlimited requests.
When the maximum concurrent requests are reached, a user in that group can log in, but is placed in a
wait state until concurrent requests are dropped below the value specified.
USING { pool_name | "default" }
Associates the workload group with the user-defined resource pool identified by pool_name. This in effect
puts the workload group in the resource pool. If pool_name is not provided, or if the USING argument is
not used, the workload group is put in the predefined Resource Governor default pool.
"default" is a reserved word and when used with USING, must be enclosed by quotation marks ("") or
brackets ([]).

NOTE
Predefined workload groups and resource pools all use lower case names, such as "default". This should be taken into
account for servers that use case-sensitive collation. Servers with case-insensitive collation, such as
SQL_Latin1_General_CP1_CI_AS, will treat "default" and "Default" as the same.

EXTERNAL external_pool_name | "default"
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
A workload group can specify an external resource pool. You can define a workload group and associate it with two
pools:
A resource pool for SQL Server workloads and queries
An external resource pool for external processes. For more information, see sp_execute_external_script
(Transact-SQL ).
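
The following sketch shows the association (the pool and group names are illustrative; the external pool settings you need depend on your external script workload):

CREATE EXTERNAL RESOURCE POOL epRScripts
WITH ( AFFINITY CPU = AUTO );
GO
CREATE WORKLOAD GROUP wgRScripts
USING "default", EXTERNAL epRScripts;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
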

Remarks
REQUEST_MEMORY_GRANT_PERCENT: Index creation is allowed to use more workspace memory than what
is initially granted for improved performance. This special handling is supported by Resource Governor in SQL
Server 2017. However, the initial grant and any additional memory grant are limited by resource pool and
workload group settings.
Index Creation on a Partitioned Table
The memory consumed by index creation on non-aligned partitioned table is proportional to the number of
partitions involved. If the total required memory exceeds the per-query limit
(REQUEST_MAX_MEMORY_GRANT_PERCENT) imposed by the Resource Governor workload group setting,
this index creation may fail to execute. Because the "default" workload group allows a query to exceed the per-
query limit with the minimum required memory, the user may be able to run the same index creation in the
"default" workload group, if the "default" resource pool has enough total memory configured to run such a query.

Permissions
Requires CONTROL SERVER permission.

Examples
The following example shows how to create a workload group named newReports . It uses the Resource
Governor default settings and is in the Resource Governor default pool. The example specifies the default pool,
but this is not required.

CREATE WORKLOAD GROUP newReports
USING "default";
GO

See Also
ALTER WORKLOAD GROUP (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL )
CREATE RESOURCE POOL (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
CREATE XML INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an XML index on a specified table. An index can be created before there is data in the table. XML indexes
can be created on tables in another database by specifying a qualified database name.

NOTE
To create a relational index, see CREATE INDEX (Transact-SQL). For information about how to create a spatial index, see
CREATE SPATIAL INDEX (Transact-SQL).

Transact-SQL Syntax Conventions

Syntax
Create XML Index
CREATE [ PRIMARY ] XML INDEX index_name
ON <object> ( xml_column_name )
[ USING XML INDEX xml_index_name
[ FOR { VALUE | PATH | PROPERTY } ] ]
[ WITH ( <xml_index_option> [ ,...n ] ) ]
[ ; ]

<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_name
}

<xml_index_option> ::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
}

Arguments
[PRIMARY ] XML
Creates an XML index on the specified xml column. When PRIMARY is specified, a clustered index is created
with the clustered key formed from the clustering key of the user table and an XML node identifier. Each table can
have up to 249 XML indexes. Note the following when you create an XML index:
A clustered index must exist on the primary key of the user table.
The clustering key of the user table is limited to 15 columns.
Each xml column in a table can have one primary XML index and multiple secondary XML indexes.
A primary XML index on an xml column must exist before a secondary XML index can be created on the
column.
An XML index can only be created on a single xml column. You cannot create an XML index on a non-xml
column, nor can you create a relational index on an xml column.
You cannot create an XML index, either primary or secondary, on an xml column in a view, on a table-
valued variable with xml columns, or on xml type variables.
You cannot create a primary XML index on a computed xml column.
The SET option settings must be the same as those required for indexed views and computed column
indexes. Specifically, the option ARITHABORT must be set to ON when an XML index is created and when
inserting, deleting, or updating values in the xml column.
For more information, see XML Indexes (SQL Server).
index_name
Is the name of the index. Index names must be unique within a table but do not have to be unique within a
database. Index names must follow the rules of identifiers.
Primary XML index names cannot start with the following characters: #, ##, @, or @@.
xml_column_name
Is the xml column on which the index is based. Only one xml column can be specified in a single XML
index definition; however, multiple secondary XML indexes can be created on an xml column.
USING XML INDEX xml_index_name
Specifies the primary XML index to use in creating a secondary XML index.
FOR { VALUE | PATH | PROPERTY }
Specifies the type of secondary XML index.
VALUE
Creates a secondary XML index on columns where key columns are (node value and path) of the primary
XML index.
PATH
Creates a secondary XML index on columns built on path values and node values in the primary XML
index. In the PATH secondary index, the path and node values are key columns that allow efficient seeks
when searching for paths.
PROPERTY
Creates a secondary XML index on columns (PK, path and node value) of the primary XML index where
PK is the primary key of the base table.
<object>::=
Is the fully qualified or nonfully qualified object to be indexed.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to be indexed.
<xml_index_option> ::=
Specifies the options to use when you create the index.
PAD_INDEX = { ON | OFF }
Specifies index padding. The default is OFF.
ON
The percentage of free space that is specified by fillfactor is applied to the intermediate-level pages of the
index.
OFF or fillfactor is not specified
The intermediate-level pages are filled to near capacity, leaving sufficient space for at least one row of the
maximum size the index can have, considering the set of keys on the intermediate pages.
The PAD_INDEX option is useful only when FILLFACTOR is specified, because PAD_INDEX uses the
percentage specified by FILLFACTOR. If the percentage specified for FILLFACTOR is not large enough to
allow for one row, the Database Engine internally overrides the percentage to allow for the minimum. The
number of rows on an intermediate index page is never less than two, regardless of how low the value of
fillfactor is.
FILLFACTOR =fillfactor
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index
page during index creation or rebuild. fillfactor must be an integer value from 1 to 100. The default is 0. If
fillfactor is 100 or 0, the Database Engine creates indexes with leaf pages filled to capacity.

NOTE
Fill factor values 0 and 100 are the same in all respects.

The FILLFACTOR setting applies only when the index is created or rebuilt. The Database Engine does not
dynamically keep the specified percentage of empty space in the pages. To view the fill factor setting, use the
sys.indexes catalog view.

IMPORTANT
Creating a clustered index with a FILLFACTOR less than 100 affects the amount of storage space the data occupies because
the Database Engine redistributes the data when it creates the clustered index.

For more information, see Specify Fill Factor for an Index.


SORT_IN_TEMPDB = { ON | OFF }
Specifies whether to store temporary sort results in tempdb. The default is OFF.
ON
The intermediate sort results that are used to build the index are stored in tempdb. This may reduce the time
required to create an index if tempdb is on a different set of disks than the user database. However, this increases
the amount of disk space that is used during the index build.
OFF
The intermediate sort results are stored in the same database as the index.
In addition to the space required in the user database to create the index, tempdb must have about the same
amount of additional space to hold the intermediate sort results. For more information, see SORT_IN_TEMPDB
Option For Indexes.
IGNORE_DUP_KEY =OFF
Has no effect for XML indexes because the index type is never unique. Do not set this option to ON, or else an
error is raised.
DROP_EXISTING = { ON | OFF }
Specifies that the named, preexisting XML index is dropped and rebuilt. The default is OFF.
ON
The existing index is dropped and rebuilt. The index name specified must be the same as a currently existing
index; however, the index definition can be modified. For example, you can specify different columns, sort order,
partition scheme, or index options.
OFF
An error is displayed if the specified index name already exists.
The index type cannot be changed by using DROP_EXISTING. Also, a primary XML index cannot be redefined as
a secondary XML index, or vice versa.
ONLINE =OFF
Specifies that underlying tables and associated indexes are not available for queries and data modification during
the index operation. In this version of SQL Server, online index builds are not supported for XML indexes. If this
option is set to ON for an XML index, an error is raised. Either omit the ONLINE option or set ONLINE to OFF.
An offline index operation that creates, rebuilds, or drops an XML index acquires a schema modification (Sch-M)
lock on the table. This prevents all user access to the underlying table for the duration of the operation.

NOTE
Online index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016.

ALLOW_ROW_LOCKS = { ON | OFF }
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when accessing the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
MAXDOP =max_degree_of_parallelism
Overrides the Configure the max degree of parallelism Server Configuration Option configuration option for the
duration of the index operation. Use MAXDOP to limit the number of processors used in a parallel plan execution.
The maximum is 64 processors.
IMPORTANT
Although the MAXDOP option is syntactically supported for all XML indexes, for a primary XML index, CREATE XML INDEX
uses only a single processor.

max_degree_of_parallelism can be:
1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel index operation to the specified number or fewer
based on the current system workload.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.

NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016.

Remarks
Computed columns derived from xml data types can be indexed either as a key or included nonkey column as
long as the computed column data type is allowable as an index key column or nonkey column. You cannot create
a primary XML index on a computed xml column.
To view information about XML indexes, use the sys.xml_indexes catalog view.
For more information about XML indexes, see XML Indexes (SQL Server).

Additional Remarks on Index Creation


For more information about index creation, see the "Remarks" section in CREATE INDEX (Transact-SQL ).

Examples
A. Creating a primary XML index
The following example creates a primary XML index on the CatalogDescription column in the
Production.ProductModel table.

USE AdventureWorks2012;
GO
IF EXISTS (SELECT * FROM sys.indexes
WHERE name = N'PXML_ProductModel_CatalogDescription')
DROP INDEX PXML_ProductModel_CatalogDescription
ON Production.ProductModel;
GO
CREATE PRIMARY XML INDEX PXML_ProductModel_CatalogDescription
ON Production.ProductModel (CatalogDescription);
GO

B. Creating a secondary XML index


The following example creates a secondary XML index on the CatalogDescription column in the
Production.ProductModel table.

USE AdventureWorks2012;
GO
IF EXISTS (SELECT name FROM sys.indexes
WHERE name = N'IXML_ProductModel_CatalogDescription_Path')
DROP INDEX IXML_ProductModel_CatalogDescription_Path
ON Production.ProductModel;
GO
CREATE XML INDEX IXML_ProductModel_CatalogDescription_Path
ON Production.ProductModel (CatalogDescription)
USING XML INDEX PXML_ProductModel_CatalogDescription FOR PATH ;
GO
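
The remaining two secondary index types follow the same pattern. A minimal sketch (the index names are illustrative, reusing the primary XML index from example A):

CREATE XML INDEX IXML_ProductModel_CatalogDescription_Value
ON Production.ProductModel (CatalogDescription)
USING XML INDEX PXML_ProductModel_CatalogDescription FOR VALUE;
GO
CREATE XML INDEX IXML_ProductModel_CatalogDescription_Property
ON Production.ProductModel (CatalogDescription)
USING XML INDEX PXML_ProductModel_CatalogDescription FOR PROPERTY;
GO
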

See Also
ALTER INDEX (Transact-SQL )
CREATE INDEX (Transact-SQL )
CREATE PARTITION FUNCTION (Transact-SQL )
CREATE PARTITION SCHEME (Transact-SQL )
CREATE SPATIAL INDEX (Transact-SQL )
CREATE STATISTICS (Transact-SQL )
CREATE TABLE (Transact-SQL )
Data Types (Transact-SQL )
DBCC SHOW_STATISTICS (Transact-SQL )
DROP INDEX (Transact-SQL )
XML Indexes (SQL Server)
sys.indexes (Transact-SQL )
sys.index_columns (Transact-SQL )
sys.xml_indexes (Transact-SQL )
EVENTDATA (Transact-SQL )
XML Indexes (SQL Server)
CREATE XML INDEX (Selective XML Indexes)
5/3/2018 • 2 min to read • Edit Online

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new secondary selective XML index on a single path that is already indexed by an existing selective XML
index. You can also create primary selective XML indexes. For information, see Create, Alter, and Drop Selective
XML Indexes.
Transact-SQL Syntax Conventions

Syntax
CREATE XML INDEX index_name
ON <table_object> ( xml_column_name )
USING XML INDEX sxi_index_name
FOR ( <xquery_or_sql_values_path> )
[WITH ( <index_options> )]

<table_object> ::=
{ [database_name. [schema_name ] . | schema_name. ] table_name }

<xquery_or_sql_values_path>::=
<path_name>

<path_name> ::=
character string literal

<xmlnamespace_list> ::=
<xmlnamespace_item> [, <xmlnamespace_list>]

<xmlnamespace_item> ::=
xmlnamespace_uri AS xmlnamespace_prefix

<index_options> ::=
(
| PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
)

Arguments
index_name
Is the name of the new index to create. Index names must be unique within a table, but do not have to be unique
within a database. Index names must follow the rules of identifiers.
ON <table_object> Is the table that contains the XML column to index. You can use the following formats:
database_name.schema_name.table_name

database_name..table_name

schema_name.table_name

xml_column_name
Is the name of the XML column that contains the path to index.
USING XML INDEX sxi_index_name
Is the name of the existing selective XML index.
FOR ( <xquery_or_sql_values_path> ) Is the name of the indexed path on which to create the secondary
selective XML index. The path to index is the assigned name from the CREATE SELECTIVE XML INDEX
statement. For more information, see CREATE SELECTIVE XML INDEX (Transact-SQL ).
WITH <index_options> For information about the index options, see CREATE XML INDEX.

Remarks
There can be multiple secondary selective XML indexes on every XML column in the base table.

Limitations and Restrictions


A selective XML index on an XML column must exist before secondary selective XML indexes can be created on
the column.

Security
Permissions
Requires ALTER permission on the table or view. User must be a member of the sysadmin fixed server role or the
db_ddladmin and db_owner fixed database roles.

Examples
The following example creates a secondary selective XML index on the path pathabc . The path to index is the
assigned name from the CREATE SELECTIVE XML INDEX (Transact-SQL ).

CREATE XML INDEX filt_sxi_index_c
ON Tbl(xmlcol)
USING XML INDEX sxi_index
FOR ( pathabc );
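
The example assumes that the table and the primary selective XML index already exist. A minimal sketch of that setup (the table definition, path, and XQuery type are illustrative):

CREATE TABLE Tbl (id int PRIMARY KEY, xmlcol xml);
GO
CREATE SELECTIVE XML INDEX sxi_index
ON Tbl(xmlcol)
FOR
(
    pathabc = '/a/b/c' AS XQUERY 'xs:string'
);
GO
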

See Also
Selective XML Indexes (SXI)
Create, Alter, and Drop Secondary Selective XML Indexes
CREATE XML SCHEMA COLLECTION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Imports the schema components into a database.
Transact-SQL Syntax Conventions

Syntax
CREATE XML SCHEMA COLLECTION [ <relational_schema>. ]sql_identifier AS Expression

Arguments
relational_schema
Identifies the relational schema name. If not specified, default relational schema is assumed.
sql_identifier
Is the SQL identifier for the XML schema collection.
Expression
Is a string constant or scalar variable. Is varchar, varbinary, nvarchar, or xml type.

Remarks
You can also add new namespaces to the collection or add new components to existing namespaces in the
collection by using ALTER XML SCHEMA COLLECTION.
To remove collections, use DROP XML SCHEMA COLLECTION (Transact-SQL ).

Permissions
To create an XML SCHEMA COLLECTION requires at least one of the following sets of permissions:
CONTROL permission on the server
ALTER ANY DATABASE permission on the server
ALTER permission on the database
CONTROL permission in the database
ALTER ANY SCHEMA permission and CREATE XML SCHEMA COLLECTION permission in the database
ALTER or CONTROL permission on the relational schema and CREATE XML SCHEMA COLLECTION
permission in the database

Examples
A. Creating XML schema collection in the database
The following example creates the XML schema collection ManuInstructionsSchemaCollection . The collection has
only one schema namespace.

-- Create a sample database in which to load the XML schema collection.
CREATE DATABASE SampleDB;
GO
USE SampleDB;
GO
CREATE XML SCHEMA COLLECTION ManuInstructionsSchemaCollection AS
N'<?xml version="1.0" encoding="UTF-16"?>
<xsd:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"
xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"
elementFormDefault="qualified"
attributeFormDefault="unqualified"
xmlns:xsd="http://www.w3.org/2001/XMLSchema" >

<xsd:complexType name="StepType" mixed="true" >
<xsd:choice minOccurs="0" maxOccurs="unbounded" >
<xsd:element name="tool" type="xsd:string" />
<xsd:element name="material" type="xsd:string" />
<xsd:element name="blueprint" type="xsd:string" />
<xsd:element name="specs" type="xsd:string" />
<xsd:element name="diag" type="xsd:string" />
</xsd:choice>
</xsd:complexType>

<xsd:element name="root">
<xsd:complexType mixed="true">
<xsd:sequence>
<xsd:element name="Location" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType mixed="true">
<xsd:sequence>
<xsd:element name="step" type="StepType" minOccurs="1" maxOccurs="unbounded" />
</xsd:sequence>
<xsd:attribute name="LocationID" type="xsd:integer" use="required"/>
<xsd:attribute name="SetupHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="MachineHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="LaborHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="LotSize" type="xsd:decimal" use="optional"/>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>' ;
GO
-- Verify - list of collections in the database.
SELECT *
FROM sys.xml_schema_collections;
-- Verify - list of namespaces in the database.
SELECT name
FROM sys.xml_schema_namespaces;

-- Use it. Create a typed xml variable. Note collection name specified.
DECLARE @x xml (ManuInstructionsSchemaCollection);
GO
--Or create a typed xml column.
CREATE TABLE T (
i int primary key,
x xml (ManuInstructionsSchemaCollection));
GO
-- Clean up
DROP TABLE T;
GO
DROP XML SCHEMA COLLECTION ManuInstructionsSchemaCollection;
Go
USE master;
GO
DROP DATABASE SampleDB;

Alternatively, you can assign the schema collection to a variable and specify the variable in the
CREATE XML SCHEMA COLLECTION statement as follows:

DECLARE @MySchemaCollection nvarchar(max);
SET @MySchemaCollection = N' copy the schema collection here';
CREATE XML SCHEMA COLLECTION MyCollection AS @MySchemaCollection;

The variable in the example is of nvarchar(max) type. The variable can also be of xml data type, in which case, it is
implicitly converted to a string.
For more information, see View a Stored XML Schema Collection.
You may store schema collections in an xml type column. In this case, to create XML schema collection, perform
the following:
1. Retrieve the schema collection from the column by using a SELECT statement and assign it to a variable of
xml type, or a varchar type.
2. Specify the variable name in the CREATE XML SCHEMA COLLECTION statement.
The CREATE XML SCHEMA COLLECTION statement stores only the schema components that SQL Server
understands; not everything in the XML schema is stored in the database. Therefore, if you want the XML
schema collection back exactly the way it was supplied, we recommend that you save your XML schemas
in a database column or some other folder on your computer.
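
A minimal sketch of those two steps (the table and column names below are hypothetical):

DECLARE @schemaSource xml;
SELECT @schemaSource = SchemaCol
FROM dbo.SchemaStore      -- hypothetical table with an xml column holding the schema
WHERE SchemaName = N'ManuInstructions';
CREATE XML SCHEMA COLLECTION MyCollectionFromColumn AS @schemaSource;
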
B. Specifying multiple schema namespaces in a schema collection
You can specify multiple XML schemas when you create an XML schema collection. For example:

CREATE XML SCHEMA COLLECTION MyCollection AS N'
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<!-- Contents of schema here -->
</xsd:schema>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<!-- Contents of schema here -->
</xsd:schema>';

The following example creates the XML schema collection ProductDescriptionSchemaCollection that includes two
XML schema namespaces.
CREATE XML SCHEMA COLLECTION ProductDescriptionSchemaCollection AS
'<xsd:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain"
xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain"
elementFormDefault="qualified"
xmlns:xsd="http://www.w3.org/2001/XMLSchema" >
<xsd:element name="Warranty" >
<xsd:complexType>
<xsd:sequence>
<xsd:element name="WarrantyPeriod" type="xsd:string" />
<xsd:element name="Description" type="xsd:string" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
<xs:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription"
xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription"
elementFormDefault="qualified"
xmlns:mstns="http://tempuri.org/XMLSchema.xsd"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:wm="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain" >
<xs:import
namespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain" />
<xs:element name="ProductDescription" type="ProductDescription" />
<xs:complexType name="ProductDescription">
<xs:sequence>
<xs:element name="Summary" type="Summary" minOccurs="0" />
</xs:sequence>
<xs:attribute name="ProductModelID" type="xs:string" />
<xs:attribute name="ProductModelName" type="xs:string" />
</xs:complexType>
<xs:complexType name="Summary" mixed="true" >
<xs:sequence>
<xs:any processContents="skip" namespace="http://www.w3.org/1999/xhtml" minOccurs="0"
maxOccurs="unbounded" />
</xs:sequence>
</xs:complexType>
</xs:schema>'
;
GO -- Clean up
DROP XML SCHEMA COLLECTION ProductDescriptionSchemaCollection;
GO

C. Importing a schema that does not specify a target namespace


If a schema that does not contain a targetNamespace attribute is imported into a collection, its components are
associated with the empty string target namespace, as shown in the following example. Note that if more than one
schema imported into the collection lacks a target namespace, all of their (potentially unrelated) components are
associated with the same default empty string namespace.
-- Create a collection that contains a schema with no target namespace.
CREATE XML SCHEMA COLLECTION MySampleCollection AS '
<schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:ns="http://ns">
<element name="e" type="dateTime"/>
</schema>';
go
-- Query will return the names of all the collections that
--contain a schema with no target namespace.
SELECT sys.xml_schema_collections.name
FROM sys.xml_schema_collections
JOIN sys.xml_schema_namespaces
ON sys.xml_schema_collections.xml_collection_id =
sys.xml_schema_namespaces.xml_collection_id
WHERE sys.xml_schema_namespaces.name='';

D. Using an XML schema collection and batches


A schema collection cannot be referenced in the same batch where it is created. If you try to reference a collection
in the same batch where it was created, you will get an error saying the collection does not exist. The following
example works; however, if you remove GO and, therefore, try to reference the XML schema collection to type an
xml variable in the same batch, it will return an error.

CREATE XML SCHEMA COLLECTION mySC AS '
<schema xmlns="http://www.w3.org/2001/XMLSchema">
<element name="root" type="string"/>
</schema>
';
GO
CREATE TABLE T (Col1 xml (mySC));
GO

See Also
ALTER XML SCHEMA COLLECTION (Transact-SQL )
DROP XML SCHEMA COLLECTION (Transact-SQL )
EVENTDATA (Transact-SQL )
Compare Typed XML to Untyped XML
DROP XML SCHEMA COLLECTION (Transact-SQL )
Requirements and Limitations for XML Schema Collections on the Server
Collations

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Is a clause that can be applied to a database definition or a column definition to define the collation, or to a
character string expression to apply a collation cast.

IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details on all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax
COLLATE { <collation_name> | database_default }
<collation_name> :: =
{ Windows_collation_name } | { SQL_collation_name }

Arguments
collation_name
Is the name of the collation to be applied to the expression, column definition, or database definition.
collation_name can be only a specified Windows_collation_name or a SQL_collation_name. collation_name must
be a literal value. collation_name cannot be represented by a variable or expression.
Windows_collation_name is the collation name for a Windows Collation Name.
SQL_collation_name is the collation name for a SQL Server Collation Name.
When applying a collation at the database definition level, Unicode-only Windows collations cannot be used with
the COLLATE clause.
database_default
Causes the COLLATE clause to inherit the collation of the current database.

Remarks
The COLLATE clause can be specified at several levels. These include the following:
1. Creating or altering a database.
You can use the COLLATE clause of the CREATE DATABASE or ALTER DATABASE statement to specify
the default collation of the database. You can also specify a collation when you create a database using
SQL Server Management Studio. If you do not specify a collation, the database is assigned the default
collation of the instance of SQL Server.
NOTE
Windows Unicode-only collations can only be used with the COLLATE clause to apply collations to the nchar,
nvarchar, and ntext data types on column-level and expression-level data; they cannot be used with the COLLATE
clause to change the collation of a database or server instance.

2. Creating or altering a table column.


You can specify collations for each character string column using the COLLATE clause of the CREATE
TABLE or ALTER TABLE statement. You can also specify a collation when you create a table using SQL
Server Management Studio. If you do not specify a collation, the column is assigned the default collation of
the database.
You can also use the database_default option in the COLLATE clause to specify that a column in a
temporary table use the collation default of the current user database for the connection instead of
tempdb.
3. Casting the collation of an expression.
You can use the COLLATE clause to cast a character expression to a certain collation. Character literals
and variables are assigned the default collation of the current database. Column references are assigned
the definition collation of the column.
The collation of an identifier depends on the level at which it is defined. Identifiers of instance-level objects,
such as logins and database names, are assigned the default collation of the instance. Identifiers of objects
within a database, such as tables, views, and column names, are assigned the default collation of the
database. For example, two tables with names different only in case may be created in a database with
case-sensitive collation, but may not be created in a database with case-insensitive collation. For more
information, see Database Identifiers.
Variables, GOTO labels, temporary stored procedures, and temporary tables can be created when the
connection context is associated with one database, and then referenced when the context has been
switched to another database. The identifiers for variables, GOTO labels, temporary stored procedures, and
temporary tables are in the default collation of the server instance.
The COLLATE clause can be applied only for the char, varchar, text, nchar, nvarchar, and ntext data
types.
COLLATE uses collation_name to refer to the name of either the SQL Server collation or the Windows
collation to be applied to the expression, column definition, or database definition. collation_name can be
only a specified Windows_collation_name or a SQL_collation_name and the parameter must contain a
literal value. collation_name cannot be represented by a variable or expression.
Collations are generally identified by a collation name, except in Setup. In Setup, you instead specify the
root collation designator (the collation locale) for Windows collations, and then specify sort options that
are sensitive or insensitive to case or accents.
You can execute the system function fn_helpcollations to retrieve a list of all the valid collation names for
Windows collations and SQL Server collations:

SELECT name, description
FROM fn_helpcollations();

SQL Server can support only code pages that are supported by the underlying operating system. When you
perform an action that depends on collations, the SQL Server collation used by the referenced object must use a
code page supported by the operating system running on the computer. These actions can include the following:
Specifying a default collation for a database when you create or alter the database.
Specifying a collation for a column when you create or alter a table.
When restoring or attaching a database, the default collation of the database and the collation of any char,
varchar, and text columns or parameters in the database must be supported by the operating system.

NOTE
Azure SQL Database Managed Instance server collation is SQL_Latin1_General_CP1_CI_AS and cannot be changed.

NOTE
Code page translations are supported for char and varchar data types, but not for text data type. Data loss during code
page translations is not reported.

NOTE
If the collation specified or the collation used by the referenced object uses a code page not supported by Windows, SQL
Server displays an error.

Examples
A. Specifying collation during a select
The following example creates a simple table and inserts 4 rows. Then the example applies two collations when
selecting data from the table, demonstrating how Chiapas is sorted differently.

CREATE TABLE Locations
    (Place varchar(15) NOT NULL);
GO
INSERT Locations(Place) VALUES ('Chiapas'),('Colima')
    , ('Cinco Rios'), ('California');
GO
-- Apply a typical collation
SELECT Place FROM Locations
ORDER BY Place
COLLATE Latin1_General_CS_AS_KS_WS ASC;
GO
-- Apply a Spanish collation
SELECT Place FROM Locations
ORDER BY Place
COLLATE Traditional_Spanish_ci_ai ASC;
GO

Here are the results from the first query.

Place
-------------
California
Chiapas
Cinco Rios
Colima

Here are the results from the second query.


Place
-------------
California
Cinco Rios
Colima
Chiapas
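
The database_default option described earlier is commonly used to avoid collation conflicts between temporary tables and user tables when tempdb has a different default collation. A minimal sketch, reusing the Locations table from example A (the temporary table name is illustrative):

CREATE TABLE #Places
    (Place varchar(15) COLLATE database_default NOT NULL);
INSERT #Places (Place)
    SELECT Place FROM Locations;
-- The join succeeds even if tempdb uses a different default collation.
SELECT p.Place
FROM #Places AS p
JOIN Locations AS l
    ON p.Place = l.Place;
GO
DROP TABLE #Places;
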

B. Additional examples
For additional examples that use COLLATE, see CREATE DATABASE (SQL Server Transact-SQL ) example G.
Creating a database and specifying a collation name and options, and ALTER TABLE (Transact-SQL )
example V. Changing column collation.

See Also
ALTER TABLE (Transact-SQL )
Collation and Unicode Support
Collation Precedence (Transact-SQL )
Constants (Transact-SQL )
CREATE DATABASE (SQL Server Transact-SQL )
CREATE TABLE (Transact-SQL )
DECLARE @local_variable (Transact-SQL)
table (Transact-SQL )
SQL Server Collation Name (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Is a single string that specifies the collation name for a SQL Server collation.
SQL Server supports Windows collations. SQL Server also supports a limited number (<80) of collations called
SQL Server collations which were developed before SQL Server supported Windows collations. SQL Server
collations are still supported for backward compatibility, but should not be used for new development work. For
more information about Windows collations, see Windows Collation Name (Transact-SQL ).
Transact-SQL Syntax Conventions

Syntax
<SQL_collation_name> :: =
SQL_SortRules[_Pref]_CPCodepage_<ComparisonStyle>

<ComparisonStyle> ::=
_CaseSensitivity_AccentSensitivity | _BIN

Arguments
SortRules
A string identifying the alphabet or language whose sorting rules are applied when dictionary sorting is specified.
Examples are Latin1_General or Polish.
Pref
Specifies uppercase preference. Even if comparison is case-insensitive, the uppercase version of a letter sorts
before the lowercase version, when there is no other distinction.
Codepage
Specifies a one- to four-digit number that identifies the code page used by the collation. CP1 specifies code page
1252, for all other code pages the complete code page number is specified. For example, CP1251 specifies code
page 1251 and CP850 specifies code page 850.
CaseSensitivity
CI specifies case-insensitive, CS specifies case-sensitive.
AccentSensitivity
AI specifies accent-insensitive, AS specifies accent-sensitive.
BIN
Specifies the binary sort order to be used.
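
For example, the name SQL_Latin1_General_CP1_CI_AS combines these parts: Latin1_General sorting rules, code page 1252 (CP1), case-insensitive (CI), and accent-sensitive (AS). A minimal usage sketch (the table name is illustrative):

-- SQL_Latin1_General_CP1_CI_AS:
--   SortRules = Latin1_General, Codepage = CP1 (code page 1252),
--   CI = case-insensitive, AS = accent-sensitive
CREATE TABLE dbo.DemoSqlCollation
    (col1 varchar(20) COLLATE SQL_Latin1_General_CP1_CI_AS);
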

Remarks
To list the SQL Server collations supported by your server, execute the following query.
SELECT * FROM sys.fn_helpcollations()
WHERE name LIKE 'SQL%';

NOTE
For Sort Order ID 80, use any of the Windows collations with the code page of 1250, and binary order. For example:
Albanian_BIN, Croatian_BIN, Czech_BIN, Romanian_BIN, Slovak_BIN, Slovenian_BIN.

See Also
ALTER TABLE (Transact-SQL )
Constants (Transact-SQL )
CREATE DATABASE (SQL Server Transact-SQL )
CREATE TABLE (Transact-SQL )
DECLARE @local_variable (Transact-SQL)
table (Transact-SQL )
sys.fn_helpcollations (Transact-SQL )
Windows Collation Name (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the Windows collation name in the COLL ATE clause in SQL Server. The Windows collation name is
composed of the collation designator and the comparison styles.
Transact-SQL Syntax Conventions

Syntax
<Windows_collation_name> :: =
CollationDesignator_<ComparisonStyle>

<ComparisonStyle> :: =
{ CaseSensitivity_AccentSensitivity [ _KanatypeSensitive ] [ _WidthSensitive ]
}
| { _BIN | _BIN2 }

Arguments
CollationDesignator
Specifies the base collation rules used by the Windows collation. The base collation rules cover the following:
The sorting rules that are applied when dictionary sorting is specified. Sorting rules are based on alphabet
or language.
The code page used to store non-Unicode character data.
Some examples are:
Latin1_General or French: both use code page 1252.
Turkish: uses code page 1254.
CaseSensitivity
CI specifies case-insensitive, CS specifies case-sensitive.
AccentSensitivity
AI specifies accent-insensitive, AS specifies accent-sensitive.
KanatypeSensitive
Omitted specifies kanatype-insensitive, KS specifies kanatype-sensitive.
WidthSensitivity
Omitted specifies width-insensitive, WS specifies width-sensitive.
BIN
Specifies the backward-compatible binary sort order to be used.
BIN2
Specifies the binary sort order that uses code-point comparison semantics.
Remarks
Depending on the version of the collations some code points may be undefined. For example compare:

SELECT LOWER(nchar(504) COLLATE Latin1_General_CI_AS);


SELECT LOWER (nchar(504) COLLATE Latin1_General_100_CI_AS);
GO

The first line returns an uppercase character when the collation is Latin1_General_CI_AS, because this code point
is undefined in this collation.
When working with some languages, it can be critical to avoid the older collations. For example, this is true for
Telugu.
In some cases Windows collations and SQL Server collations can generate different query plans for the same
query.

Examples
The following are some examples of Windows collation names:
Latin1_General_100_CI_AS
Collation uses the Latin1 General dictionary sorting rules and maps to code page 1252. The _100 element shows
the version number of the collation for a Windows collation (_90 or _100). The collation is case-insensitive (CI)
and accent-sensitive (AS).
Estonian_CS_AS
Collation uses the Estonian dictionary sorting rules, code page 1257. Is case-sensitive and accent-sensitive.
Latin1_General_BIN
Collation uses code page 1252 and binary sorting rules. The Latin1 General dictionary sorting rules are
ignored.
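
A minimal usage sketch applying composed Windows collation names at the column level and as an expression-level cast (the table name is illustrative):

CREATE TABLE dbo.DemoWindowsCollation
    (name nvarchar(50) COLLATE Latin1_General_100_CI_AS);
SELECT name
FROM dbo.DemoWindowsCollation
ORDER BY name COLLATE Latin1_General_100_CS_AS_KS_WS;
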

Windows Collations
To list the Windows collations supported by your instance of SQL Server, execute the following query.

SELECT * FROM sys.fn_helpcollations() WHERE name NOT LIKE 'SQL%';

The following table lists all Windows collations supported in SQL Server 2017.

WINDOWS LOCALE                                     COLLATION VERSION 100                      COLLATION VERSION 90

Alsatian (France)                                  Latin1_General_100_                        Not available
Amharic (Ethiopia)                                 Latin1_General_100_                        Not available
Armenian (Armenia)                                 Cyrillic_General_100_                      Not available
Assamese (India)                                   Assamese_100_ 1                            Not available
Bashkir (Russia)                                   Bashkir_100_                               Not available
Basque (Basque)                                    Latin1_General_100_                        Not available
Bengali (Bangladesh)                               Bengali_100_ 1                             Not available
Bengali (India)                                    Bengali_100_ 1                             Not available
Bosnian (Bosnia and Herzegovina, Cyrillic)         Bosnian_Cyrillic_100_                      Not available
Bosnian (Bosnia and Herzegovina, Latin)            Bosnian_Latin_100_                         Not available
Breton (France)                                    Breton_100_                                Not available
Chinese (Macao SAR)                                Chinese_Traditional_Pinyin_100_ 2          Not available
Chinese (Macao SAR)                                Chinese_Traditional_Stroke_Order_100_ 2    Not available
Chinese (Singapore)                                Chinese_Simplified_Stroke_Order_100_       Not available
Corsican (France)                                  Corsican_100_                              Not available
Croatian (Bosnia and Herzegovina, Latin)           Croatian_100_                              Not available
Dari (Afghanistan)                                 Dari_100_                                  Not available
English (India)                                    Latin1_General_100_                        Not available
English (Malaysia)                                 Latin1_General_100_                        Not available
English (Singapore)                                Latin1_General_100_                        Not available
Filipino (Philippines)                             Latin1_General_100_                        Not available
Frisian (Netherlands)                              Frisian_100_                               Not available
Georgian (Georgia)                                 Cyrillic_General_100_                      Not available
Greenlandic (Greenland)                            Danish_Greenlandic_100_                    Not available
Gujarati (India)                                   Indic_General_100_ 1                       Indic_General_90_
Hausa (Nigeria, Latin)                             Latin1_General_100_                        Not available
Hindi (India)                                      Indic_General_100_ 1                       Indic_General_90_
Igbo (Nigeria)                                     Latin1_General_100_                        Not available
Inuktitut (Canada, Latin)                          Latin1_General_100_                        Not available
Inuktitut (Syllabics, Canada)                      Latin1_General_100_                        Not available
Irish (Ireland)                                    Latin1_General_100_                        Not available
Japanese (Japan XJIS)                              Japanese_XJIS_100_                         Japanese_90_, Japanese_
Japanese (Japan)                                   Japanese_Bushu_Kakusu_100_                 Not available
Kannada (India)                                    Indic_General_100_ 1                       Indic_General_90_
Khmer (Cambodia)                                   Khmer_100_ 1                               Not available
K'iche (Guatemala)                                 Modern_Spanish_100_                        Not available
Kinyarwanda (Rwanda)                               Latin1_General_100_                        Not available
Konkani (India)                                    Indic_General_100_ 1                       Indic_General_90_
Lao (Lao PDR)                                      Lao_100_ 1                                 Not available
Lower Sorbian (Germany)                            Latin1_General_100_                        Not available
Luxembourgish (Luxembourg)                         Latin1_General_100_                        Not available
Malayalam (India)                                  Indic_General_100_ 1                       Not available
Maltese (Malta)                                    Maltese_100_                               Not available
Maori (New Zealand)                                Maori_100_                                 Not available
Mapudungun (Chile)                                 Mapudungan_100_                            Not available
Marathi (India)                                    Indic_General_100_ 1                       Indic_General_90_
Mohawk (Canada)                                    Mohawk_100_                                Not available
Mongolian (PRC)                                    Cyrillic_General_100_                      Not available
Nepali (Nepal)                                     Nepali_100_ 1                              Not available
Norwegian (Bokmål, Norway)                         Norwegian_100_                             Not available
Norwegian (Nynorsk, Norway)                        Norwegian_100_                             Not available
Occitan (France)                                   French_100_                                Not available
Oriya (India)                                      Indic_General_100_ 1                       Not available
Pashto (Afghanistan)                               Pashto_100_ 1                              Not available
Persian (Iran)                                     Persian_100_                               Not available
Punjabi (India)                                    Indic_General_100_ 1                       Indic_General_90_
Quechua (Bolivia)                                  Latin1_General_100_                        Not available
Quechua (Ecuador)                                  Latin1_General_100_                        Not available
Quechua (Peru)                                     Latin1_General_100_                        Not available
Romansh (Switzerland)                              Romansh_100_                               Not available
Sami (Inari, Finland)                              Sami_Sweden_Finland_100_                   Not available
Sami (Lule, Norway)                                Sami_Norway_100_                           Not available
Sami (Lule, Sweden)                                Sami_Sweden_Finland_100_                   Not available
Sami (Northern, Finland)                           Sami_Sweden_Finland_100_                   Not available
Sami (Northern, Norway)                            Sami_Norway_100_                           Not available
Sami (Northern, Sweden)                            Sami_Sweden_Finland_100_                   Not available
Sami (Skolt, Finland)                              Sami_Sweden_Finland_100_                   Not available
Sami (Southern, Norway)                            Sami_Norway_100_                           Not available
Sami (Southern, Sweden)                            Sami_Sweden_Finland_100_                   Not available
Sanskrit (India)                                   Indic_General_100_ 1                       Indic_General_90_
Serbian (Bosnia and Herzegovina, Cyrillic)         Serbian_Cyrillic_100_                      Not available
Serbian (Bosnia and Herzegovina, Latin)            Serbian_Latin_100_                         Not available
Serbian (Serbia, Cyrillic)                         Serbian_Cyrillic_100_                      Not available
Serbian (Serbia, Latin)                            Serbian_Latin_100_                         Not available
Sesotho sa Leboa/Northern Sotho (South Africa)     Latin1_General_100_                        Not available
Setswana/Tswana (South Africa)                     Latin1_General_100_                        Not available
Sinhala (Sri Lanka)                                Indic_General_100_ 1                       Not available
Swahili (Kenya)                                    Latin1_General_100_                        Not available
Syriac (Syria)                                     Syriac_100_ 1                              Syriac_90_
Tajik (Tajikistan)                                 Cyrillic_General_100_                      Not available
Tamazight (Algeria, Latin)                         Tamazight_100_                             Not available
Tamil (India)                                      Indic_General_100_ 1                       Indic_General_90_
Telugu (India)                                     Indic_General_100_ 1                       Indic_General_90_
Tibetan (PRC)                                      Tibetan_100_ 1                             Not available
Turkmen (Turkmenistan)                             Turkmen_100_                               Not available
Uighur (PRC)                                       Uighur_100_                                Not available
Upper Sorbian (Germany)                            Upper_Sorbian_100_                         Not available
Urdu (Pakistan)                                    Urdu_100_                                  Not available
Welsh (United Kingdom)                             Welsh_100_                                 Not available
Wolof (Senegal)                                    French_100_                                Not available
Xhosa/isiXhosa (South Africa)                      Latin1_General_100_                        Not available
Yakut (Russia)                                     Yakut_100_                                 Not available
Yi (PRC)                                           Latin1_General_100_                        Not available
Yoruba (Nigeria)                                   Latin1_General_100_                        Not available
Zulu/isiZulu (South Africa)                        Latin1_General_100_                        Not available
Deprecated, not available at server level in SQL Server 2008 or later    Hindi                   Hindi
Deprecated, not available at server level in SQL Server 2008 or later    Korean_Wansung_Unicode  Korean_Wansung_Unicode
Deprecated, not available at server level in SQL Server 2008 or later    Lithuanian_Classic      Lithuanian_Classic
Deprecated, not available at server level in SQL Server 2008 or later    Macedonian              Macedonian

1 Unicode-only Windows collations can only be applied to column-level or expression-level data. They cannot be
used as server or database collations.
2 Like the Chinese (Taiwan) collation, Chinese (Macau) uses the rules of Simplified Chinese; unlike Chinese
(Taiwan), it uses code page 950.

See Also
Collation and Unicode Support
ALTER TABLE (Transact-SQL )
Constants (Transact-SQL )
CREATE DATABASE (SQL Server Transact-SQL )
CREATE TABLE (Transact-SQL )
DECLARE @local_variable (Transact-SQL)
table (Transact-SQL )
sys.fn_helpcollations (Transact-SQL )
Collation Precedence (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Collation precedence, also known as collation coercion rules, determines the following:
The collation of the final result of an expression that is evaluated to a character string.
The collation that is used by collation-sensitive operators that use character string inputs but do not return a
character string, such as LIKE and IN.
The collation precedence rules apply only to the character string data types: char, varchar, text, nchar,
nvarchar, and ntext. Objects that have other data types do not participate in collation evaluations.

Collation Labels
The following table lists and describes the four categories in which the collations of all objects are identified. The
name of each category is called the collation label.

COLLATION LABEL      TYPES OF OBJECTS

Coercible-default    Any Transact-SQL character string variable, parameter, literal, or the output of a
                     catalog built-in function, or a built-in function that does not take string inputs but
                     produces a string output.
                     If the object is declared in a user-defined function, stored procedure, or trigger, the
                     object is assigned the default collation of the database in which the function, stored
                     procedure, or trigger is created. If the object is declared in a batch, the object is
                     assigned the default collation of the current database for the connection.

Implicit X           A column reference. The collation of the expression (X) is taken from the collation
                     defined for the column in the table or view.
                     Even if the column was explicitly assigned a collation by using a COLLATE clause in the
                     CREATE TABLE or CREATE VIEW statement, the column reference is classified as implicit.

Explicit X           An expression that is explicitly cast to a specific collation (X) by using a COLLATE
                     clause in the expression.

No-collation         Indicates that the value of an expression is the result of an operation between two
                     strings that have conflicting collations of the implicit collation label. The expression
                     result is defined as not having a collation.

Collation Rules
The collation label of a simple expression that references only one character string object is the collation label of
the referenced object.
The collation label of a complex expression that references two operand expressions with the same collation label
is the collation label of the operand expressions.
The collation label of the final result of a complex expression that references two operand expressions with
different collations is based on the following rules:
Explicit takes precedence over implicit. Implicit takes precedence over Coercible-default:
Explicit > Implicit > Coercible-default
Combining two Explicit expressions that have been assigned different collations generates an error:
Explicit X + Explicit Y = Error
Combining two Implicit expressions that have different collations yields a result of No-collation:
Implicit X + Implicit Y = No-collation
Combining an expression with No-collation with an expression of any label, except Explicit collation (see the
following rule), yields a result that has the No-collation label:
No-collation + anything = No-collation
Combining an expression with No-collation with an expression that has an Explicit collation, yields an
expression with an Explicit label:
No-collation + Explicit X = Explicit
The following table summarizes the rules.

OPERAND COERCION LABEL   EXPLICIT X             IMPLICIT X               COERCIBLE-DEFAULT             NO-COLLATION

Explicit Y               Generates Error        Result is Explicit Y     Result is Explicit Y          Result is Explicit Y
Implicit Y               Result is Explicit X   Result is No-collation   Result is Implicit Y          Result is No-collation
Coercible-default        Result is Explicit X   Result is Implicit X     Result is Coercible-default   Result is No-collation
No-collation             Result is Explicit X   Result is No-collation   Result is No-collation        Result is No-collation

The following additional rules also apply to collation precedence:


You cannot have multiple COLLATE clauses on an expression that is already an explicit expression. For
example, the following WHERE clause is not valid because a COLLATE clause is specified for an expression
that is already an explicit expression:
WHERE ColumnA = ( 'abc' COLLATE French_CI_AS) COLLATE French_CS_AS

Code page conversions for text data types are not allowed. You cannot cast a text expression from one
collation to another if they have the different code pages. The assignment operator cannot assign values
when the collation of the right text operand has a different code page than the left text operand.
Collation precedence is determined after data type conversion. The operand from which the resulting
collation is taken can be different from the operand that supplies the data type of the final result. For
example, consider the following batch:
CREATE TABLE TestTab
(PrimaryKey int PRIMARY KEY,
CharCol char(10) COLLATE French_CI_AS
)

SELECT *
FROM TestTab
WHERE CharCol LIKE N'abc'

The Unicode data type of the simple expression N'abc' has a higher data type precedence. Therefore, the resulting
expression has the Unicode data type assigned to N'abc' . However, the expression CharCol has a collation label
of Implicit, and N'abc' has a lower coercion label of Coercible-default. Therefore, the collation that is used is the
French_CI_AS collation of CharCol .

Examples of Collation Rules


The following examples show how the collation rules work. To run the examples, create the following test table.

USE tempdb;
GO

CREATE TABLE TestTab (
    id int,
    GreekCol nvarchar(10) collate greek_ci_as,
    LatinCol nvarchar(10) collate latin1_general_cs_as
);
INSERT TestTab VALUES (1, N'A', N'a');
GO

Collation Conflict and Error


The predicate in the following query has a collation conflict and generates an error.

SELECT *
FROM TestTab
WHERE GreekCol = LatinCol;

Here is the result set.

Msg 448, Level 16, State 9, Line 2
Cannot resolve collation conflict between 'Latin1_General_CS_AS' and 'Greek_CI_AS' in equal to operation.

Explicit Label vs. Implicit Label


The predicate in the following query is evaluated in collation greek_ci_as because the right expression has the
Explicit label. This takes precedence over the Implicit label of the left expression.

SELECT *
FROM TestTab
WHERE GreekCol = LatinCol COLLATE greek_ci_as;

Here is the result set.

id GreekCol LatinCol
----------- -------------------- --------------------
1 A a

(1 row affected)
No-Collation Labels
The CASE expressions in the following queries have a No-collation label; therefore, they cannot appear in the
select list or be operated on by collation-sensitive operators. However, the expressions can be operated on by
collation-insensitive operators.

SELECT (CASE WHEN id > 10 THEN GreekCol ELSE LatinCol END)
FROM TestTab;

Here is the result set.

Msg 451, Level 16, State 1, Line 1
Cannot resolve collation conflict for column 1 in SELECT statement.

SELECT PATINDEX((CASE WHEN id > 10 THEN GreekCol ELSE LatinCol END), 'a')
FROM TestTab;

Here is the result set.

Msg 446, Level 16, State 9, Server LEIH2, Line 1
Cannot resolve collation conflict for patindex operation.

SELECT (CASE WHEN id > 10 THEN GreekCol ELSE LatinCol END) COLLATE Latin1_General_CI_AS
FROM TestTab;

Here is the result set.

--------------------
a

(1 row affected)

Collation Sensitive and Collation Insensitive


Operators and functions are either collation sensitive or insensitive.
Collation sensitive
This means that specifying a No-collation operand is a compile-time error. The expression result cannot be No-
collation.
Collation insensitive
This means that the operands and result can be No-collation.
Operators and Collation
The comparison operators, and the MAX, MIN, BETWEEN, LIKE, and IN operators, are collation sensitive. The
string used by the operators is assigned the collation label of the operand that has the higher precedence. The
UNION operator is also collation sensitive, and all string operands and the final result are assigned the collation of
the operand with the highest precedence. The collation precedence of the UNION operands and result is
evaluated column by column.
The assignment operator is collation insensitive, and the right expression is cast to the left collation.
The string concatenation operator is collation sensitive; the two string operands and the result are assigned the
collation label of the operand with the highest collation precedence. The UNION ALL and CASE operators are
collation insensitive, and all string operands and the final results are assigned the collation label of the operand
with the highest precedence. The collation precedence of the UNION ALL operands and result is evaluated
column by column.
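
A minimal sketch of these rules, reusing the TestTab table from the examples above:

-- Concatenation: the result carries the Implicit label of GreekCol, which
-- outranks the Coercible-default label of the literal N'xyz'.
SELECT GreekCol + N'xyz' FROM TestTab;
-- Assignment: collation insensitive; the right side is cast to the
-- collation of the target column (latin1_general_cs_as).
UPDATE TestTab SET LatinCol = GreekCol WHERE id = 1;
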
Functions and Collation
The CAST, CONVERT, and COLLATE functions are collation sensitive for char, varchar, and text data types. If the
input and output of the CAST and CONVERT functions are character strings, the output string has the collation
label of the input string. If the input is not a character string, the output string is Coercible-default and assigned the
collation of the current database for the connection, or the database that contains the user-defined function, stored
procedure, or trigger in which the CAST or CONVERT is referenced.
For the built-in functions that return a string but do not take a string input, the result string is Coercible-default and
is assigned either the collation of the current database, or the collation of the database that contains the user-
defined function, stored procedure, or trigger in which the function is referenced.
The following functions are collation-sensitive and their output strings have the collation label of the input string:

CHARINDEX, DIFFERENCE, ISNUMERIC, LEFT, LEN, LOWER, PATINDEX, REPLACE, REVERSE, RIGHT,
SOUNDEX, STUFF, SUBSTRING, UPPER

See Also
COLLATE (Transact-SQL)
Data Type Conversion (Database Engine)
Operators (Transact-SQL )
Expressions (Transact-SQL )
DELETE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more rows from a table or view in SQL Server.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

[ WITH <common_table_expression> [ ,...n ] ]
DELETE
[ TOP ( expression ) [ PERCENT ] ]
[ FROM ]
{ { table_alias
| <object>
| rowset_function_limited
[ WITH ( table_hint_limited [ ...n ] ) ] }
| @table_variable
}
[ <OUTPUT Clause> ]
[ FROM table_source [ ,...n ] ]
[ WHERE { <search_condition>
| { [ CURRENT OF
{ { [ GLOBAL ] cursor_name }
| cursor_variable_name
}
]
}
}
]
[ OPTION ( <Query Hint> [ ,...n ] ) ]
[; ]

<object> ::=
{
[ server_name.database_name.schema_name.
| database_name. [ schema_name ] .
| schema_name.
]
table_or_view_name
}

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DELETE FROM [database_name . [ schema ] . | schema. ] table_name


[ WHERE <search_condition> ]
[ OPTION ( <query_options> [ ,...n ] ) ]
[; ]

Arguments
WITH <common_table_expression>
Specifies the temporary named result set, also known as common table expression, defined within the scope of
the DELETE statement. The result set is derived from a SELECT statement.
Common table expressions can also be used with the SELECT, INSERT, UPDATE, and CREATE VIEW statements.
For more information, see WITH common_table_expression (Transact-SQL ).
TOP (expression) [ PERCENT ]
Specifies the number or percent of random rows that will be deleted. expression can be either a number or a
percent of the rows. The rows referenced in the TOP expression used with INSERT, UPDATE, or DELETE are not
arranged in any order. For more information, see TOP (Transact-SQL ).
FROM
An optional keyword that can be used between the DELETE keyword and the target table_or_view_name, or
rowset_function_limited.
table_alias
The alias specified in the FROM table_source clause representing the table or view from which the rows are to be
deleted.
server_name
Applies to: SQL Server 2008 through SQL Server 2017.
The name of the server (using a linked server name or the OPENDATASOURCE function as the server name) on
which the table or view is located. If server_name is specified, database_name and schema_name are required.
database_name
The name of the database.
schema_name
The name of the schema to which the table or view belongs.
table_or_view_name
The name of the table or view from which the rows are to be removed.
A table variable, within its scope, also can be used as a table source in a DELETE statement.
The view referenced by table_or_view_name must be updatable and reference exactly one base table in the
FROM clause of the view definition. For more information about updatable views, see CREATE VIEW (Transact-
SQL ).
rowset_function_limited
Applies to: SQL Server 2008 through SQL Server 2017.
Either the OPENQUERY or OPENROWSET function, subject to provider capabilities.
WITH ( <table_hint_limited> [... n] )
Specifies one or more table hints that are allowed for a target table. The WITH keyword and the parentheses are
required. NOLOCK and READUNCOMMITTED are not allowed. For more information about table hints, see
Table Hints (Transact-SQL ).
<OUTPUT_Clause>
Returns deleted rows, or expressions based on them, as part of the DELETE operation. The OUTPUT clause is not
supported in any DML statements targeting views or remote tables. For more information, see OUTPUT Clause
(Transact-SQL ).
FROM table_source
Specifies an additional FROM clause. This Transact-SQL extension to DELETE allows specifying data from
<table_source> and deleting the corresponding rows from the table in the first FROM clause.
This extension, specifying a join, can be used instead of a subquery in the WHERE clause to identify rows to be
removed.
For more information, see FROM (Transact-SQL ).
WHERE
Specifies the conditions used to limit the number of rows that are deleted. If a WHERE clause is not supplied,
DELETE removes all the rows from the table.
There are two forms of delete operations based on what is specified in the WHERE clause:
Searched deletes specify a search condition to qualify the rows to delete. For example, WHERE
column_name = value.
Positioned deletes use the CURRENT OF clause to specify a cursor. The delete operation occurs at the
current position of the cursor. This can be more accurate than a searched DELETE statement that uses a
WHERE search_condition clause to qualify the rows to be deleted. A searched DELETE statement deletes
multiple rows if the search condition does not uniquely identify a single row.
<search_condition>
Specifies the restricting conditions for the rows to be deleted. There is no limit to the number of predicates that
can be included in a search condition. For more information, see Search Condition (Transact-SQL ).
CURRENT OF
Specifies that the DELETE is performed at the current position of the specified cursor.
GLOBAL
Specifies that cursor_name refers to a global cursor.
cursor_name
Is the name of the open cursor from which the fetch is made. If both a global and a local cursor with the name
cursor_name exist, this argument refers to the global cursor if GLOBAL is specified; otherwise, it refers to the local
cursor. The cursor must allow updates.
cursor_variable_name
The name of a cursor variable. The cursor variable must reference a cursor that allows updates.
OPTION ( <query_hint> [ ,... n] )
Keywords that indicate which optimizer hints are used to customize the way the Database Engine processes the
statement. For more information, see Query Hints (Transact-SQL ).

Best Practices
To delete all the rows in a table, use TRUNCATE TABLE. TRUNCATE TABLE is faster than DELETE and uses fewer
system and transaction log resources. TRUNCATE TABLE has restrictions, for example, the table cannot
participate in replication. For more information, see TRUNCATE TABLE (Transact-SQL )
Use the @@ROWCOUNT function to return the number of deleted rows to the client application. For more
information, see @@ROWCOUNT (Transact-SQL ).
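A minimal sketch of the @@ROWCOUNT pattern (the table and QuotaDate filter are illustrative, based on the
AdventureWorks2012 examples later in this topic):

DELETE FROM Sales.SalesPersonQuotaHistory
WHERE QuotaDate < '20020101';
SELECT @@ROWCOUNT AS RowsDeleted;  -- count of rows removed by the DELETE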

Error Handling
You can implement error handling for the DELETE statement by specifying the statement in a TRY…CATCH
construct.
The DELETE statement may fail if it violates a trigger or tries to remove a row referenced by data in another table
with a FOREIGN KEY constraint. If the DELETE removes multiple rows, and any one of the removed rows violates
a trigger or constraint, the statement is canceled, an error is returned, and no rows are removed.
When a DELETE statement encounters an arithmetic error (overflow, divide by zero, or a domain error) occurring
during expression evaluation, the Database Engine handles these errors as if SET ARITHABORT is set ON. The
rest of the batch is canceled, and an error message is returned.
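For example, a DELETE that violates a FOREIGN KEY constraint can be caught and reported as follows (a sketch;
the ProductID value is illustrative):

BEGIN TRY
    -- Fails if the row is referenced by a row in another table.
    DELETE FROM Production.Product
    WHERE ProductID = 980;
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage;
END CATCH;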

Interoperability
DELETE can be used in the body of a user-defined function if the object modified is a table variable.
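A minimal sketch (hypothetical function name) of a DELETE whose target is a table variable inside a
multistatement table-valued function:

CREATE FUNCTION dbo.ufn_TableVarDelete()  -- hypothetical name
RETURNS @result TABLE (KeyValue int NOT NULL)
AS
BEGIN
    INSERT INTO @result (KeyValue) VALUES (1), (2), (3);
    DELETE FROM @result WHERE KeyValue = 2;  -- allowed: the target is a table variable
    RETURN;
END;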
When you delete a row that contains a FILESTREAM column, you also delete its underlying file system files. The
underlying files are removed by the FILESTREAM garbage collector. For more information, see Access
FILESTREAM Data with Transact-SQL.
The FROM clause cannot be specified in a DELETE statement that references, either directly or indirectly, a view
with an INSTEAD OF trigger defined on it. For more information about INSTEAD OF triggers, see CREATE
TRIGGER (Transact-SQL ).

Limitations and Restrictions


When TOP is used with DELETE, the referenced rows are not arranged in any order, and the ORDER BY clause
cannot be directly specified in this statement. If you need to use TOP to delete rows in a meaningful
chronological order, you must use TOP together with an ORDER BY clause in a subselect statement. See the
Examples section that follows in this topic.
TOP cannot be used in a DELETE statement against partitioned views.

Locking Behavior
By default, a DELETE statement always acquires an exclusive (X) lock on the table it modifies, and holds that lock
until the transaction completes. With an exclusive (X) lock, no other transactions can modify data; read operations
can take place only with the use of the NOLOCK hint or read uncommitted isolation level. You can specify table
hints to override this default behavior for the duration of the DELETE statement by specifying another locking
method, however, we recommend that hints be used only as a last resort by experienced developers and database
administrators. For more information, see Table Hints (Transact-SQL ).
When rows are deleted from a heap, the Database Engine may use row or page locking for the operation. As a
result, the pages made empty by the delete operation remain allocated to the heap. When empty pages are not
deallocated, the associated space cannot be reused by other objects in the database.
To delete rows in a heap and deallocate pages, use one of the following methods.
Specify the TABLOCK hint in the DELETE statement. Using the TABLOCK hint causes the delete operation
to take an exclusive lock on the table instead of a row or page lock. This allows the pages to be deallocated.
For more information about the TABLOCK hint, see Table Hints (Transact-SQL). A sketch of this approach
appears after this list.
Use TRUNCATE TABLE if all rows are to be deleted from the table.
Create a clustered index on the heap before deleting the rows. You can drop the clustered index after the
rows are deleted. This method is more time consuming than the previous methods and uses more
temporary resources.
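For illustration, a sketch of the TABLOCK approach against a hypothetical heap:

DELETE FROM dbo.SalesStagingHeap WITH (TABLOCK)  -- hypothetical heap table
WHERE BatchDate < '20180101';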

NOTE
Empty pages can be removed from a heap at any time by using the ALTER TABLE <table_name> REBUILD statement.

Logging Behavior
The DELETE statement is always fully logged.

Security
Permissions
DELETE permissions are required on the target table. SELECT permissions are also required if the statement
contains a WHERE clause.
DELETE permissions default to members of the sysadmin fixed server role, the db_owner and db_datawriter
fixed database roles, and the table owner. Members of the sysadmin, db_owner, and the db_securityadmin
roles, and the table owner can transfer permissions to other users.

Examples
CATEGORY                                          FEATURED SYNTAX ELEMENTS
Basic syntax                                      DELETE
Limiting the rows deleted                         WHERE • FROM • cursor
Deleting rows from a remote table                 Linked server • OPENQUERY rowset function •
                                                  OPENDATASOURCE rowset function
Capturing the results of the DELETE statement     OUTPUT clause

Basic Syntax
Examples in this section demonstrate the basic functionality of the DELETE statement using the minimum
required syntax.
A. Using DELETE with no WHERE clause
The following example deletes all rows from the SalesPersonQuotaHistory table in the AdventureWorks2012
database because a WHERE clause is not used to limit the number of rows deleted.

DELETE FROM Sales.SalesPersonQuotaHistory;


GO

Limiting the Rows Deleted


Examples in this section demonstrate how to limit the number of rows that will be deleted.
B. Using the WHERE clause to delete a set of rows
The following example deletes all rows from the ProductCostHistory table in the AdventureWorks2012 database
in which the value in the StandardCost column is more than 1000.00.

DELETE FROM Production.ProductCostHistory


WHERE StandardCost > 1000.00;
GO

The following example shows a more complex WHERE clause. The WHERE clause defines two conditions that
must be met to determine the rows to delete. The value in the StandardCost column must be between 12.00 and
14.00, and the value in the EndDate column must be null. The example also prints the value from the
@@ROWCOUNT function to return the number of deleted rows.
DELETE Production.ProductCostHistory
WHERE StandardCost BETWEEN 12.00 AND 14.00
AND EndDate IS NULL;
PRINT 'Number of rows deleted is ' + CAST(@@ROWCOUNT as char(3));

C. Using a cursor to determine the row to delete


The following example deletes a single row from the EmployeePayHistory table in the AdventureWorks2012
database using a cursor named complex_cursor . The delete operation affects only the single row currently fetched
from the cursor.

DECLARE complex_cursor CURSOR FOR


SELECT a.BusinessEntityID
FROM HumanResources.EmployeePayHistory AS a
WHERE RateChangeDate <>
(SELECT MAX(RateChangeDate)
FROM HumanResources.EmployeePayHistory AS b
WHERE a.BusinessEntityID = b.BusinessEntityID) ;
OPEN complex_cursor;
FETCH FROM complex_cursor;
DELETE FROM HumanResources.EmployeePayHistory
WHERE CURRENT OF complex_cursor;
CLOSE complex_cursor;
DEALLOCATE complex_cursor;
GO

D. Using joins and subqueries on data in one table to delete rows in another table
The following examples show two ways to delete rows in one table based on data in another table. In both
examples, rows from the SalesPersonQuotaHistory table in the AdventureWorks2012 database are deleted based
on the year-to-date sales stored in the SalesPerson table. The first DELETE statement shows the ISO-compatible
subquery solution, and the second DELETE statement shows the Transact-SQL FROM extension to join the two
tables.

-- SQL-2003 Standard subquery

DELETE FROM Sales.SalesPersonQuotaHistory


WHERE BusinessEntityID IN
(SELECT BusinessEntityID
FROM Sales.SalesPerson
WHERE SalesYTD > 2500000.00);
GO

-- Transact-SQL extension

DELETE FROM Sales.SalesPersonQuotaHistory


FROM Sales.SalesPersonQuotaHistory AS spqh
INNER JOIN Sales.SalesPerson AS sp
ON spqh.BusinessEntityID = sp.BusinessEntityID
WHERE sp.SalesYTD > 2500000.00;
GO
-- No need to mention target table more than once.

DELETE spqh
FROM
Sales.SalesPersonQuotaHistory AS spqh
INNER JOIN Sales.SalesPerson AS sp
ON spqh.BusinessEntityID = sp.BusinessEntityID
WHERE sp.SalesYTD > 2500000.00;

E. Using TOP to limit the number of rows deleted


When a TOP (n) clause is used with DELETE, the delete operation is performed on a random selection of n
number of rows. The following example deletes 20 random rows from the PurchaseOrderDetail table in the
AdventureWorks2012 database that have due dates that are earlier than July 1, 2002.

DELETE TOP (20)


FROM Purchasing.PurchaseOrderDetail
WHERE DueDate < '20020701';
GO

If you have to use TOP to delete rows in a meaningful chronological order, you must use TOP together with
ORDER BY in a subselect statement. The following query deletes the 10 rows of the PurchaseOrderDetail table
that have the earliest due dates. To ensure that only 10 rows are deleted, the column specified in the subselect
statement ( PurchaseOrderDetailID ) uniquely identifies each row of the table. Using a nonkey column in the subselect statement
may result in the deletion of more than 10 rows if the specified column contains duplicate values.

DELETE FROM Purchasing.PurchaseOrderDetail


WHERE PurchaseOrderDetailID IN
(SELECT TOP 10 PurchaseOrderDetailID
FROM Purchasing.PurchaseOrderDetail
ORDER BY DueDate ASC);
GO

Deleting Rows From a Remote Table


Examples in this section demonstrate how to delete rows from a remote table by using a linked server or a rowset
function to reference the remote table. A remote table exists on a different server or instance of SQL Server.
Applies to: SQL Server 2008 through SQL Server 2017.
F. Deleting data from a remote table by using a linked server
The following example deletes rows from a remote table. The example begins by creating a link to the remote
data source by using sp_addlinkedserver. The linked server name, MyLinkServer , is then specified as part of the
four-part object name in the form server.catalog.schema.object.

USE master;
GO
-- Create a link to the remote data source.
-- Specify a valid server name for @datasrc as 'server_name' or 'server_name\instance_name'.

EXEC sp_addlinkedserver @server = N'MyLinkServer',


@srvproduct = N' ',
@provider = N'SQLNCLI',
@datasrc = N'server_name',
@catalog = N'AdventureWorks2012';
GO
-- Specify the remote data source using a four-part name
-- in the form linked_server.catalog.schema.object.

DELETE MyLinkServer.AdventureWorks2012.HumanResources.Department
WHERE DepartmentID > 16;
GO

G. Deleting data from a remote table by using the OPENQUERY function


The following example deletes rows from a remote table by specifying the OPENQUERY rowset function. The
linked server name created in the previous example is used in this example.

DELETE OPENQUERY (MyLinkServer, 'SELECT Name, GroupName


FROM AdventureWorks2012.HumanResources.Department
WHERE DepartmentID = 18');
GO

H. Deleting data from a remote table by using the OPENDATASOURCE function


The following example deletes rows from a remote table by specifying the OPENDATASOURCE rowset function.
Specify a valid server name for the data source by using the format server_name or server_name\instance_name.

DELETE FROM OPENDATASOURCE('SQLNCLI',


'Data Source= <server_name>; Integrated Security=SSPI')
.AdventureWorks2012.HumanResources.Department
WHERE DepartmentID = 17;

Capturing the results of the DELETE statement


I. Using DELETE with the OUTPUT clause
The following example shows how to save the results of a DELETE statement to a table variable in the
AdventureWorks2012 database.

DELETE Sales.ShoppingCartItem
OUTPUT DELETED.*
WHERE ShoppingCartID = 20621;

--Verify the rows in the table matching the WHERE clause have been deleted.
SELECT COUNT(*) AS [Rows in Table]
FROM Sales.ShoppingCartItem
WHERE ShoppingCartID = 20621;
GO

J. Using OUTPUT with <from_table_name> in a DELETE statement


The following example deletes rows in the ProductProductPhoto table in the AdventureWorks2012 database
based on search criteria defined in the FROM clause of the DELETE statement. The OUTPUT clause returns columns
from the table being deleted, DELETED.ProductID , DELETED.ProductPhotoID , and columns from the Product table.
The Product table is used in the FROM clause to specify the rows to delete.
DECLARE @MyTableVar table (
ProductID int NOT NULL,
ProductName nvarchar(50) NOT NULL,
ProductModelID int NOT NULL,
PhotoID int NOT NULL);

DELETE Production.ProductProductPhoto
OUTPUT DELETED.ProductID,
p.Name,
p.ProductModelID,
DELETED.ProductPhotoID
INTO @MyTableVar
FROM Production.ProductProductPhoto AS ph
JOIN Production.Product as p
ON ph.ProductID = p.ProductID
WHERE p.ProductModelID BETWEEN 120 and 130;

--Display the results of the table variable.


SELECT ProductID, ProductName, ProductModelID, PhotoID
FROM @MyTableVar
ORDER BY ProductModelID;
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


K. Delete all rows from a table
The following example deletes all rows from the Table1 table because a WHERE clause is not used to limit the
number of rows deleted.

DELETE FROM Table1;

L. DELETE a set of rows from a table


The following example deletes all rows from the Table1 table that have a value greater than 1000.00 in the
StandardCost column.

DELETE FROM Table1


WHERE StandardCost > 1000.00;

M. Using LABEL with a DELETE statement


The following example uses a label with the DELETE statement.

DELETE FROM Table1


OPTION ( LABEL = N'label1' );

N. Using a label and a query hint with the DELETE statement


This query shows the basic syntax for using a query join hint with the DELETE statement. For more information
on join hints and how to use the OPTION clause, see OPTION (SQL Server PDW ).
-- Uses AdventureWorks

DELETE FROM dbo.FactInternetSales


WHERE ProductKey IN (
SELECT T1.ProductKey FROM dbo.DimProduct T1
JOIN dbo.DimProductSubcategory T2
ON T1.ProductSubcategoryKey = T2.ProductSubcategoryKey
WHERE T2.EnglishProductSubcategoryName = 'Road Bikes' )
OPTION ( LABEL = N'CustomJoin', HASH JOIN ) ;

See Also
CREATE TRIGGER (Transact-SQL )
INSERT (Transact-SQL )
SELECT (Transact-SQL )
TRUNCATE TABLE (Transact-SQL )
UPDATE (Transact-SQL )
WITH common_table_expression (Transact-SQL )
@@ROWCOUNT (Transact-SQL )
DISABLE TRIGGER (Transact-SQL)
5/3/2018 • 2 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Disables a trigger.
Transact-SQL Syntax Conventions

Syntax
DISABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL }
ON { object_name | DATABASE | ALL SERVER } [ ; ]

Arguments
schema_name
Is the name of the schema to which the trigger belongs. schema_name cannot be specified for DDL or logon
triggers.
trigger_name
Is the name of the trigger to be disabled.
ALL
Indicates that all triggers defined at the scope of the ON clause are disabled.
Caution

SQL Server creates triggers in databases that are published for merge replication. Specifying ALL in published
databases disables these triggers, which disrupts replication. Verify that the current database is not published for
merge replication before specifying ALL.
object_name
Is the name of the table or view on which the DML trigger trigger_name was created to execute.
DATABASE
For a DDL trigger, indicates that trigger_name was created or modified to execute with database scope.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
For a DDL trigger, indicates that trigger_name was created or modified to execute with server scope. ALL SERVER
also applies to logon triggers.

NOTE
This option is not available in a contained database.

Remarks
Triggers are enabled by default when they are created. Disabling a trigger does not drop it. The trigger still exists
as an object in the current database. However, the trigger does not fire when any Transact-SQL statements on
which it was programmed are executed. Triggers can be re-enabled by using ENABLE TRIGGER. DML triggers
defined on tables can also be disabled or enabled by using ALTER TABLE.
Changing the trigger by using the ALTER TRIGGER statement enables the trigger.
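For example, the ALTER TABLE form, using the trigger and table from example A below, looks like this:

ALTER TABLE Person.Address DISABLE TRIGGER uAddress;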

Permissions
To disable a DML trigger, at a minimum, a user must have ALTER permission on the table or view on which the
trigger was created.
To disable a DDL trigger with server scope (ON ALL SERVER ) or a logon trigger, a user must have CONTROL
SERVER permission on the server. To disable a DDL trigger with database scope (ON DATABASE ), at a minimum,
a user must have ALTER ANY DATABASE DDL TRIGGER permission in the current database.

Examples
The following examples use the AdventureWorks2012 database.
A. Disabling a DML trigger on a table
The following example disables trigger uAddress that was created on table Address .

DISABLE TRIGGER Person.uAddress ON Person.Address;


GO

B. Disabling a DDL trigger


The following example creates a DDL trigger safety with database scope, and then disables it.

CREATE TRIGGER safety


ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
PRINT 'You must disable Trigger "safety" to drop or alter tables!'
ROLLBACK;
GO
DISABLE TRIGGER safety ON DATABASE;
GO

C. Disabling all triggers that were defined with the same scope
The following example disables all DDL triggers that were created at the server scope.

DISABLE Trigger ALL ON ALL SERVER;


GO

See Also
ENABLE TRIGGER (Transact-SQL )
ALTER TRIGGER (Transact-SQL )
CREATE TRIGGER (Transact-SQL )
DROP TRIGGER (Transact-SQL )
sys.triggers (Transact-SQL )
DROP AGGREGATE (Transact-SQL)
5/4/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a user-defined aggregate function from the current database. User-defined aggregate functions are
created by using CREATE AGGREGATE.
Transact-SQL Syntax Conventions

Syntax
DROP AGGREGATE [ IF EXISTS ] [ schema_name . ] aggregate_name

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the aggregate only if it already exists.
schema_name
Is the name of the schema to which the user-defined aggregate function belongs.
aggregate_name
Is the name of the user-defined aggregate function you want to drop.

Remarks
DROP AGGREGATE does not execute if there are any views, functions, or stored procedures created with schema
binding that reference the user-defined aggregate function you want to drop.

Permissions
To execute DROP AGGREGATE, at a minimum, a user must have ALTER permission on the schema to which the
user-defined aggregate belongs, or CONTROL permission on the aggregate.

Examples
The following example drops the aggregate Concatenate .

DROP AGGREGATE dbo.Concatenate;

See Also
CREATE AGGREGATE (Transact-SQL )
Create User-defined Aggregates
DROP APPLICATION ROLE (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an application role from the current database.
Transact-SQL Syntax Conventions

Syntax
DROP APPLICATION ROLE rolename

Arguments
rolename
Specifies the name of the application role to be dropped.

Remarks
If the application role owns any securables it cannot be dropped. Before dropping an application role that owns
securables, you must first transfer ownership of the securables, or drop them.
Caution

Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER AUTHORIZATION.
In such databases you must instead use the new catalog views. The new catalog views take into account the
separation of principals and schemas that was introduced in SQL Server 2005. For more information about
catalog views, see Catalog Views (Transact-SQL ).

Permissions
Requires ALTER ANY APPLICATION ROLE permission on the database.

Examples
Drop application role "weekly_ledger" from the database.

DROP APPLICATION ROLE weekly_ledger;


GO

See Also
Application Roles
CREATE APPLICATION ROLE (Transact-SQL )
ALTER APPLICATION ROLE (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP ASSEMBLY (Transact-SQL)
5/4/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an assembly and all its associated files from the current database. Assemblies are created by using
CREATE ASSEMBLY and modified by using ALTER ASSEMBLY.
Transact-SQL Syntax Conventions

Syntax
DROP ASSEMBLY [ IF EXISTS ] assembly_name [ ,...n ]
[ WITH NO DEPENDENTS ]
[ ; ]

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the assembly only if it already exists.
assembly_name
Is the name of the assembly you want to drop.
WITH NO DEPENDENTS
If specified, drops only assembly_name and none of the dependent assemblies that are referenced by the
assembly. If not specified, DROP ASSEMBLY drops assembly_name and all dependent assemblies.

Remarks
Dropping an assembly removes an assembly and all its associated files, such as source code and debug files, from
the database.
If WITH NO DEPENDENTS is not specified, DROP ASSEMBLY drops assembly_name and all dependent
assemblies. If an attempt to drop any dependent assemblies fails, DROP ASSEMBLY returns an error.
DROP ASSEMBLY returns an error if the assembly is referenced by another assembly that exists in the database
or if it is used by common language runtime (CLR) functions, procedures, triggers, user-defined types, or
aggregates in the current database.
DROP ASSEMBLY does not interfere with any code referencing the assembly that is currently running. However,
after DROP ASSEMBLY executes, any attempts to invoke the assembly code will fail.

Permissions
Requires ownership of the assembly, or CONTROL permission on it.

Examples
The following example assumes the assembly HelloWorld is already created in the instance of SQL Server.

DROP ASSEMBLY HelloWorld;

See Also
CREATE ASSEMBLY (Transact-SQL )
ALTER ASSEMBLY (Transact-SQL )
EVENTDATA (Transact-SQL )
Getting Information About Assemblies
DROP ASYMMETRIC KEY (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an asymmetric key from the database.
Transact-SQL Syntax Conventions

Syntax
DROP ASYMMETRIC KEY key_name [ REMOVE PROVIDER KEY ]

Arguments
key_name
Is the name of the asymmetric key to be dropped from the database.
REMOVE PROVIDER KEY
Removes an Extensible Key Management (EKM) key from an EKM device. For more information about Extensible
Key Management, see Extensible Key Management (EKM).

Remarks
An asymmetric key with which a symmetric key in the database has been encrypted, or to which a user or login is
mapped, cannot be dropped. Before you drop such a key, you must drop any user or login that is mapped to the
key. You must also drop or change any symmetric key encrypted with the asymmetric key. You can use the DROP
ENCRYPTION option of ALTER SYMMETRIC KEY to remove encryption by an asymmetric key.
Metadata of asymmetric keys can be accessed by using the sys.asymmetric_keys catalog view. The keys
themselves cannot be directly viewed from inside the database.
If the asymmetric key is mapped to an Extensible Key Management (EKM ) key on an EKM device and the
REMOVE PROVIDER KEY option is not specified, the key will be dropped from the database but not the device. A
warning will be issued.
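A sketch of that sequence with hypothetical key names; the symmetric key must be open, and must retain at least
one other encryption, before the asymmetric-key encryption can be dropped:

OPEN SYMMETRIC KEY MySymKey DECRYPTION BY ASYMMETRIC KEY MyAsymKey;
ALTER SYMMETRIC KEY MySymKey ADD ENCRYPTION BY PASSWORD = '<str0ng_P@ssword>';
ALTER SYMMETRIC KEY MySymKey DROP ENCRYPTION BY ASYMMETRIC KEY MyAsymKey;
CLOSE SYMMETRIC KEY MySymKey;
DROP ASYMMETRIC KEY MyAsymKey;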

Permissions
Requires CONTROL permission on the asymmetric key.

Examples
The following example removes the asymmetric key MirandaXAsymKey6 from the AdventureWorks2012 database.

USE AdventureWorks2012;
DROP ASYMMETRIC KEY MirandaXAsymKey6;

See Also
CREATE ASYMMETRIC KEY (Transact-SQL )
ALTER ASYMMETRIC KEY (Transact-SQL )
Encryption Hierarchy
ALTER SYMMETRIC KEY (Transact-SQL )
DROP AVAILABILITY GROUP (Transact-SQL)
5/3/2018 • 2 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes the specified availability group and all of its replicas. If a server instance that hosts one of the availability
replicas is offline when you delete the availability group, the server instance drops the local availability replica after
it comes back online. Dropping an availability group also deletes the associated availability group listener, if any.

IMPORTANT
If possible, remove the availability group only while connected to the server instance that hosts the primary replica. When
the availability group is dropped from the primary replica, changes are allowed in the former primary databases (without
high availability protection). Deleting an availability group from a secondary replica leaves the primary replica in the
RESTORING state, and changes are not allowed on the databases.

For information about alternative ways to drop an availability group, see Remove an Availability Group (SQL
Server).
Transact-SQL Syntax Conventions

Syntax
DROP AVAILABILITY GROUP group_name
[ ; ]

Arguments
group_name
Specifies the name of the availability group to be dropped.

Limitations and Recommendations


Executing DROP AVAILABILITY GROUP requires that the Always On Availability Groups feature is
enabled on the server instance. For more information, see Enable and Disable AlwaysOn Availability
Groups (SQL Server).
DROP AVAILABILITY GROUP cannot be executed as part of batches or within transactions. Also,
expressions and variables are not supported.
You can drop an availability group from any Windows Server Failover Clustering (WSFC ) node that
possesses the correct security credentials for the availability group. This enables you to delete an availability
group when none of its availability replicas remain.
IMPORTANT
Avoid dropping an availability group when the Windows Server Failover Clustering (WSFC) cluster has no quorum. If
you must drop an availability group while the cluster lacks quorum, the availability group metadata that is stored in
the cluster is not removed. After the cluster regains quorum, you will need to drop the availability group again to
remove it from the WSFC cluster.

On a secondary replica, DROP AVAILABILITY GROUP should be used only for emergency
purposes. This is because dropping an availability group takes the availability group offline. If you drop the
availability group from a secondary replica, the primary replica cannot determine whether the OFFLINE
state occurred because of quorum loss, a forced failover, or a DROP AVAILABILITY GROUP command.
The primary replica transitions to the RESTORING state to prevent a possible split-brain situation. For
more information, see How It Works: DROP AVAILABILITY GROUP Behaviors (CSS SQL Server
Engineers blog).

Security
Permissions
Requires ALTER AVAILABILITY GROUP permission on the availability group, CONTROL AVAILABILITY
GROUP permission, ALTER ANY AVAILABILITY GROUP permission, or CONTROL SERVER permission. To
drop an availability group that is not hosted by the local server instance you need CONTROL SERVER
permission or CONTROL permission on that availability group.

Examples
The following example drops the AccountsAG availability group.

DROP AVAILABILITY GROUP AccountsAG;

Related Content
How It Works: DROP AVAILABILITY GROUP Behaviors (CSS SQL Server Engineers blog)

See Also
ALTER AVAILABILITY GROUP (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
Remove an Availability Group (SQL Server)
DROP BROKER PRIORITY (Transact-SQL)
5/4/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a conversation priority from the current database.
Transact-SQL Syntax Conventions

Syntax
DROP BROKER PRIORITY ConversationPriorityName
[;]

Arguments
ConversationPriorityName
Specifies the name of the conversation priority to be removed.

Remarks
When you drop a conversation priority, any existing conversations continue to operate with the priority levels they
were assigned from the conversation priority.

Permissions
Permission for dropping a conversation priority defaults to members of the db_ddladmin or db_owner fixed
database roles, and to the sysadmin fixed server role. Requires ALTER permission on the database.

Examples
The following example drops the conversation priority named InitiatorAToTargetPriority .

DROP BROKER PRIORITY InitiatorAToTargetPriority;

See Also
ALTER BROKER PRIORITY (Transact-SQL )
CREATE BROKER PRIORITY (Transact-SQL )
sys.conversation_priorities (Transact-SQL )
DROP CERTIFICATE (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a certificate from the database.

IMPORTANT
A backup of the certificate used for database encryption should be retained even if the encryption is no longer enabled on a
database. Even though the database is not encrypted anymore, parts of the transaction log may still remain protected, and
the certificate may be needed for some operations until the full backup of the database is performed. The certificate is also
needed to be able to restore from the backups created at the time the database was encrypted.

Transact-SQL Syntax Conventions

Syntax
DROP CERTIFICATE certificate_name

Arguments
certificate_name
Is the unique name by which the certificate is known in the database.

Remarks
Certificates can only be dropped if no entities are associated with them.

Permissions
Requires CONTROL permission on the certificate.

Examples
The following example drops the certificate Shipping04 from the AdventureWorks2012 database.

USE AdventureWorks2012;
DROP CERTIFICATE Shipping04;

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


The following example drops the certificate Shipping04 .

USE master;
DROP CERTIFICATE Shipping04;
See Also
BACKUP CERTIFICATE (Transact-SQL )
CREATE CERTIFICATE (Transact-SQL )
ALTER CERTIFICATE (Transact-SQL )
Encryption Hierarchy
EVENTDATA (Transact-SQL )
DROP COLUMN ENCRYPTION KEY (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a column encryption key from a database.
Transact-SQL Syntax Conventions

Syntax
DROP COLUMN ENCRYPTION KEY key_name [;]

Arguments
key_name
Is the name of the column encryption key to be dropped from the database.

Remarks
A column encryption key cannot be dropped if it is used to encrypt any column in the database. All columns using
the column encryption key must first be dropped.
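Before dropping a key, a query along these lines (a sketch using the catalog views; the key name matches the
example below) can list columns that still depend on it:

SELECT OBJECT_NAME(c.object_id) AS table_name, c.name AS column_name
FROM sys.columns AS c
INNER JOIN sys.column_encryption_keys AS cek
    ON c.column_encryption_key_id = cek.column_encryption_key_id
WHERE cek.name = 'MyCEK';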

Permissions
Requires ALTER ANY COLUMN ENCRYPTION KEY permission on the database.

Examples
A. Dropping a column encryption key
The following example drops a column encryption key called MyCEK .

DROP COLUMN ENCRYPTION KEY MyCEK;


GO

See Also
Always Encrypted (Database Engine)
CREATE COLUMN ENCRYPTION KEY (Transact-SQL )
ALTER COLUMN ENCRYPTION KEY (Transact-SQL )
CREATE COLUMN MASTER KEY (Transact-SQL )
DROP COLUMN MASTER KEY (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a column master key from a database. This is a metadata operation.
Transact-SQL Syntax Conventions

Syntax
DROP COLUMN MASTER KEY key_name;

Arguments
key_name
The name of the column master key.

Remarks
The column master key can only be dropped if there are no column encryption key values encrypted with the
column master key. To drop column encryption key values, use the DROP COLUMN ENCRYPTION KEY
statement.

Permissions
Requires ALTER ANY COLUMN MASTER KEY permission on the database.

Examples
A. Dropping a column master key
The following example drops a column master key called MyCMK .

DROP COLUMN MASTER KEY MyCMK;


GO

See Also
CREATE COLUMN MASTER KEY (Transact-SQL )
CREATE COLUMN ENCRYPTION KEY (Transact-SQL )
DROP COLUMN ENCRYPTION KEY (Transact-SQL )
Always Encrypted (Database Engine)
sys.column_master_keys (Transact-SQL )
DROP CONTRACT (Transact-SQL)
5/4/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing contract from a database.
Transact-SQL Syntax Conventions

Syntax
DROP CONTRACT contract_name
[ ; ]

Arguments
contract_name
The name of the contract to drop. Server, database, and schema names cannot be specified.

Remarks
You cannot drop a contract if any services or conversation priorities refer to the contract.
When you drop a contract, Service Broker ends any existing conversations that use the contract with an error.

Permissions
Permission for dropping a contract defaults to the owner of the contract, members of the db_ddladmin or
db_owner fixed database roles, and members of the sysadmin fixed server role.

Examples
The following example removes the contract //Adventure-Works.com/Expenses/ExpenseSubmission from the database.

DROP CONTRACT
[//Adventure-Works.com/Expenses/ExpenseSubmission] ;

See Also
ALTER BROKER PRIORITY (Transact-SQL )
ALTER SERVICE (Transact-SQL )
CREATE CONTRACT (Transact-SQL )
DROP BROKER PRIORITY (Transact-SQL )
DROP SERVICE (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP CREDENTIAL (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a credential from the server.
Transact-SQL Syntax Conventions

Syntax
DROP CREDENTIAL credential_name

Arguments
credential_name
Is the name of the credential to remove from the server.

Remarks
To drop the secret associated with a credential without dropping the credential itself, use ALTER CREDENTIAL.
Information about credentials is visible in the sys.credentials catalog view.

WARNING
Proxies are associated with a credential. Deleting a credential that is used by a proxy leaves the associated proxy in an
unusable state. When dropping a credential used by a proxy, delete the proxy by using sp_delete_proxy (Transact-
SQL), and re-create the associated proxy by using sp_add_proxy (Transact-SQL).

Permissions
Requires ALTER ANY CREDENTIAL permission. If dropping a system credential, requires CONTROL SERVER
permission.

Examples
The following example removes the credential called Saddles .

DROP CREDENTIAL Saddles;


GO

See Also
Credentials (Database Engine)
CREATE CREDENTIAL (Transact-SQL )
ALTER CREDENTIAL (Transact-SQL )
DROP DATABASE SCOPED CREDENTIAL (Transact-SQL )
sys.credentials (Transact-SQL )
DROP CRYPTOGRAPHIC PROVIDER (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a cryptographic provider within SQL Server.
Transact-SQL Syntax Conventions

Syntax
DROP CRYPTOGRAPHIC PROVIDER provider_name

Arguments
provider_name
Is the name of the Extensible Key Management provider.

Remarks
To delete an Extensible Key Management (EKM ) provider, all sessions that use the provider must be stopped.
An EKM provider can only be dropped if there are no credentials mapped to it.
If there are keys mapped to an EKM provider when it is dropped, the GUIDs for the keys remain stored in SQL
Server. If a provider is created later with the same key GUIDs, the keys will be reused.

Permissions
Requires CONTROL SERVER permission.

Examples
The following example drops a cryptographic provider called SecurityProvider .

/* First, disable provider to perform the upgrade.


This will terminate all open cryptographic sessions. */
ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider
SET ENABLED = OFF;
GO
/* Drop the provider. */
DROP CRYPTOGRAPHIC PROVIDER SecurityProvider;
GO

See Also
Extensible Key Management (EKM )
CREATE CRYPTOGRAPHIC PROVIDER (Transact-SQL )
ALTER CRYPTOGRAPHIC PROVIDER (Transact-SQL )
CREATE SYMMETRIC KEY (Transact-SQL )
DROP DATABASE (Transact-SQL)
5/3/2018 • 4 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more user databases or database snapshots from an instance of SQL Server.
Transact-SQL Syntax Conventions

Syntax
-- SQL Server Syntax
DROP DATABASE [ IF EXISTS ] { database_name | database_snapshot_name } [ ,...n ] [;]

-- Azure SQL Database, Azure SQL Data Warehouse and Parallel Data Warehouse Syntax
DROP DATABASE database_name [;]

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the database only if it already exists.
database_name
Specifies the name of the database to be removed. To display a list of databases, use the sys.databases catalog
view.
database_snapshot_name
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the name of a database snapshot to be removed.

General Remarks
A database can be dropped regardless of its state: offline, read-only, suspect, and so on. To display the current
state of a database, use the sys.databases catalog view.
A dropped database can be re-created only by restoring a backup. Database snapshots cannot be backed up
and, therefore, cannot be restored.
When a database is dropped, the master database should be backed up.
Dropping a database deletes the database from an instance of SQL Server and deletes the physical disk files
used by the database. If the database or any one of its files is offline when it is dropped, the disk files are not
deleted. These files can be deleted manually by using Windows Explorer. To remove a database from the current
server without deleting the files from the file system, use sp_detach_db.
WARNING
Dropping a database that has FILE_SNAPSHOT backups associated with it will succeed, but the database files that have
associated snapshots will not be deleted to avoid invalidating the backups referring to these database files. The file will be
truncated, but will not be physically deleted in order to keep the FILE_SNAPSHOT backups intact. For more information,
see SQL Server Backup and Restore with Microsoft Azure Blob Storage Service. Applies to: SQL Server 2016 (13.x)
through current version.

SQL Server
Dropping a database snapshot deletes the database snapshot from an instance of SQL Server and deletes the
physical NTFS File System sparse files used by the snapshot. For information about using sparse files by
database snapshots, see Database Snapshots (SQL Server). Dropping a database snapshot clears the plan cache
for the instance of SQL Server. Clearing the plan cache causes a recompilation of all subsequent execution plans
and can cause a sudden, temporary decrease in query performance. For each cleared cachestore in the plan
cache, the SQL Server error log contains the following informational message: " SQL Server has encountered
%d occurrence(s) of cachestore flush for the '%s' cachestore (part of plan cache) due to some database
maintenance or reconfigure operations". This message is logged every five minutes as long as the cache is
flushed within that time interval.

Interoperability
SQL Server
To drop a database published for transactional replication, or published or subscribed to merge replication, you
must first remove replication from the database. If a database is damaged or replication cannot first be removed
or both, in most cases you still can drop the database by using ALTER DATABASE to set the database offline and
then dropping it.
If the database is involved in log shipping, remove log shipping before dropping the database. For more
information, see About Log Shipping (SQL Server).

Limitations and Restrictions


System databases cannot be dropped.
The DROP DATABASE statement must run in autocommit mode and is not allowed in an explicit or implicit
transaction. Autocommit mode is the default transaction management mode.
You cannot drop a database currently being used. This means open for reading or writing by any user. One way
to remove users from the database is to use ALTER DATABASE to set the database to SINGLE_USER.

WARNING
This is not a fail-proof approach: the first connection made by any other thread can take the single available
connection, causing your connection to fail. SQL Server does not provide a built-in way to drop databases under load.
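A common sketch of this pattern, using the database name from example A:

USE master;
ALTER DATABASE Sales SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE Sales;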

SQL Server
Any database snapshots on a database must be dropped before the database can be dropped.
Dropping a database enabled for Stretch Database does not remove the remote data. If you want to delete the
remote data, you have to remove it manually.
Azure SQL Database
You must be connected to the master database to drop a database.
The DROP DATABASE statement must be the only statement in a SQL batch and you can drop only one
database at a time.
Azure SQL Data Warehouse
You must be connected to the master database to drop a database.
The DROP DATABASE statement must be the only statement in a SQL batch and you can drop only one
database at a time.

Permissions
SQL Server
Requires the CONTROL permission on the database, or ALTER ANY DATABASE permission, or membership
in the db_owner fixed database role.
Azure SQL Database
Only the server-level principal login (created by the provisioning process) or members of the dbmanager
database role can drop a database.
Parallel Data Warehouse
Requires the CONTROL permission on the database, or ALTER ANY DATABASE permission, or membership
in the db_owner fixed database role.

Examples
A. Dropping a single database
The following example removes the Sales database.

DROP DATABASE Sales;

B. Dropping multiple databases


Applies to: SQL Server 2008 through SQL Server 2017.
The following example removes each of the listed databases.

DROP DATABASE Sales, NewSales;

C. Dropping a database snapshot


Applies to: SQL Server 2008 through SQL Server 2017.
The following example removes a database snapshot, named sales_snapshot0600 , without affecting the source
database.

DROP DATABASE sales_snapshot0600;

See Also
ALTER DATABASE (Transact-SQL )
CREATE DATABASE (SQL Server Transact-SQL )
EVENTDATA (Transact-SQL )
sys.databases (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-
SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a database audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions

Syntax
DROP DATABASE AUDIT SPECIFICATION audit_specification_name
[ ; ]

Arguments
audit_specification_name
Name of an existing audit specification object.

Remarks
A DROP DATABASE AUDIT SPECIFICATION removes the metadata for the audit specification, but not the audit
data collected before the DROP command was issued. You must set the state of a database audit specification to
OFF using ALTER DATABASE AUDIT SPECIFICATION before it can be dropped.

Permissions
Users with the ALTER ANY DATABASE AUDIT permission can drop database audit specifications.

Examples
A. Dropping a Database Audit Specification
The following example drops an audit called HIPAA_Audit_DB_Specification .

DROP DATABASE AUDIT SPECIFICATION HIPAA_Audit_DB_Specification;


GO

For a full example of creating an audit, see SQL Server Audit (Database Engine).

See Also
CREATE SERVER AUDIT (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL )
DROP SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL )
DROP SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
sys.dm_audit_class_type_map (Transact-SQL )
Create a Server Audit and Server Audit Specification
DROP DATABASE ENCRYPTION KEY (Transact-SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a database encryption key that is used in transparent database encryption. For more information about
transparent database encryption, see Transparent Data Encryption (TDE ).

IMPORTANT
The backup of the certificate that was protecting the database encryption key should be retained even if the encryption is no
longer enabled on a database. Even though the database is not encrypted anymore, parts of the transaction log may still
remain protected, and the certificate may be needed for some operations until the full backup of the database is performed.

Transact-SQL Syntax Conventions

Syntax
DROP DATABASE ENCRYPTION KEY

Remarks
If the database is encrypted, you must first remove encryption from the database by using the ALTER DATABASE
statement. Wait for decryption to complete before removing the database encryption key. For more information
about the ALTER DATABASE statement, see ALTER DATABASE SET Options (Transact-SQL ). To view the state of
the database, use the sys.dm_database_encryption_keys dynamic management view.

Permissions
Requires CONTROL permission on the database.

Examples
The following example removes the database encryption and drops the database encryption key.

ALTER DATABASE AdventureWorks2012


SET ENCRYPTION OFF;
GO
/* Wait for decryption operation to complete, look for a
value of 1 in the query below. */
SELECT encryption_state
FROM sys.dm_database_encryption_keys;
GO
USE AdventureWorks2012;
GO
DROP DATABASE ENCRYPTION KEY;
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


The following example removes the TDE encryption and then drops the database encryption key.

ALTER DATABASE AdventureWorksPDW2012


SET ENCRYPTION OFF;
GO
/* Wait for decryption operation to complete, look for a
value of 1 in the query below. */
WITH dek_encryption_state AS
(
SELECT ISNULL(db_map.database_id, dek.database_id) AS database_id, encryption_state
FROM sys.dm_pdw_nodes_database_encryption_keys AS dek
INNER JOIN sys.pdw_nodes_pdw_physical_databases AS node_db_map
ON dek.database_id = node_db_map.database_id AND dek.pdw_node_id = node_db_map.pdw_node_id
LEFT JOIN sys.pdw_database_mappings AS db_map
ON node_db_map.physical_name = db_map.physical_name
INNER JOIN sys.dm_pdw_nodes AS nodes
ON nodes.pdw_node_id = dek.pdw_node_id
WHERE dek.encryptor_thumbprint <> 0x
)
SELECT TOP 1 encryption_state
FROM dek_encryption_state
WHERE dek_encryption_state.database_id = DB_ID('AdventureWorksPDW2012')
ORDER BY (CASE encryption_state WHEN 3 THEN -1 ELSE encryption_state END) DESC;
GO
USE AdventureWorksPDW2012;
GO
DROP DATABASE ENCRYPTION KEY;
GO

See Also
Transparent Data Encryption (TDE )
SQL Server Encryption
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
ALTER DATABASE SET Options (Transact-SQL )
CREATE DATABASE ENCRYPTION KEY (Transact-SQL )
ALTER DATABASE ENCRYPTION KEY (Transact-SQL )
sys.dm_database_encryption_keys (Transact-SQL )
DROP DATABASE SCOPED CREDENTIAL (Transact-
SQL)
5/3/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Removes a database scoped credential from the server.
Transact-SQL Syntax Conventions

Syntax
DROP DATABASE SCOPED CREDENTIAL credential_name

Arguments
credential_name
Is the name of the database scoped credential to remove from the server.

Remarks
To drop the secret associated with a database scoped credential without dropping the database scoped credential
itself, use ALTER CREDENTIAL.
Information about database scoped credentials is visible in the sys.database_scoped_credentials catalog view.

Permissions
Requires ALTER permission on the credential.

Examples
The following example removes the database scoped credential called AppCred .

DROP DATABASE SCOPED CREDENTIAL AppCred;


GO

See Also
Credentials (Database Engine)
CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL )
ALTER DATABASE SCOPED CREDENTIAL (Transact-SQL )
sys.database_scoped_credentials
CREATE CREDENTIAL (Transact-SQL )
sys.credentials (Transact-SQL )
DROP DEFAULT (Transact-SQL)
5/3/2018 • 2 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more user-defined defaults from the current database.

IMPORTANT
DROP DEFAULT will be removed in the next version of Microsoft SQL Server. Do not use DROP DEFAULT in new
development work, and plan to modify applications that currently use them. Instead, use default definitions that you can
create by using the DEFAULT keyword of ALTER TABLE or CREATE TABLE.

Transact-SQL Syntax Conventions

Syntax
DROP DEFAULT [ IF EXISTS ] { [ schema_name . ] default_name } [ ,...n ] [ ; ]

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the default only if it already exists.
schema_name
Is the name of the schema to which the default belongs.
default_name
Is the name of an existing default. To see a list of defaults that exist, execute sp_help. Defaults must comply with
the rules for identifiers. Specifying the default schema name is optional.

Remarks
Before dropping a default, unbind the default by executing sp_unbindefault if the default is currently bound to a
column or an alias data type.
After a default is dropped from a column that allows for null values, NULL is inserted in that position when rows
are added and no value is explicitly supplied. After a default is dropped from a NOT NULL column, an error
message is returned when rows are added and no value is explicitly supplied. These rows are added later as part of
the typical INSERT statement behavior.

Permissions
To execute DROP DEFAULT, at a minimum, a user must have ALTER permission on the schema to which the
default belongs.

Examples
A. Dropping a default
If a default has not been bound to a column or to an alias data type, it can just be dropped using DROP DEFAULT.
The following example removes the user-created default named datedflt .

USE AdventureWorks2012;
GO
IF EXISTS (SELECT name FROM sys.objects
WHERE name = 'datedflt'
AND type = 'D')
DROP DEFAULT datedflt;
GO

Beginning with SQL Server 2016 (13.x) you can use the following syntax.

DROP DEFAULT IF EXISTS datedflt;


GO

B. Dropping a default that has been bound to a column


The following example unbinds the default associated with the Phone column of the Person.Contact table and
then drops the default named phonedflt .

USE AdventureWorks2012;
GO
BEGIN
EXEC sp_unbindefault 'Person.Contact.Phone'
DROP DEFAULT phonedflt
END;
GO

See Also
CREATE DEFAULT (Transact-SQL )
sp_helptext (Transact-SQL )
sp_help (Transact-SQL )
sp_unbindefault (Transact-SQL )
DROP ENDPOINT (Transact-SQL)
5/4/2018 • 1 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing endpoint.
Transact-SQL Syntax Conventions

Syntax
DROP ENDPOINT endPointName

Arguments
endPointName
Is the name of the endpoint to be removed.

Remarks
The ENDPOINT DDL statements cannot be executed inside a user transaction.

Permissions
User must be a member of the sysadmin fixed server role, the owner of the endpoint, or have been granted
CONTROL permission on the endpoint.

Examples
The following example removes a previously created endpoint called sql_endpoint .

DROP ENDPOINT sql_endpoint;

See Also
CREATE ENDPOINT (Transact-SQL )
ALTER ENDPOINT (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP EXTERNAL DATA SOURCE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a PolyBase external data source.
Transact-SQL Syntax Conventions

Syntax
-- Drop an external data source
DROP EXTERNAL DATA SOURCE external_data_source_name
[;]

Arguments
external_data_source_name
The name of the external data source to drop.

Metadata

To view a list of external data sources use the sys.external_data_sources system view.

SELECT * FROM sys.external_data_sources;

Permissions
Requires ALTER ANY EXTERNAL DATA SOURCE.

Locking
Takes a shared lock on the external data source object.

General Remarks
Dropping an external data source does not remove the external data.

Examples
A. Using basic syntax

DROP EXTERNAL DATA SOURCE mydatasource;

See Also
CREATE EXTERNAL DATA SOURCE (Transact-SQL )
DROP EXTERNAL FILE FORMAT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a PolyBase external file format.
Transact-SQL Syntax Conventions

Syntax
-- Drop an external file format
DROP EXTERNAL FILE FORMAT external_file_format_name
[;]

Arguments
external_file_format_name
The name of the external file format to drop.

Metadata

To view a list of external file formats use the sys.external_file_formats (Transact-SQL ) system view.

SELECT * FROM sys.external_file_formats;

Permissions
Requires ALTER ANY EXTERNAL FILE FORMAT.

General Remarks
Dropping an external file format does not remove the external data.

Locking
Takes a shared lock on the external file format object.

Examples
A. Using basic syntax

DROP EXTERNAL FILE FORMAT myfileformat;

See Also
CREATE EXTERNAL FILE FORMAT (Transact-SQL )
DROP EXTERNAL LIBRARY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Deletes an existing package library. Package libraries are used by supported external runtimes, such as R or
Python.

Syntax
DROP EXTERNAL LIBRARY library_name
[ AUTHORIZATION owner_name ];

Arguments
library_name
Specifies the name of an existing package library.
Libraries are scoped to the user. Library names must be unique within the context of a specific user or owner.
owner_name
Specifies the name of the user or role that owns the external library.
Database owners can delete libraries created by other users.

Permissions
To delete a library requires the privilege ALTER ANY EXTERNAL LIBRARY. By default, any database owner, or the
owner of the object, can also delete an external library.
Return values
An informational message is returned if the statement was successful.

Remarks
Unlike other DROP statements in SQL Server, this statement supports specifying an optional authorization clause.
This allows dbo or users in the db_owner role to drop a package library uploaded by a regular user in the
database.

Examples
Add the custom R package, customPackage , to a database:

CREATE EXTERNAL LIBRARY customPackage
FROM (CONTENT = 'C:\temp\customPackage_v1.1.zip')
WITH (LANGUAGE = 'R');
GO

Delete the customPackage library.

DROP EXTERNAL LIBRARY customPackage;

See also
CREATE EXTERNAL LIBRARY (Transact-SQL )
ALTER EXTERNAL LIBRARY (Transact-SQL )
sys.external_library_files
sys.external_libraries
DROP EXTERNAL RESOURCE POOL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Deletes a Resource Governor external resource pool used to define resources for external processes. For R
Services the external pool governs rterm.exe , BxlServer.exe , and other processes spawned by them. External
resource pools are created by using CREATE EXTERNAL RESOURCE POOL (Transact-SQL ) and modified by
using ALTER EXTERNAL RESOURCE POOL (Transact-SQL ).
Transact-SQL Syntax Conventions.

Syntax
DROP EXTERNAL RESOURCE POOL pool_name

Arguments
pool_name
The name of the external resource pool to be deleted.

Remarks
You cannot drop an external resource pool if it contains workload groups.
You cannot drop the Resource Governor default or internal pools.
The drop takes effect only after ALTER RESOURCE GOVERNOR RECONFIGURE is executed, as the example below shows.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states. For
more information, see Resource Governor.

Permissions
Requires CONTROL SERVER permission.

Examples
The following example drops the external resource pool named ex_pool .

DROP EXTERNAL RESOURCE POOL ex_pool;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

See Also
external scripts enabled Server Configuration Option
SQL Server R Services
Known Issues for SQL Server R Services
CREATE EXTERNAL RESOURCE POOL (Transact-SQL )
ALTER EXTERNAL RESOURCE POOL (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL )
DROP EXTERNAL TABLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a PolyBase external table from a database. This does not delete the external data.
Transact-SQL Syntax Conventions

Syntax
DROP EXTERNAL TABLE [ database_name . [schema_name ] . | schema_name . ] table_name
[;]

Arguments
[ database_name . [schema_name] . | schema_name . ] table_name
The one- to three-part name of the external table to remove. The table name can optionally include the schema, or
the database and schema.

Permissions
Requires ALTER permission on the schema to which the table belongs.

General Remarks
Dropping an external table removes all table-related metadata. It does not delete the external data.

Examples
A. Using basic syntax

DROP EXTERNAL TABLE SalesPerson;
DROP EXTERNAL TABLE dbo.SalesPerson;
DROP EXTERNAL TABLE EasternDivision.dbo.SalesPerson;

B. Dropping an external table from the current database


The following example removes the ProductVendor1 external table from the current database. Only the table
metadata is removed; the external data is not deleted.

DROP EXTERNAL TABLE ProductVendor1;

C. Dropping a table from another database


The following example drops the SalesPerson table in the EasternDivision database.

DROP EXTERNAL TABLE EasternDivision.dbo.SalesPerson;


See Also
CREATE EXTERNAL TABLE (Transact-SQL )
DROP EVENT NOTIFICATION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an event notification trigger from the current database.
Transact-SQL Syntax Conventions

Syntax
DROP EVENT NOTIFICATION notification_name [ ,...n ]
ON { SERVER | DATABASE | QUEUE queue_name }
[ ; ]

Arguments
notification_name
Is the name of the event notification to remove. Multiple event notifications can be specified. To see a list of
currently created event notifications, use sys.event_notifications (Transact-SQL ).
SERVER
Indicates the scope of the event notification applies to the current server. SERVER must be specified if it was
specified when the event notification was created.
DATABASE
Indicates the scope of the event notification applies to the current database. DATABASE must be specified if it was
specified when the event notification was created.
QUEUE queue_name
Indicates the scope of the event notification applies to the queue specified by queue_name. QUEUE must be
specified if it was specified when the event notification was created. queue_name is the name of the queue and
must also be specified.

Remarks
If an event notification fires within a transaction and is dropped within the same transaction, the event notification
instance is sent, and then the event notification is dropped.

Permissions
To drop an event notification that is scoped at the database level, at a minimum, a user must be the owner of the
event notification or have ALTER ANY DATABASE EVENT NOTIFICATION permission in the current database.
To drop an event notification that is scoped at the server level, at a minimum, a user must be the owner of the
event notification or have ALTER ANY EVENT NOTIFICATION permission in the server.
To drop an event notification on a specific queue, at a minimum, a user must be the owner of the event notification
or have ALTER permission on the parent queue.
Examples
The following example creates a database-scoped event notification, then drops it:

USE AdventureWorks2012;
GO
CREATE EVENT NOTIFICATION NotifyALTER_T1
ON DATABASE
FOR ALTER_TABLE
TO SERVICE 'NotifyService',
'8140a771-3c4b-4479-8ac0-81008ab17984';
GO
DROP EVENT NOTIFICATION NotifyALTER_T1
ON DATABASE;

See Also
CREATE EVENT NOTIFICATION (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.event_notifications (Transact-SQL )
sys.events (Transact-SQL )
DROP EVENT SESSION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an event session.
Transact-SQL Syntax Conventions

Syntax
DROP EVENT SESSION event_session_name
ON SERVER

Arguments
event_session_name
Is the name of an existing event session.

Remarks
When you drop an event session, all configuration information, such as targets and session parameters, is
completely removed.
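
DROP EVENT SESSION has no IF EXISTS clause in the syntax above. As a sketch, a conditional drop can be written with an existence check against sys.server_event_sessions; the session name matches the example below.

IF EXISTS (SELECT * FROM sys.server_event_sessions
    WHERE name = N'evt_spin_lock_diagnosis')
    DROP EVENT SESSION evt_spin_lock_diagnosis ON SERVER;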

Permissions
Requires the ALTER ANY EVENT SESSION permission.

Examples
The following example shows how to drop an event session.

DROP EVENT SESSION evt_spin_lock_diagnosis
ON SERVER;

See Also
CREATE EVENT SESSION (Transact-SQL )
ALTER EVENT SESSION (Transact-SQL )
sys.server_event_sessions (Transact-SQL )
DROP FULLTEXT CATALOG (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a full-text catalog from a database. You must drop all full-text indexes associated with the catalog before
you drop the catalog.
Transact-SQL Syntax Conventions

Syntax
DROP FULLTEXT CATALOG catalog_name

Arguments
catalog_name
Is the name of the catalog to be removed. If catalog_name does not exist, Microsoft SQL Server returns an error
and does not perform the DROP operation. The filegroup of the full-text catalog must not be marked OFFLINE or
READONLY for the command to succeed.

Permissions
User must have DROP permission on the full-text catalog or be a member of the db_owner, or db_ddladmin
fixed database roles.
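
Examples
The following minimal sketch removes a full-text catalog; the catalog name ftCatalog is hypothetical, and any full-text indexes associated with the catalog must be dropped first.

DROP FULLTEXT CATALOG ftCatalog;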

See Also
sys.fulltext_catalogs (Transact-SQL )
ALTER FULLTEXT CATALOG (Transact-SQL )
CREATE FULLTEXT CATALOG (Transact-SQL )
Full-Text Search
DROP FULLTEXT INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a full-text index from a specified table or indexed view.
Transact-SQL Syntax Conventions

Syntax
DROP FULLTEXT INDEX ON table_name

Arguments
table_name
Is the name of the table or indexed view containing the full-text index to be removed.

Remarks
You do not need to drop all columns from the full-text index before using the DROP FULLTEXT INDEX command.

Permissions
The user must have ALTER permission on the table or indexed view, or be a member of the sysadmin fixed server
role, or db_owner or db_ddladmin fixed database roles.

Examples
The following example drops the full-text index that exists on the JobCandidate table.

USE AdventureWorks2012;
GO
DROP FULLTEXT INDEX ON HumanResources.JobCandidate;
GO

See Also
sys.fulltext_indexes (Transact-SQL )
ALTER FULLTEXT INDEX (Transact-SQL )
CREATE FULLTEXT INDEX (Transact-SQL )
Full-Text Search
DROP FULLTEXT STOPLIST (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a full-text stoplist from the database in SQL Server.
Transact-SQL Syntax Conventions

IMPORTANT
CREATE FULLTEXT STOPLIST is supported only for compatibility level 100 and higher. For compatibility levels 80 and 90, the
system stoplist is always assigned to the database.

Syntax
DROP FULLTEXT STOPLIST stoplist_name
;

Arguments
stoplist_name
Is the name of the full-text stoplist to drop from the database.

Remarks
DROP FULLTEXT STOPLIST fails if any full-text indexes refer to the full-text stoplist being dropped.
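
As a sketch, the following query finds the tables whose full-text indexes still refer to a stoplist before you drop it; the stoplist name matches the example below.

SELECT OBJECT_NAME(fti.object_id) AS table_name
FROM sys.fulltext_indexes AS fti
JOIN sys.fulltext_stoplists AS sl
    ON fti.stoplist_id = sl.stoplist_id
WHERE sl.name = N'myStoplist';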

Permissions
To drop a stoplist requires having DROP permission on the stoplist or membership in the db_owner or
db_ddladmin fixed database roles.

Examples
The following example drops a full-text stoplist named myStoplist .

DROP FULLTEXT STOPLIST myStoplist;

See Also
ALTER FULLTEXT STOPLIST (Transact-SQL )
CREATE FULLTEXT STOPLIST (Transact-SQL )
sys.fulltext_stoplists (Transact-SQL )
sys.fulltext_stopwords (Transact-SQL )
DROP FUNCTION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more user-defined functions from the current database. User-defined functions are created by
using CREATE FUNCTION and modified by using ALTER FUNCTION.
DROP FUNCTION supports natively compiled, scalar user-defined functions. For more information, see Scalar
User-Defined Functions for In-Memory OLTP.
Transact-SQL Syntax Conventions

Syntax
-- SQL Server, Azure SQL Database

DROP FUNCTION [ IF EXISTS ] { [ schema_name. ] function_name } [ ,...n ]
[;]

-- Azure SQL Data Warehouse, Parallel Data Warehouse

DROP FUNCTION [ schema_name. ] function_name
[;]

Arguments
IF EXISTS
Conditionally drops the function only if it already exists. Available beginning with SQL Server 2016 and in SQL
Database.
schema_name
Is the name of the schema to which the user-defined function belongs.
function_name
Is the name of the user-defined function or functions to be removed. Specifying the schema name is optional. The
server name and database name cannot be specified.

Remarks
DROP FUNCTION will fail if there are Transact-SQL functions or views in the database that reference this
function and were created by using SCHEMABINDING, or if there are computed columns, CHECK constraints, or
DEFAULT constraints that reference the function.
DROP FUNCTION will fail if there are computed columns that reference this function and have been indexed.
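
As a sketch, the sys.dm_sql_referencing_entities dynamic management function can list such referencing objects before the drop; the function name below is taken from Example A.

SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities (N'Sales.fn_SalesByStore', N'OBJECT');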

Permissions
To execute DROP FUNCTION, at a minimum, a user must have ALTER permission on the schema to which the
function belongs, or CONTROL permission on the function.
Examples
A. Dropping a function
The following example drops the fn_SalesByStore user-defined function from the Sales schema in the
AdventureWorks2012 sample database. To create this function, see Example B in CREATE FUNCTION (Transact-
SQL ).

DROP FUNCTION Sales.fn_SalesByStore;

See Also
ALTER FUNCTION (Transact-SQL )
CREATE FUNCTION (Transact-SQL )
OBJECT_ID (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.sql_modules (Transact-SQL )
sys.parameters (Transact-SQL )
DROP INDEX (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more relational, spatial, filtered, or XML indexes from the current database. You can drop a
clustered index and move the resulting table to another filegroup or partition scheme in a single transaction by
specifying the MOVE TO option.
The DROP INDEX statement does not apply to indexes created by defining PRIMARY KEY or UNIQUE
constraints. To remove the constraint and corresponding index, use ALTER TABLE with the DROP CONSTRAINT
clause.

IMPORTANT
The syntax defined in <drop_backward_compatible_index> will be removed in a future version of Microsoft SQL Server.
Avoid using this syntax in new development work, and plan to modify applications that currently use the feature. Use the
syntax specified under <drop_relational_or_xml_index> instead. XML indexes cannot be dropped using backward
compatible syntax.

Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server (All options except filegroup and filestream apply to Azure SQL Database.)

DROP INDEX [ IF EXISTS ]
{ <drop_relational_or_xml_or_spatial_index> [ ,...n ]
| <drop_backward_compatible_index> [ ,...n ]
}

<drop_relational_or_xml_or_spatial_index> ::=
index_name ON <object>
[ WITH ( <drop_clustered_index_option> [ ,...n ] ) ]

<drop_backward_compatible_index> ::=
[ owner_name. ] table_or_view_name.index_name

<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}

<drop_clustered_index_option> ::=
{
MAXDOP = max_degree_of_parallelism
| ONLINE = { ON | OFF }
| MOVE TO { partition_scheme_name ( column_name )
| filegroup_name
| "default"
}
[ FILESTREAM_ON { partition_scheme_name
| filestream_filegroup_name
| "default" } ]
}

-- Syntax for Azure SQL Database

DROP INDEX
{ <drop_relational_or_xml_or_spatial_index> [ ,...n ]
}

<drop_relational_or_xml_or_spatial_index> ::=
index_name ON <object>

<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DROP INDEX index_name ON [ database_name . [schema_name ] . | schema_name . ] table_name
[;]

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the index only if it already exists.
index_name
Is the name of the index to be dropped.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table or view belongs.
table_or_view_name
Is the name of the table or view associated with the index. Spatial indexes are supported only on tables.
To display a report of the indexes on an object, use the sys.indexes catalog view.
Azure SQL Database supports the three-part name format database_name.
[schema_name].object_name when the database_name is the current database or the database_name is tempdb
and the object_name starts with #.
<drop_clustered_index_option>
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Controls clustered index options. These options cannot be used with other index types.
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database (Performance Levels P2 and P3 only).
Overrides the max degree of parallelism configuration option for the duration of the index operation. For more
information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to limit
the number of processors used in a parallel plan execution. The maximum is 64 processors.

IMPORTANT
MAXDOP is not allowed for spatial indexes or XML indexes.

max_degree_of_parallelism can be:


1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel index operation to the specified number.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.

NOTE
Parallel index operations are not available in every edition of SQL Server. For a list of features that are supported by the
editions of SQL Server, see Editions and Supported Features for SQL Server 2016.

ONLINE = ON | OFF
Applies to: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
Specifies whether underlying tables and associated indexes are available for queries and data modification during
the index operation. The default is OFF.
ON
Long-term table locks are not held. This allows queries or updates to the underlying table to continue.
OFF
Table locks are applied and the table is unavailable for the duration of the index operation.
The ONLINE option can only be specified when you drop clustered indexes. For more information, see the
Remarks section.

NOTE
Online index operations are not available in every edition of SQL Server. For a list of features that are supported by the
editions of SQL Server, see Editions and Supported Features for SQL Server 2016.

MOVE TO { partition_scheme_name ( column_name ) | filegroup_name | "default" }
Applies to: SQL Server 2008 through SQL Server 2017. SQL Database supports "default" as the filegroup name.
Specifies a location to move the data rows that currently are in the leaf level of the clustered index. The data is
moved to the new location in the form of a heap. You can specify either a partition scheme or filegroup as the
new location, but the partition scheme or filegroup must already exist. MOVE TO is not valid for indexed views or
nonclustered indexes. If a partition scheme or filegroup is not specified, the resulting table will be located in the
same partition scheme or filegroup as was defined for the clustered index.
If a clustered index is dropped by using MOVE TO, any nonclustered indexes on the base table are rebuilt, but
they remain in their original filegroups or partition schemes. If the base table is moved to a different filegroup or
partition scheme, the nonclustered indexes are not moved to coincide with the new location of the base table
(heap). Therefore, even if the nonclustered indexes were previously aligned with the clustered index, they might
no longer be aligned with the heap. For more information about partitioned index alignment, see Partitioned
Tables and Indexes.
partition_scheme_name ( column_name )
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies a partition scheme as the location for the resulting table. The partition scheme must have already been
created by executing either CREATE PARTITION SCHEME or ALTER PARTITION SCHEME. If no location is
specified and the table is partitioned, the table is included in the same partition scheme as the existing clustered
index.
The column name in the scheme is not restricted to the columns in the index definition. Any column in the base
table can be specified.
filegroup_name
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies a filegroup as the location for the resulting table. If no location is specified and the table is not
partitioned, the resulting table is included in the same filegroup as the clustered index. The filegroup must already
exist.
"default"
Specifies the default location for the resulting table.

NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in MOVE TO
"default" or MOVE TO [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be set ON for the current
session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).

FILESTREAM_ON { partition_scheme_name | filestream_filegroup_name | "default" }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies a location to move the FILESTREAM table that currently is in the leaf level of the clustered index. The
data is moved to the new location in the form of a heap. You can specify either a partition scheme or filegroup as
the new location, but the partition scheme or filegroup must already exist. FILESTREAM ON is not valid for
indexed views or nonclustered indexes. If a partition scheme is not specified, the data will be located in the same
partition scheme as was defined for the clustered index.
partition_scheme_name
Specifies a partition scheme for the FILESTREAM data. The partition scheme must have already been created by
executing either CREATE PARTITION SCHEME or ALTER PARTITION SCHEME. If no location is specified and
the table is partitioned, the table is included in the same partition scheme as the existing clustered index.
If you specify a partition scheme for MOVE TO, you must use the same partition scheme for FILESTREAM ON.
filestream_filegroup_name
Specifies a FILESTREAM filegroup for FILESTREAM data. If no location is specified and the table is not
partitioned, the data is included in the default FILESTREAM filegroup.
"default"
Specifies the default location for the FILESTREAM data.

NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in MOVE TO
"default" or MOVE TO [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current
session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).

Remarks
When a nonclustered index is dropped, the index definition is removed from metadata and the index data pages
(the B-tree) are removed from the database files. When a clustered index is dropped, the index definition is
removed from metadata and the data rows that were stored in the leaf level of the clustered index are stored in
the resulting unordered table, a heap. All the space previously occupied by the index is regained. This space can
then be used for any database object.
An index cannot be dropped if the filegroup in which it is located is offline or set to read-only.
When the clustered index of an indexed view is dropped, all nonclustered indexes and auto-created statistics on
the same view are automatically dropped. Manually created statistics are not dropped.
The syntax table_or_view_name.index_name is maintained for backward compatibility. An XML index or spatial
index cannot be dropped by using the backward compatible syntax.
When indexes with 128 extents or more are dropped, the Database Engine defers the actual page deallocations,
and their associated locks, until after the transaction commits.
Sometimes indexes are dropped and re-created to reorganize or rebuild the index, such as to apply a new fill
factor value or to reorganize data after a bulk load. To do this, using ALTER INDEX is more efficient, especially for
clustered indexes. ALTER INDEX REBUILD has optimizations to prevent the overhead of rebuilding the
nonclustered indexes.

Using Options with DROP INDEX


You can set the following index options when you drop a clustered index: MAXDOP, ONLINE, and MOVE TO.
Use MOVE TO to drop the clustered index and move the resulting table to another filegroup or partition scheme
in a single transaction.
When you specify ONLINE = ON, queries and modifications to the underlying data and associated nonclustered
indexes are not blocked by the DROP INDEX transaction. Only one clustered index can be dropped online at a
time. For a complete description of the ONLINE option, see CREATE INDEX (Transact-SQL ).
You cannot drop a clustered index online if the index is disabled on a view, or contains text, ntext, image,
varchar(max), nvarchar(max), varbinary(max), or xml columns in the leaf-level data rows.
Using the ONLINE = ON and MOVE TO options requires additional temporary disk space.
After an index is dropped, the resulting heap appears in the sys.indexes catalog view with NULL in the name
column. To view the table name, join sys.indexes with sys.tables on object_id. For an example query, see
example D.
On multiprocessor computers that are running SQL Server 2005 Enterprise Edition or later, DROP INDEX may
use more processors to perform the scan and sort operations associated with dropping the clustered index, just
like other queries do. You can manually configure the number of processors that are used to run the DROP
INDEX statement by specifying the MAXDOP index option. For more information, see Configure Parallel Index
Operations.
When a clustered index is dropped, the corresponding heap partitions retain their data compression setting
unless the partitioning scheme is modified. If the partitioning scheme is changed, all partitions are rebuilt to an
uncompressed state (DATA_COMPRESSION = NONE ). To drop a clustered index and change the partitioning
scheme requires the following two steps:
1. Drop the clustered index.
2. Modify the table by using an ALTER TABLE ... REBUILD ... option specifying the compression option.
When a clustered index is dropped OFFLINE, only the upper levels of clustered indexes are removed; therefore,
the operation is quite fast. When a clustered index is dropped ONLINE, SQL Server rebuilds the heap two times,
once for step 1 and once for step 2. For more information about data compression, see Data Compression.

XML Indexes
Options cannot be specified when you drop an XML index. Also, you cannot use the
table_or_view_name.index_name syntax. When a primary XML index is dropped, all associated secondary XML
indexes are automatically dropped. For more information, see XML Indexes (SQL Server).

Spatial Indexes
Spatial indexes are supported only on tables. When you drop a spatial index, you cannot specify any options or
use the table_or_view_name.index_name syntax. The correct syntax is as follows:
DROP INDEX spatial_index_name ON spatial_table_name;
For more information about spatial indexes, see Spatial Indexes Overview.

Permissions
To execute DROP INDEX, at a minimum, ALTER permission on the table or view is required. This permission is
granted by default to the sysadmin fixed server role and the db_ddladmin and db_owner fixed database roles.

Examples
A. Dropping an index
The following example deletes the index IX_ProductVendor_BusinessEntityID on the Purchasing.ProductVendor
table in the AdventureWorks2012 database.

DROP INDEX IX_ProductVendor_BusinessEntityID
ON Purchasing.ProductVendor;
GO

B. Dropping multiple indexes


The following example deletes two indexes in a single transaction in the AdventureWorks2012 database.

DROP INDEX
IX_PurchaseOrderHeader_EmployeeID ON Purchasing.PurchaseOrderHeader,
IX_Address_StateProvinceID ON Person.Address;
GO

C. Dropping a clustered index online and setting the MAXDOP option


The following example deletes a clustered index with the ONLINE option set to ON and MAXDOP set to 2 .
Because the MOVE TO option was not specified, the resulting table is stored in the same filegroup as the index.
This example uses the AdventureWorks2012 database.
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.

DROP INDEX AK_BillOfMaterials_ProductAssemblyID_ComponentID_StartDate
ON Production.BillOfMaterials WITH (ONLINE = ON, MAXDOP = 2);
GO

D. Dropping a clustered index online and moving the table to a new filegroup
The following example deletes a clustered index online and moves the resulting table (heap) to the filegroup
NewGroup by using the MOVE TO clause. The sys.indexes , sys.tables , and sys.filegroups catalog views are
queried to verify the index and table placement in the filegroups before and after the move. (Beginning with SQL
Server 2016 (13.x) you can use the DROP INDEX IF EXISTS syntax.)
Applies to: SQL Server 2008 through SQL Server 2017.
--Create a clustered index on the PRIMARY filegroup if the index does not exist.
CREATE UNIQUE CLUSTERED INDEX
AK_BillOfMaterials_ProductAssemblyID_ComponentID_StartDate
ON Production.BillOfMaterials (ProductAssemblyID, ComponentID,
StartDate)
ON [PRIMARY];
GO
-- Verify filegroup location of the clustered index.
SELECT t.name AS [Table Name], i.name AS [Index Name], i.type_desc,
i.data_space_id, f.name AS [Filegroup Name]
FROM sys.indexes AS i
JOIN sys.filegroups AS f ON i.data_space_id = f.data_space_id
JOIN sys.tables as t ON i.object_id = t.object_id
AND i.object_id = OBJECT_ID(N'Production.BillOfMaterials','U')
GO
--Create filegroup NewGroup if it does not exist.
IF NOT EXISTS (SELECT name FROM sys.filegroups
WHERE name = N'NewGroup')
BEGIN
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP NewGroup;
ALTER DATABASE AdventureWorks2012
ADD FILE (NAME = File1,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\File1.ndf')
TO FILEGROUP NewGroup;
END
GO
--Verify new filegroup
SELECT * from sys.filegroups;
GO
-- Drop the clustered index and move the BillOfMaterials table to
-- the Newgroup filegroup.
-- Set ONLINE = OFF to execute this example on editions other than Enterprise Edition.
DROP INDEX AK_BillOfMaterials_ProductAssemblyID_ComponentID_StartDate
ON Production.BillOfMaterials
WITH (ONLINE = ON, MOVE TO NewGroup);
GO
-- Verify filegroup location of the moved table.
SELECT t.name AS [Table Name], i.name AS [Index Name], i.type_desc,
i.data_space_id, f.name AS [Filegroup Name]
FROM sys.indexes AS i
JOIN sys.filegroups AS f ON i.data_space_id = f.data_space_id
JOIN sys.tables as t ON i.object_id = t.object_id
AND i.object_id = OBJECT_ID(N'Production.BillOfMaterials','U');
GO

E. Dropping a PRIMARY KEY constraint online


Indexes that are created as the result of creating PRIMARY KEY or UNIQUE constraints cannot be dropped by
using DROP INDEX. They are dropped using the ALTER TABLE DROP CONSTRAINT statement. For more
information, see ALTER TABLE.
The following example deletes a clustered index with a PRIMARY KEY constraint by dropping the constraint. The
TransactionHistoryArchive table has no FOREIGN KEY constraints. If it did, those constraints would have to be
removed first.

-- Set ONLINE = OFF to execute this example on editions other than Enterprise Edition.
ALTER TABLE Production.TransactionHistoryArchive
DROP CONSTRAINT PK_TransactionHistoryArchive_TransactionID
WITH (ONLINE = ON);

F. Dropping an XML index


The following example drops an XML index on the ProductModel table in the AdventureWorks2012 database.

DROP INDEX PXML_ProductModel_CatalogDescription
ON Production.ProductModel;

G. Dropping a clustered index on a FILESTREAM table


The following example deletes a clustered index online and moves the resulting table (heap) and FILESTREAM
data to the MyPartitionScheme partition scheme by using both the MOVE TO clause and the FILESTREAM ON clause.
Applies to: SQL Server 2008 through SQL Server 2017.

DROP INDEX PK_MyClusteredIndex
ON dbo.MyTable
WITH (MOVE TO MyPartitionScheme,
FILESTREAM_ON MyPartitionScheme);
GO

See Also
ALTER INDEX (Transact-SQL )
ALTER PARTITION SCHEME (Transact-SQL )
ALTER TABLE (Transact-SQL )
CREATE INDEX (Transact-SQL )
CREATE PARTITION SCHEME (Transact-SQL )
CREATE SPATIAL INDEX (Transact-SQL )
CREATE XML INDEX (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.indexes (Transact-SQL )
sys.tables (Transact-SQL )
sys.filegroups (Transact-SQL )
sp_spaceused (Transact-SQL )
DROP INDEX (Selective XML Indexes)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing selective XML index or secondary selective XML index in SQL Server. For more information, see
Selective XML Indexes (SXI).
Transact-SQL Syntax Conventions

Syntax
DROP INDEX index_name ON <object>
[ WITH ( <drop_index_option> [ ,...n ] ) ]

<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}

<drop_index_option> ::=
{
MAXDOP = max_degree_of_parallelism
| ONLINE = { ON | OFF }
}

Arguments
index_name
Is the name of the existing index to drop.
<object> Is the table that contains the indexed XML column. Use one of the following formats:
database_name.schema_name.table_name

database_name..table_name

schema_name.table_name

table_name

<drop_index_option> For information about the drop index options, see DROP INDEX (Transact-SQL ).

Security
Permissions
ALTER permission on the table or view is required to run DROP INDEX. This permission is granted by default to
the sysadmin fixed server role and the db_ddladmin and db_owner fixed database roles.

Example
The following example shows a DROP INDEX statement.

DROP INDEX sxi_index ON tbl;

See Also
Selective XML Indexes (SXI)
Create, Alter, and Drop Selective XML Indexes
DROP LOGIN (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a SQL Server login account.
Transact-SQL Syntax Conventions

Syntax
DROP LOGIN login_name

Arguments
login_name
Specifies the name of the login to be dropped.

Remarks
A login cannot be dropped while it is logged in. A login that owns any securable, server-level object, or SQL
Server Agent job cannot be dropped.
You can drop a login to which database users are mapped; however, this will create orphaned users. For more
information, see Troubleshoot Orphaned Users (SQL Server).
In SQL Database, login data required to authenticate a connection and server-level firewall rules are temporarily
cached in each database. This cache is periodically refreshed. To force a refresh of the authentication cache and
make sure that a database has the latest version of the logins table, execute DBCC FLUSHAUTHCACHE
(Transact-SQL ).
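
As a sketch, the following query, run in a given database, lists the database users that would be orphaned by dropping a login; the users are matched to the login by SID, and the login name WilliJo is taken from the example below.

SELECT dp.name AS user_name
FROM sys.database_principals AS dp
JOIN sys.server_principals AS sp
    ON dp.sid = sp.sid
WHERE sp.name = N'WilliJo';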

Permissions
Requires ALTER ANY LOGIN permission on the server.

Examples
A. Dropping a login
The following example drops the login WilliJo .

DROP LOGIN WilliJo;
GO

See Also
CREATE LOGIN (Transact-SQL )
ALTER LOGIN (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP MASTER KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes the master key from the current database.
Transact-SQL Syntax Conventions

Syntax
DROP MASTER KEY

Arguments
This statement takes no arguments.

Remarks
The drop will fail if any private key in the database is protected by the master key.
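
The following query is a sketch for listing the certificates and asymmetric keys whose private keys are protected by the database master key and therefore block the drop.

SELECT name, pvt_key_encryption_type_desc
FROM sys.certificates
WHERE pvt_key_encryption_type_desc = N'ENCRYPTED_BY_MASTER_KEY'
UNION ALL
SELECT name, pvt_key_encryption_type_desc
FROM sys.asymmetric_keys
WHERE pvt_key_encryption_type_desc = N'ENCRYPTED_BY_MASTER_KEY';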

Permissions
Requires CONTROL permission on the database.

Examples
The following example removes the master key for the AdventureWorks2012 database.

USE AdventureWorks2012;
DROP MASTER KEY;
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


The following example removes the master key.

USE master;
DROP MASTER KEY;
GO

See Also
CREATE MASTER KEY (Transact-SQL )
OPEN MASTER KEY (Transact-SQL )
CLOSE MASTER KEY (Transact-SQL )
BACKUP MASTER KEY (Transact-SQL )
RESTORE MASTER KEY (Transact-SQL )
ALTER MASTER KEY (Transact-SQL )
Encryption Hierarchy
DROP MESSAGE TYPE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing message type.
Transact-SQL Syntax Conventions

Syntax
DROP MESSAGE TYPE message_type_name
[ ; ]

Arguments
message_type_name
The name of the message type to delete. Server, database, and schema names cannot be specified.

Permissions
Permission for dropping a message type defaults to the owner of the message type, members of the db_ddladmin
or db_owner fixed database roles, and members of the sysadmin fixed server role.

Remarks
You cannot drop a message type if any contracts refer to the message type.
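
As a sketch, the following query lists the contracts that still refer to a message type; the message type name matches the example below.

SELECT c.name AS contract_name
FROM sys.service_contracts AS c
JOIN sys.service_contract_message_usages AS u
    ON c.service_contract_id = u.service_contract_id
JOIN sys.service_message_types AS m
    ON u.message_type_id = m.message_type_id
WHERE m.name = N'//Adventure-Works.com/Expenses/SubmitExpense';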

Examples
The following example deletes the //Adventure-Works.com/Expenses/SubmitExpense message type from the database.

DROP MESSAGE TYPE [//Adventure-Works.com/Expenses/SubmitExpense] ;

See Also
ALTER MESSAGE TYPE (Transact-SQL )
CREATE MESSAGE TYPE (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP PARTITION FUNCTION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a partition function from the current database. Partition functions are created by using CREATE
PARTITION FUNCTION and modified by using ALTER PARTITION FUNCTION.
Transact-SQL Syntax Conventions

Syntax
DROP PARTITION FUNCTION partition_function_name [ ; ]

Arguments
partition_function_name
Is the name of the partition function that is to be dropped.

Remarks
A partition function can be dropped only if there are no partition schemes currently using the partition function. If
there are partition schemes using the partition function, DROP PARTITION FUNCTION returns an error.
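
As a sketch, the following query lists any partition schemes that still use a partition function; the function name myRangePF matches the example below.

SELECT ps.name AS partition_scheme_name
FROM sys.partition_schemes AS ps
JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
WHERE pf.name = N'myRangePF';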

Permissions
Any one of the following permissions can be used to execute DROP PARTITION FUNCTION:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition function was created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition function was created.

Examples
The following example assumes the partition function myRangePF has been created in the current database.

DROP PARTITION FUNCTION myRangePF;

See Also
CREATE PARTITION FUNCTION (Transact-SQL )
ALTER PARTITION FUNCTION (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.partition_functions (Transact-SQL )
sys.partition_parameters (Transact-SQL )
sys.partition_range_values (Transact-SQL )
sys.partitions (Transact-SQL )
sys.tables (Transact-SQL )
sys.indexes (Transact-SQL )
sys.index_columns (Transact-SQL )
DROP PARTITION SCHEME (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a partition scheme from the current database. Partition schemes are created by using CREATE
PARTITION SCHEME and modified by using ALTER PARTITION SCHEME.
Transact-SQL Syntax Conventions

Syntax
DROP PARTITION SCHEME partition_scheme_name [ ; ]

Arguments
partition_scheme_name
Is the name of the partition scheme to be dropped.

Remarks
A partition scheme can be dropped only if there are no tables or indexes currently using the partition scheme. If
there are tables or indexes using the partition scheme, DROP PARTITION SCHEME returns an error. DROP
PARTITION SCHEME does not remove the filegroups themselves.
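
As a sketch, the following query lists the tables and indexes that still use a partition scheme; the scheme name myRangePS1 matches the example below.

SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name
FROM sys.indexes AS i
JOIN sys.partition_schemes AS ps
    ON i.data_space_id = ps.data_space_id
WHERE ps.name = N'myRangePS1';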

Permissions
The following permissions can be used to execute DROP PARTITION SCHEME:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition scheme was created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition scheme was created.

Examples
The following example drops the partition scheme myRangePS1 from the current database:

DROP PARTITION SCHEME myRangePS1;

See Also
CREATE PARTITION SCHEME (Transact-SQL )
ALTER PARTITION SCHEME (Transact-SQL )
sys.partition_schemes (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.data_spaces (Transact-SQL )
sys.destination_data_spaces (Transact-SQL )
sys.partitions (Transact-SQL )
sys.tables (Transact-SQL )
sys.indexes (Transact-SQL )
sys.index_columns (Transact-SQL )
DROP PROCEDURE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more stored procedures or procedure groups from the current database in SQL Server 2017.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

DROP { PROC | PROCEDURE } [ IF EXISTS ] { [ schema_name. ] procedure } [ ,...n ]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DROP { PROC | PROCEDURE } { [ schema_name. ] procedure_name }

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the procedure only if it already exists.
schema_name
The name of the schema to which the procedure belongs. A server name or database name cannot be specified.
procedure
The name of the stored procedure or stored procedure group to be removed. Individual procedures within a
numbered procedure group cannot be dropped; the whole procedure group is dropped.

Best Practices
Before removing any stored procedure, check for dependent objects and modify these objects accordingly.
Dropping a stored procedure can cause dependent objects and scripts to fail when these objects are not updated.
For more information, see View the Dependencies of a Stored Procedure

Metadata

To display a list of existing procedures, query the sys.objects catalog view. To display the procedure definition,
query the sys.sql_modules catalog view.
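
For example, the following sketch lists the Transact-SQL stored procedures in the current database.

SELECT SCHEMA_NAME(schema_id) AS schema_name, name
FROM sys.objects
WHERE type = 'P'; -- 'P' = SQL stored procedure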

Security
Permissions
Requires CONTROL permission on the procedure, or ALTER permission on the schema to which the procedure
belongs, or membership in the db_ddladmin fixed database role.

Examples
The following example removes the dbo.uspMyProc stored procedure in the current database.

DROP PROCEDURE dbo.uspMyProc;
GO

The following example removes several stored procedures in the current database.

DROP PROCEDURE dbo.uspGetSalesbyMonth, dbo.uspUpdateSalesQuotes, dbo.uspGetSalesByYear;

The following example removes the dbo.uspMyProc stored procedure if it exists but does not cause an error if the
procedure does not exist. This syntax is new in SQL Server 2016 (13.x).

DROP PROCEDURE IF EXISTS dbo.uspMyProc;
GO

See Also
ALTER PROCEDURE (Transact-SQL )
CREATE PROCEDURE (Transact-SQL )
sys.objects (Transact-SQL )
sys.sql_modules (Transact-SQL )
Delete a Stored Procedure
DROP QUEUE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing queue.
Transact-SQL Syntax Conventions

Syntax
DROP QUEUE <object>
[ ; ]

<object> ::=
{
[ database_name . [ schema_name ] . | schema_name . ]
queue_name
}

Arguments
database_name
The name of the database that contains the queue to drop. When no database_name is provided, defaults to the
current database.
schema_name (object)
The name of the schema that owns the queue to drop. When no schema_name is provided, defaults to the default
schema for the current user.
queue_name
The name of the queue to drop.

Remarks
You cannot drop a queue if any services refer to the queue.
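
As a sketch, the following query lists the services that still refer to a queue; the queue name ExpenseQueue matches the example below.

SELECT s.name AS service_name
FROM sys.services AS s
JOIN sys.service_queues AS q
    ON s.service_queue_id = q.object_id
WHERE q.name = N'ExpenseQueue';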

Permissions
Permission for dropping a queue defaults to the owner of the queue, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.

Examples
The following example drops the ExpenseQueue queue from the current database.

DROP QUEUE ExpenseQueue ;

See Also
CREATE QUEUE (Transact-SQL )
ALTER QUEUE (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP REMOTE SERVICE BINDING (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a remote service binding.
Transact-SQL Syntax Conventions

Syntax
DROP REMOTE SERVICE BINDING binding_name
[ ; ]

Arguments
binding_name
Is the name of the remote service binding to drop. Server, database, and schema names cannot be specified.

Permissions
Permission for dropping a remote service binding defaults to the owner of the remote service binding, members
of the db_owner fixed database role, and members of the sysadmin fixed server role.

Examples
The following example deletes the remote service binding APBinding from the database.

DROP REMOTE SERVICE BINDING APBinding ;

See Also
CREATE REMOTE SERVICE BINDING (Transact-SQL )
ALTER REMOTE SERVICE BINDING (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a user-defined Resource Governor resource pool.
Transact-SQL Syntax Conventions.

Syntax
DROP RESOURCE POOL pool_name
[ ; ]

Arguments
pool_name
Is the name of an existing user-defined resource pool.

Remarks
You cannot drop a resource pool if it contains workload groups.
You cannot drop the Resource Governor default or internal pools.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states.
For more information, see Resource Governor.

Permissions
Requires CONTROL SERVER permission.

Examples
The following example drops the resource pool named big_pool .

DROP RESOURCE POOL big_pool;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

See Also
Resource Governor
CREATE RESOURCE POOL (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL )
CREATE WORKLOAD GROUP (Transact-SQL )
ALTER WORKLOAD GROUP (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
DROP ROLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a role from the database.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

DROP ROLE [ IF EXISTS ] role_name

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DROP ROLE role_name

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the role only if it already exists.
role_name
Specifies the role to be dropped from the database.

Remarks
Roles that own securables cannot be dropped from the database. To drop a database role that owns securables,
you must first transfer ownership of those securables or drop them from the database. Roles that have members
cannot be dropped from the database. To drop a role that has members, you must first remove members of the
role.
To remove members from a database role, use ALTER ROLE (Transact-SQL ).
You cannot use DROP ROLE to drop a fixed database role.
Information about role membership can be viewed in the sys.database_role_members catalog view.
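
For example, the following sketch removes a member and then drops the role; the member name RoleMember1 is hypothetical.

ALTER ROLE purchasing DROP MEMBER RoleMember1; -- RoleMember1 is a hypothetical user
GO
DROP ROLE purchasing;
GO
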
Caution
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL ).
To remove a server role, use DROP SERVER ROLE (Transact-SQL ).

Permissions
Requires ALTER ANY ROLE permission on the database, or CONTROL permission on the role, or membership
in the db_securityadmin fixed database role.

Examples
The following example drops the database role purchasing from the AdventureWorks2012 database.

DROP ROLE purchasing;
GO

See Also
CREATE ROLE (Transact-SQL )
ALTER ROLE (Transact-SQL )
Principals (Database Engine)
EVENTDATA (Transact-SQL )
sp_addrolemember (Transact-SQL )
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )
Security Functions (Transact-SQL )
DROP ROUTE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a route, deleting the information for the route from the routing table of the current database.
Transact-SQL Syntax Conventions

Syntax
DROP ROUTE route_name
[ ; ]

Arguments
route_name
The name of the route to drop. Server, database, and schema names cannot be specified.

Remarks
The routing table that stores the routes is a metadata table that can be read through the catalog view sys.routes.
The routing table can only be updated through the CREATE ROUTE, ALTER ROUTE, and DROP ROUTE
statements.
You can drop a route regardless of whether any conversations use the route. However, if there is no other route to
the remote service, messages for those conversations will remain in the transmission queue until a route to the
remote service is created or the conversation times out.
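
For example, the following sketch inspects the current routing table before a drop.

SELECT name, address, remote_service_name
FROM sys.routes;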

Permissions
Permission for dropping a route defaults to the owner of the route, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.

Examples
The following example deletes the ExpenseRoute route.

DROP ROUTE ExpenseRoute ;

See Also
ALTER ROUTE (Transact-SQL )
CREATE ROUTE (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.routes (Transact-SQL )
DROP RULE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more user-defined rules from the current database.

IMPORTANT
DROP RULE will be removed in the next version of Microsoft SQL Server. Do not use DROP RULE in new development work,
and plan to modify applications that currently use them. Instead, use CHECK constraints that you can create by using the
CHECK keyword of CREATE TABLE or ALTER TABLE. For more information, see Unique Constraints and Check Constraints.

Transact-SQL Syntax Conventions

Syntax
DROP RULE [ IF EXISTS ] { [ schema_name . ] rule_name } [ ,...n ] [ ; ]

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the rule only if it already exists.
schema_name
Is the name of the schema to which the rule belongs.
rule_name
Is the rule to be removed. Rule names must comply with the rules for identifiers. Specifying the rule schema name
is optional.

Remarks
To drop a rule, first unbind it if the rule is currently bound to a column or to an alias data type. To unbind the rule,
use sp_unbindrule. If the rule is bound when you try to drop it, an error message is displayed and the DROP
RULE statement is canceled.
After a rule is dropped, new data entered into the columns previously governed by the rule is entered without the
constraints of the rule. Existing data is not affected in any way.
The DROP RULE statement does not apply to CHECK constraints. For more information about dropping CHECK
constraints, see ALTER TABLE (Transact-SQL ).

Permissions
To execute DROP RULE, at a minimum, a user must have ALTER permission on the schema to which the rule
belongs.
Examples
The following example unbinds and then drops the rule named VendorID_rule .

EXEC sp_unbindrule 'Production.ProductVendor.VendorID';
DROP RULE VendorID_rule;
GO

See Also
CREATE RULE (Transact-SQL )
sp_bindrule (Transact-SQL )
sp_help (Transact-SQL )
sp_helptext (Transact-SQL )
sp_unbindrule (Transact-SQL )
USE (Transact-SQL )
DROP SCHEMA (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a schema from the database.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

DROP SCHEMA [ IF EXISTS ] schema_name

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DROP SCHEMA schema_name

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the schema only if it already exists.
schema_name
Is the name by which the schema is known within the database.

Remarks
The schema that is being dropped must not contain any objects. If the schema contains objects, the DROP
statement fails.
Information about schemas is visible in the sys.schemas catalog view.
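
As a sketch, the following query lists any objects that would block the drop of a schema; the schema name Sprockets matches the example below.

SELECT name, type_desc
FROM sys.objects
WHERE schema_id = SCHEMA_ID(N'Sprockets');
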
Caution
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that
schemas are equivalent to database users may no longer return correct results. Old catalog views, including
sysobjects, should not be used in a database in which any of the following DDL statements have ever been used:
CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE
ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL ).

Permissions
Requires CONTROL permission on the schema or ALTER ANY SCHEMA permission on the database.

Examples
The following example starts with a single CREATE SCHEMA statement. The statement creates the schema Sprockets
that is owned by Krishna and a table Sprockets.NineProngs , and then grants SELECT permission to Anibal and
denies SELECT permission to Hung-Fu .

CREATE SCHEMA Sprockets AUTHORIZATION Krishna
    CREATE TABLE NineProngs (source int, cost int, partnumber int)
    GRANT SELECT TO Anibal
    DENY SELECT TO [Hung-Fu];
GO

The following statements drop the schema. Note that you must first drop the table that is contained by the
schema.

DROP TABLE Sprockets.NineProngs;
DROP SCHEMA Sprockets;
GO

See Also
CREATE SCHEMA (Transact-SQL )
ALTER SCHEMA (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP SEARCH PROPERTY LIST (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a property list from the current database if the search property list is currently not associated with any full-
text index in the database.

Syntax
DROP SEARCH PROPERTY LIST property_list_name
;

Arguments
property_list_name
Is the name of the search property list to be dropped. property_list_name is an identifier.
To view the names of the existing property lists, use the sys.registered_search_property_lists catalog view, as
follows:

SELECT name FROM sys.registered_search_property_lists;

Remarks
You cannot drop a search property list from a database while the list is associated with any full-text index, and
attempts to do so fail. To drop a search property list from a given full-text index, use the ALTER FULLTEXT INDEX
statement, and specify the SET SEARCH PROPERTY LIST clause with either OFF or the name of another search
property list.
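For example, the following sketch detaches a property list from a full-text index; the table name is an assumption:

-- Table name is an assumption.
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
SET SEARCH PROPERTY LIST OFF;
GO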
To view the property lists on a server instance
sys.registered_search_property_lists (Transact-SQL )
To view the property lists associated with full-text indexes
sys.fulltext_indexes (Transact-SQL )
To remove a property list from a full-text index
ALTER FULLTEXT INDEX (Transact-SQL )

Permissions
Requires CONTROL permission on the search property list.
NOTE
The property list owner can grant CONTROL permissions on the list. By default, the user who creates a search property list
is its owner. The owner can be changed by using the ALTER AUTHORIZATION Transact-SQL statement.

Examples
The following example drops the JobCandidateProperties property list from the AdventureWorks2012 database.

DROP SEARCH PROPERTY LIST JobCandidateProperties;


GO

See Also
ALTER SEARCH PROPERTY LIST (Transact-SQL )
CREATE SEARCH PROPERTY LIST (Transact-SQL )
Search Document Properties with Search Property Lists
sys.registered_search_properties (Transact-SQL )
sys.registered_search_property_lists (Transact-SQL )
DROP SECURITY POLICY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Deletes a security policy.
Transact-SQL Syntax Conventions

Syntax
DROP SECURITY POLICY [ IF EXISTS ] [schema_name. ] security_policy_name
[;]

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the security policy only if it already exists.
schema_name
Is the name of the schema to which the security policy belongs.
security_policy_name
The name of the security policy. Security policy names must comply with the rules for identifiers and must be
unique within their schema.

Permissions
Requires the ALTER ANY SECURITY POLICY permission and ALTER permission on the schema.

Example
DROP SECURITY POLICY secPolicy;

See Also
Row-Level Security
CREATE SECURITY POLICY (Transact-SQL )
ALTER SECURITY POLICY (Transact-SQL )
sys.security_policies (Transact-SQL )
sys.security_predicates (Transact-SQL )
DROP SEQUENCE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a sequence object from the current database.
Transact-SQL Syntax Conventions

Syntax
DROP SEQUENCE [ IF EXISTS ] { [ database_name . [ schema_name ] . | schema_name. ] sequence_name } [ ,...n ]
[ ; ]

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the sequence only if it already exists.
database_name
Is the name of the database in which the sequence object was created.
schema_name
Is the name of the schema to which the sequence object belongs.
sequence_name
Is the name of the sequence to be dropped. Type is sysname.

Remarks
After generating a number, a sequence object has no continuing relationship to the number it generated, so the
sequence object can be dropped, even though the number generated is still in use.
A sequence object can be dropped while it is referenced by a stored procedure, or trigger, because it is not schema
bound. A sequence object cannot be dropped if it is referenced as a default value in a table. The error message will
list the object referencing the sequence.
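For example, the following sketch lists the objects that reference a sequence; the sequence name dbo.CountBy1 is borrowed from the example below:

-- Sequence name assumed from the example below.
SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
       OBJECT_NAME(d.referencing_id) AS referencing_object
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id = OBJECT_ID(N'dbo.CountBy1');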
To list all sequence objects in the database, execute the following statement.

SELECT sch.name + '.' + seq.name AS [Sequence schema and name]
FROM sys.sequences AS seq
JOIN sys.schemas AS sch
    ON seq.schema_id = sch.schema_id;
GO

Security
Permissions
Requires ALTER or CONTROL permission on the schema.
Audit
To audit DROP SEQUENCE, monitor the SCHEMA_OBJECT_CHANGE_GROUP.

Examples
The following example removes a sequence object named CountBy1 from the current database.

DROP SEQUENCE CountBy1;
GO

See Also
ALTER SEQUENCE (Transact-SQL )
CREATE SEQUENCE (Transact-SQL )
NEXT VALUE FOR (Transact-SQL )
Sequence Numbers
DROP SERVER AUDIT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a Server Audit Object using the SQL Server Audit feature. For more information on SQL Server Audit,
see SQL Server Audit (Database Engine).
Transact-SQL Syntax Conventions

Syntax
DROP SERVER AUDIT audit_name
[ ; ]

Remarks
You must set the state of an audit to the OFF option in order to make any changes to it. If DROP SERVER AUDIT
is run while the audit is enabled with any options other than STATE = OFF, you will receive a
MSG_NEED_AUDIT_DISABLED error message.
A DROP SERVER AUDIT removes the metadata for the Audit, but not the audit data that was collected before
the command was issued.
DROP SERVER AUDIT does not drop associated server or database audit specifications. These specifications
must be dropped manually or left orphaned and later mapped to a new server audit.
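For example, a sketch that drops a dependent server audit specification first; the specification name is an assumption:

-- Specification name is an assumption.
ALTER SERVER AUDIT SPECIFICATION HIPAA_Audit_Specification
WITH (STATE = OFF);
GO
DROP SERVER AUDIT SPECIFICATION HIPAA_Audit_Specification;
GO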

Permissions
To create, alter, or drop a server audit, principals require the ALTER ANY SERVER AUDIT or the CONTROL
SERVER permission.

Examples
The following example drops an audit called HIPAA_Audit .

ALTER SERVER AUDIT HIPAA_Audit
WITH (STATE = OFF);
GO
DROP SERVER AUDIT HIPAA_Audit;
GO

See Also
CREATE SERVER AUDIT (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL )
DROP SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
sys.dm_audit_class_type_map (Transact-SQL )
Create a Server Audit and Server Audit Specification
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a server audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions

Syntax
DROP SERVER AUDIT SPECIFICATION audit_specification_name
[ ; ]

Arguments
audit_specification_name
Name of an existing server audit specification object.

Remarks
A DROP SERVER AUDIT SPECIFICATION removes the metadata for the audit specification, but not the audit
data collected before the DROP command was issued. You must set the state of a server audit specification to
OFF using ALTER SERVER AUDIT SPECIFICATION before it can be dropped.
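For example, assuming the specification from the example below is still enabled:

-- Disable the specification before dropping it.
ALTER SERVER AUDIT SPECIFICATION HIPAA_Audit_Specification
WITH (STATE = OFF);
GO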

Permissions
Users with the ALTER ANY SERVER AUDIT permission can drop server audit specifications.

Examples
The following example drops a server audit specification called HIPAA_Audit_Specification .

DROP SERVER AUDIT SPECIFICATION HIPAA_Audit_Specification;
GO

For a full example about how to create an audit, see SQL Server Audit (Database Engine).

See Also
CREATE SERVER AUDIT (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL )
DROP SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
sys.dm_audit_class_type_map (Transact-SQL )
Create a Server Audit and Server Audit Specification
DROP SERVER ROLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a user-defined server role.
User-defined server roles are new in SQL Server 2012 (11.x).
Transact-SQL Syntax Conventions

Syntax
DROP SERVER ROLE role_name

Arguments
role_name
Specifies the user-defined server role to be dropped from the server.

Remarks
User-defined server roles that own securables cannot be dropped from the server. To drop a user-defined server
role that owns securables, you must first transfer ownership of those securables or delete them.
User-defined server roles that have members cannot be dropped. To drop a user-defined server role that has
members, you must first remove members of the role by using ALTER SERVER ROLE.
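For example, a minimal sketch; the role and member names are assumptions:

-- Role and member names are assumptions.
ALTER SERVER ROLE purchasing DROP MEMBER Ted;
GO
DROP SERVER ROLE purchasing;
GO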
Fixed server roles cannot be removed.
You can view information about role membership by querying the sys.server_role_members catalog view.

Permissions
Requires CONTROL permission on the server role or ALTER ANY SERVER ROLE permission.

Examples
A. To drop a server role
The following example drops the server role purchasing .

DROP SERVER ROLE purchasing;
GO

B. To view role membership


To view role membership, use the Server Role (Members) page in SQL Server Management Studio or execute
the following query:
SELECT SRM.role_principal_id, SP.name AS Role_Name,
SRM.member_principal_id, SP2.name AS Member_Name
FROM sys.server_role_members AS SRM
JOIN sys.server_principals AS SP
ON SRM.Role_principal_id = SP.principal_id
JOIN sys.server_principals AS SP2
ON SRM.member_principal_id = SP2.principal_id
ORDER BY SP.name, SP2.name

C. To view role ownership


To determine whether a server role owns another server role, execute the following query:

SELECT SP1.name AS RoleOwner, SP2.name AS Server_Role
FROM sys.server_principals AS SP1
JOIN sys.server_principals AS SP2
    ON SP1.principal_id = SP2.owning_principal_id
ORDER BY SP1.name;

See Also
ALTER ROLE (Transact-SQL )
CREATE ROLE (Transact-SQL )
Principals (Database Engine)
DROP ROLE (Transact-SQL )
EVENTDATA (Transact-SQL )
sp_addrolemember (Transact-SQL )
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )
DROP SERVICE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing service.
Transact-SQL Syntax Conventions

Syntax
DROP SERVICE service_name
[ ; ]

Arguments
service_name
The name of the service to drop. Server, database, and schema names cannot be specified.

Remarks
You cannot drop a service if any conversation priorities refer to it.
Dropping a service deletes all messages for the service from the queue that the service uses. Service Broker sends
an error to the remote side of any open conversations that use the service.

Permissions
Permission for dropping a service defaults to the owner of the service, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.

Examples
The following example drops the service //Adventure-Works.com/Expenses .

DROP SERVICE [//Adventure-Works.com/Expenses] ;

See Also
ALTER BROKER PRIORITY (Transact-SQL )
ALTER SERVICE (Transact-SQL )
CREATE SERVICE (Transact-SQL )
DROP BROKER PRIORITY (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP SIGNATURE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a digital signature from a stored procedure, function, trigger, or assembly.
Transact-SQL Syntax Conventions

Syntax
DROP [ COUNTER ] SIGNATURE FROM module_name
BY <crypto_list> [ ,...n ]

<crypto_list> ::=
CERTIFICATE cert_name
| ASYMMETRIC KEY Asym_key_name

Arguments
module_name
Is the name of a stored procedure, function, assembly, or trigger.
CERTIFICATE cert_name
Is the name of a certificate with which the stored procedure, function, assembly, or trigger is signed.
ASYMMETRIC KEY Asym_key_name
Is the name of an asymmetric key with which the stored procedure, function, assembly, or trigger is signed.

Remarks
Information about signatures is visible in the sys.crypt_properties catalog view.
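For example, the following sketch lists the signatures currently on a module; the module name is borrowed from the example below:

-- Module name taken from the example below.
SELECT cp.crypt_type_desc, cp.thumbprint
FROM sys.crypt_properties AS cp
WHERE cp.major_id = OBJECT_ID(N'HumanResources.uspUpdateEmployeeLogin');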

Permissions
Requires ALTER permission on the object and CONTROL permission on the certificate or asymmetric key. If an
associated private key is protected by a password, the user also must have the password.

Examples
The following example removes the signature of certificate HumanResourcesDP from the stored procedure
HumanResources.uspUpdateEmployeeLogin .

USE AdventureWorks2012;
DROP SIGNATURE FROM HumanResources.uspUpdateEmployeeLogin
BY CERTIFICATE HumanResourcesDP;
GO

See Also
sys.crypt_properties (Transact-SQL )
ADD SIGNATURE (Transact-SQL )
DROP STATISTICS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops statistics for multiple collections within the specified tables in the current database.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

DROP STATISTICS table.statistics_name | view.statistics_name [ ,...n ]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DROP STATISTICS [ schema_name . ] table_name.statistics_name
[;]

Arguments
table | view
Is the name of the target table or indexed view for which statistics should be dropped. Table and view names must
comply with the rules for Database Identifiers. Specifying the table or view owner name is optional.
statistics_name
Is the name of the statistics group to drop. Statistics names must comply with the rules for identifiers.

Remarks
Be careful when you drop statistics. Doing so may affect the execution plan chosen by the query optimizer.
Statistics on indexes cannot be dropped by using DROP STATISTICS. Statistics remain as long as the index exists.
For more information about displaying statistics, see DBCC SHOW_STATISTICS (Transact-SQL ).

Permissions
Requires ALTER permission on the table or view.

Examples
A. Dropping statistics from a table
The following example drops the statistics groups (collections) of two tables. The VendorCredit statistics group
(collection) of the Vendor table and the CustomerTotal statistics (collection) of the SalesOrderHeader table are
dropped.
-- Create the statistics groups.
USE AdventureWorks2012;
GO
CREATE STATISTICS VendorCredit
ON Purchasing.Vendor (Name, CreditRating)
WITH SAMPLE 50 PERCENT
CREATE STATISTICS CustomerTotal
ON Sales.SalesOrderHeader (CustomerID, TotalDue)
WITH FULLSCAN;
GO
DROP STATISTICS Purchasing.Vendor.VendorCredit, Sales.SalesOrderHeader.CustomerTotal;

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


B. Dropping statistics from a table
The following examples drop the CustomerStats1 statistics from table Customer .

DROP STATISTICS Customer.CustomerStats1;


DROP STATISTICS dbo.Customer.CustomerStats1;

See Also
ALTER DATABASE (Transact-SQL )
CREATE INDEX (Transact-SQL )
CREATE STATISTICS (Transact-SQL )
sys.stats (Transact-SQL )
sys.stats_columns (Transact-SQL )
DBCC SHOW_STATISTICS (Transact-SQL )
sp_autostats (Transact-SQL )
sp_createstats (Transact-SQL )
UPDATE STATISTICS (Transact-SQL )
EVENTDATA (Transact-SQL )
USE (Transact-SQL )
DROP SYMMETRIC KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a symmetric key from the current database.
Transact-SQL Syntax Conventions

Syntax
DROP SYMMETRIC KEY symmetric_key_name [REMOVE PROVIDER KEY]

Arguments
symmetric_key_name
Is the name of the symmetric key to be dropped.
REMOVE PROVIDER KEY
Removes an Extensible Key Management (EKM ) key from an EKM device. For more information about Extensible
Key Management, see Extensible Key Management (EKM ).

Remarks
If the key is open in the current session, the statement will fail.
If the symmetric key is mapped to an Extensible Key Management (EKM) key on an EKM device and the
REMOVE PROVIDER KEY option is not specified, the key will be dropped from the database but not from the
device, and a warning will be issued.

Permissions
Requires CONTROL permission on the symmetric key.

Examples
The following example removes a symmetric key named GailSammamishKey6 from the current database.

CLOSE SYMMETRIC KEY GailSammamishKey6;
DROP SYMMETRIC KEY GailSammamishKey6;
GO

See Also
CREATE SYMMETRIC KEY (Transact-SQL )
ALTER SYMMETRIC KEY (Transact-SQL )
Encryption Hierarchy
CLOSE SYMMETRIC KEY (Transact-SQL )
Extensible Key Management (EKM )
DROP SYNONYM (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a synonym from a specified schema.
Transact-SQL Syntax Conventions

Syntax
DROP SYNONYM [ IF EXISTS ] [ schema. ] synonym_name

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version)
Conditionally drops the synonym only if it already exists.
schema
Specifies the schema in which the synonym exists. If schema is not specified, SQL Server uses the default schema
of the current user.
synonym_name
Is the name of the synonym to be dropped.

Remarks
References to synonyms are not schema-bound; therefore, you can drop a synonym at any time. References to
dropped synonyms will be found only at run time.
Synonyms can be created, dropped and referenced in dynamic SQL.

Permissions
To drop a synonym, a user must satisfy at least one of the following conditions. The user must be:
The current owner of a synonym.
A grantee holding CONTROL on a synonym.
A grantee holding ALTER SCHEMA permission on the containing schema.

Examples
The following example first creates a synonym, MyProduct , and then drops the synonym.
USE tempdb;
GO
-- Create a synonym for the Product table in AdventureWorks2012.
CREATE SYNONYM MyProduct
FOR AdventureWorks2012.Production.Product;
GO
-- Drop synonym MyProduct.
USE tempdb;
GO
DROP SYNONYM MyProduct;
GO

See Also
CREATE SYNONYM (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP TABLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more table definitions and all data, indexes, triggers, constraints, and permission specifications
for those tables. Any view or stored procedure that references the dropped table must be explicitly dropped by
using DROP VIEW or DROP PROCEDURE. To report the dependencies on a table, use
sys.dm_sql_referencing_entities.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

DROP TABLE [ IF EXISTS ] [ database_name . [ schema_name ] . | schema_name . ] table_name [ ,...n ]
[ ; ]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DROP TABLE [ database_name . [ schema_name ] . | schema_name . ] table_name
[;]

Arguments
database_name
Is the name of the database in which the table was created.
Windows Azure SQL Database supports the three-part name format database_name.
[schema_name].object_name when the database_name is the current database or the database_name is tempdb
and the object_name starts with #. Windows Azure SQL Database does not support four-part names.
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the table only if it already exists.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to be removed.

Remarks
DROP TABLE cannot be used to drop a table that is referenced by a FOREIGN KEY constraint. The referencing
FOREIGN KEY constraint or the referencing table must first be dropped. If both the referencing table and the
table that holds the primary key are being dropped in the same DROP TABLE statement, the referencing table
must be listed first.
Multiple tables can be dropped in any database. If a table being dropped references the primary key of another
table that is also being dropped, the referencing table with the foreign key must be listed before the table holding
the primary key that is being referenced.
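For example, a minimal sketch with assumed table names, where dbo.OrderLines holds a foreign key that references dbo.Orders:

-- The referencing table is listed before the table it references (table names assumed).
DROP TABLE dbo.OrderLines, dbo.Orders;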
When a table is dropped, rules or defaults on the table lose their binding, and any constraints or triggers
associated with the table are automatically dropped. If you re-create a table, you must rebind the appropriate
rules and defaults, re-create any triggers, and add all required constraints.
If you delete all rows in a table by using DELETE tablename or use the TRUNCATE TABLE statement, the table
exists until it is dropped.
Large tables and indexes that use more than 128 extents are dropped in two separate phases: logical and physical.
In the logical phase, the existing allocation units used by the table are marked for deallocation and locked until the
transaction commits. In the physical phase, the IAM pages marked for deallocation are physically dropped in
batches.
If you drop a table that contains a VARBINARY (MAX) column with the FILESTREAM attribute, any data stored in
the file system will not be removed.

IMPORTANT
DROP TABLE and CREATE TABLE should not be executed on the same table in the same batch. Otherwise an unexpected
error may occur.

Permissions
Requires ALTER permission on the schema to which the table belongs, CONTROL permission on the table, or
membership in the db_ddladmin fixed database role.

Examples
A. Dropping a table in the current database
The following example removes the ProductVendor1 table and its data and indexes from the current database.

DROP TABLE ProductVendor1 ;

B. Dropping a table in another database


The following example drops the SalesPerson2 table in the AdventureWorks2012 database. The example can
be executed from any database on the server instance.

DROP TABLE AdventureWorks2012.dbo.SalesPerson2 ;

C. Dropping a temporary table


The following example creates a temporary table, tests for its existence, drops it, and tests again for its existence.
This example does not use the IF EXISTS syntax, which is available beginning with SQL Server 2016 (13.x).
CREATE TABLE #temptable (col1 int);
GO
INSERT INTO #temptable
VALUES (10);
GO
SELECT * FROM #temptable;
GO
IF OBJECT_ID(N'tempdb..#temptable', N'U') IS NOT NULL
DROP TABLE #temptable;
GO
--Test the drop.
SELECT * FROM #temptable;

D. Dropping a table using IF EXISTS


Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
The following example creates a table named T1. Then the second statement drops the table. The third statement
performs no action because the table is already deleted; however, it does not cause an error.

CREATE TABLE T1 (Col1 int);
GO
DROP TABLE IF EXISTS T1;
GO
DROP TABLE IF EXISTS T1;

See Also
ALTER TABLE (Transact-SQL )
CREATE TABLE (Transact-SQL )
DELETE (Transact-SQL )
sp_help (Transact-SQL )
sp_spaceused (Transact-SQL )
TRUNCATE TABLE (Transact-SQL )
DROP VIEW (Transact-SQL )
DROP PROCEDURE (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.sql_expression_dependencies (Transact-SQL )
DROP TRIGGER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more DML or DDL triggers from the current database.
Transact-SQL Syntax Conventions

Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger)

DROP TRIGGER [ IF EXISTS ] [schema_name.]trigger_name [ ,...n ] [ ; ]

-- Trigger on a CREATE, ALTER, DROP, GRANT, DENY, REVOKE or UPDATE statement (DDL Trigger)

DROP TRIGGER [ IF EXISTS ] trigger_name [ ,...n ]
ON { DATABASE | ALL SERVER }
[ ; ]

-- Trigger on a LOGON event (Logon Trigger)

DROP TRIGGER [ IF EXISTS ] trigger_name [ ,...n ]
ON ALL SERVER

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version, SQL Database).
Conditionally drops the trigger only if it already exists.
schema_name
Is the name of the schema to which a DML trigger belongs. DML triggers are scoped to the schema of the table or
view on which they are created. schema_name cannot be specified for DDL or logon triggers.
trigger_name
Is the name of the trigger to remove. To see a list of currently created triggers, use sys.server_assembly_modules
or sys.server_triggers.
DATABASE
Indicates the scope of the DDL trigger applies to the current database. DATABASE must be specified if it was also
specified when the trigger was created or modified.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates the scope of the DDL trigger applies to the current server. ALL SERVER must be specified if it was also
specified when the trigger was created or modified. ALL SERVER also applies to logon triggers.
NOTE
This option is not available in a contained database.

Remarks
You can remove a DML trigger by dropping it or by dropping the trigger table. When a table is dropped, all
associated triggers are also dropped.
When a trigger is dropped, information about the trigger is removed from the sys.objects, sys.triggers and
sys.sql_modules catalog views.
Multiple DDL triggers can be dropped per DROP TRIGGER statement only if all triggers were created using
identical ON clauses.
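For example, the following sketch drops two DDL triggers in one statement; the trigger names are assumptions, and both triggers are assumed to have been created with ON DATABASE:

-- Trigger names are assumptions; both triggers share the ON DATABASE clause.
DROP TRIGGER safety, ddl_log
ON DATABASE;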
To rename a trigger, use DROP TRIGGER and CREATE TRIGGER. To change the definition of a trigger, use ALTER
TRIGGER.
For more information about determining dependencies for a specific trigger, see sys.sql_expression_dependencies,
sys.dm_sql_referenced_entities (Transact-SQL ), and sys.dm_sql_referencing_entities (Transact-SQL ).
For more information about viewing the text of the trigger, see sp_helptext (Transact-SQL ) and sys.sql_modules
(Transact-SQL ).
For more information about viewing a list of existing triggers, see sys.triggers (Transact-SQL ) and
sys.server_triggers (Transact-SQL ).

Permissions
To drop a DML trigger requires ALTER permission on the table or view on which the trigger is defined.
To drop a DDL trigger defined with server scope (ON ALL SERVER ) or a logon trigger requires CONTROL
SERVER permission in the server. To drop a DDL trigger defined with database scope (ON DATABASE ) requires
ALTER ANY DATABASE DDL TRIGGER permission in the current database.

Examples
A. Dropping a DML trigger
The following example drops the employee_insupd trigger in the AdventureWorks2012 database. (Beginning with
SQL Server 2016 (13.x) you can use the DROP TRIGGER IF EXISTS syntax.)

IF OBJECT_ID ('employee_insupd', 'TR') IS NOT NULL
    DROP TRIGGER employee_insupd;

B. Dropping a DDL trigger


The following example drops DDL trigger safety .

IMPORTANT
Because DDL triggers are not schema-scoped and, therefore do not appear in the sys.objects catalog view, the OBJECT_ID
function cannot be used to query whether they exist in the database. Objects that are not schema-scoped must be queried
by using the appropriate catalog view. For DDL triggers, use sys.triggers.
DROP TRIGGER safety
ON DATABASE;

See Also
ALTER TRIGGER (Transact-SQL )
CREATE TRIGGER (Transact-SQL )
ENABLE TRIGGER (Transact-SQL )
DISABLE TRIGGER (Transact-SQL )
EVENTDATA (Transact-SQL )
Get Information About DML Triggers
sp_help (Transact-SQL )
sp_helptrigger (Transact-SQL )
sys.triggers (Transact-SQL )
sys.trigger_events (Transact-SQL )
sys.sql_modules (Transact-SQL )
sys.assembly_modules (Transact-SQL )
sys.server_triggers (Transact-SQL )
sys.server_trigger_events (Transact-SQL )
sys.server_sql_modules (Transact-SQL )
sys.server_assembly_modules (Transact-SQL )
DROP TYPE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an alias data type or a common language runtime (CLR ) user-defined type from the current database.
Transact-SQL Syntax Conventions

Syntax
DROP TYPE [ IF EXISTS ] [ schema_name. ] type_name [ ; ]

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the type only if it already exists.
schema_name
Is the name of the schema to which the alias or user-defined type belongs.
type_name
Is the name of the alias data type or the user-defined type you want to drop.

Remarks
The DROP TYPE statement will not execute when any of the following is true:
There are tables in the database that contain columns of the alias data type or the user-defined type.
Information about alias or user-defined type columns can be obtained by querying the sys.columns or
sys.column_type_usages catalog views.
There are computed columns, CHECK constraints, schema-bound views, and schema-bound functions
whose definitions reference the alias or user-defined type. Information about these references can be
obtained by querying the sys.sql_expression_dependencies catalog view.
There are functions, stored procedures, or triggers created in the database, and these routines use variables
and parameters of the alias or user-defined type. Information about alias or user-defined type parameters
can be obtained by querying the sys.parameters or sys.parameter_type_usages catalog views.
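For example, a minimal query for the first condition, assuming the type name ssn from the example below:

-- Type name assumed from the example below.
SELECT OBJECT_NAME(c.object_id) AS table_name, c.name AS column_name
FROM sys.columns AS c
JOIN sys.types AS t
    ON c.user_type_id = t.user_type_id
WHERE t.name = N'ssn';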

Permissions
Requires either CONTROL permission on type_name or ALTER permission on schema_name.

Examples
The following example assumes a type named ssn is already created in the current database.
DROP TYPE ssn ;

See Also
CREATE TYPE (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP USER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a user from the current database.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

DROP USER [ IF EXISTS ] user_name

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DROP USER user_name

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version, SQL Database).
Conditionally drops the user only if it already exists.
user_name
Specifies the name by which the user is identified inside this database.

Remarks
Users that own securables cannot be dropped from the database. Before dropping a database user that owns
securables, you must first drop or transfer ownership of those securables.
The guest user cannot be dropped, but guest user can be disabled by revoking its CONNECT permission by
executing REVOKE CONNECT FROM GUEST within any database other than master or tempdb.
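For example:

-- Run in any database other than master or tempdb.
REVOKE CONNECT FROM GUEST;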
Caution

Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER AUTHORIZATION.
In such databases you must instead use the new catalog views. The new catalog views take into account the
separation of principals and schemas that was introduced in SQL Server 2005. For more information about
catalog views, see Catalog Views (Transact-SQL ).

Permissions
Requires ALTER ANY USER permission on the database.
Examples
The following example removes database user AbolrousHazem from the AdventureWorks2012 database.

DROP USER AbolrousHazem;
GO

See Also
CREATE USER (Transact-SQL )
ALTER USER (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP VIEW (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more views from the current database. DROP VIEW can be executed against indexed views.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

DROP VIEW [ IF EXISTS ] [ schema_name . ] view_name [ ,...n ] [ ; ]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DROP VIEW [ schema_name . ] view_name
[;]

Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version, SQL Database).
Conditionally drops the view only if it already exists.
schema_name
Is the name of the schema to which the view belongs.
view_name
Is the name of the view to remove.

Remarks
When you drop a view, the definition of the view and other information about the view is deleted from the system
catalog. All permissions for the view are also deleted.
Any view on a table that is dropped by using DROP TABLE must be dropped explicitly by using DROP VIEW.
When executed against an indexed view, DROP VIEW automatically drops all indexes on a view. To display all
indexes on a view, use sp_helpindex.
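For example, using the view name from the example below:

EXEC sp_helpindex 'dbo.Reorder';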
When querying through a view, the Database Engine checks to make sure that all the database objects referenced
in the statement exist and that they are valid in the context of the statement, and that data modification statements
do not violate any data integrity rules. A check that fails returns an error message. A successful check translates
the action into an action against the underlying table or tables. If the underlying tables or views have changed
since the view was originally created, it may be useful to drop and re-create the view.
For more information about determining dependencies for a specific view, see sys.sql_dependencies (Transact-
SQL ).
For more information about viewing the text of the view, see sp_helptext (Transact-SQL ).

Permissions
Requires CONTROL permission on the view, ALTER permission on the schema containing the view, or
membership in the db_ddladmin fixed database role.

Examples
A. Drop a view
The following example removes the view Reorder .

DROP VIEW dbo.Reorder;
GO

See Also
ALTER VIEW (Transact-SQL )
CREATE VIEW (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.columns (Transact-SQL )
sys.objects (Transact-SQL )
USE (Transact-SQL )
sys.sql_expression_dependencies (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing user-defined Resource Governor workload group.
Transact-SQL Syntax Conventions.

Syntax
DROP WORKLOAD GROUP group_name
[;]

Arguments
group_name
Is the name of an existing user-defined workload group.

Remarks
The DROP WORKLOAD GROUP statement is not allowed on the Resource Governor internal or default groups.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states.
For more information, see Resource Governor.
If a workload group contains active sessions, dropping or moving the workload group to a different resource
pool will fail when the ALTER RESOURCE GOVERNOR RECONFIGURE statement is called to apply the
change. To avoid this problem, you can take one of the following actions:
Wait until all the sessions from the affected group have disconnected, and then rerun the ALTER
RESOURCE GOVERNOR RECONFIGURE statement.
Explicitly stop sessions in the affected group by using the KILL command, and then rerun the ALTER
RESOURCE GOVERNOR RECONFIGURE statement.
Restart the server. After the restart process is completed, the deleted group will not be created, and a
moved group will use the new resource pool assignment.
In a scenario in which you have issued the DROP WORKLOAD GROUP statement but decide that you do
not want to explicitly stop sessions to apply the change, you can re-create the group by using the same
name that it had before you issued the DROP statement, and then move the group to the original resource
pool. To apply the changes, run the ALTER RESOURCE GOVERNOR RECONFIGURE statement.

Permissions
Requires CONTROL SERVER permission.

Examples
The following example drops the workload group named adhoc .

DROP WORKLOAD GROUP adhoc;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

See Also
Resource Governor
CREATE WORKLOAD GROUP (Transact-SQL )
ALTER WORKLOAD GROUP (Transact-SQL )
CREATE RESOURCE POOL (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
DROP XML SCHEMA COLLECTION (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Deletes the whole XML schema collection and all of its components.
Transact-SQL Syntax Conventions

Syntax
DROP XML SCHEMA COLLECTION [ relational_schema. ]sql_identifier

Arguments
relational_schema
Identifies the relational schema name. If not specified, the default relational schema is assumed.
sql_identifier
Is the name of the XML schema collection to drop.

Remarks
Dropping an XML schema collection is a transactional operation. This means when you drop an XML schema
collection inside a transaction and later roll back the transaction, the XML schema collection is not dropped.
You cannot drop an XML schema collection when it is in use. This means that the collection being dropped cannot
be any of the following:
Associated with any xml type parameter or column.
Specified in any table constraints.
Referenced in a schema-bound function or stored procedure. For example, the following function will lock
the XML schema collection MyCollection because the function specifies WITH SCHEMABINDING. If WITH
SCHEMABINDING is removed, there is no lock on the XML schema collection.

CREATE FUNCTION dbo.MyFunction()
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    ...
    DECLARE @x XML(MyCollection)
    ...
END;

Permissions
To drop an XML SCHEMA COLLECTION requires DROP permission on the collection.
Examples
The following example shows removing an XML schema collection.

DROP XML SCHEMA COLLECTION ManuInstructionsSchemaCollection;
GO

See Also
CREATE XML SCHEMA COLLECTION (Transact-SQL )
ALTER XML SCHEMA COLLECTION (Transact-SQL )
EVENTDATA (Transact-SQL )
Compare Typed XML to Untyped XML
Requirements and Limitations for XML Schema Collections on the Server
ENABLE TRIGGER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Enables a DML, DDL, or logon trigger.
Transact-SQL Syntax Conventions

Syntax
ENABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL }
ON { object_name | DATABASE | ALL SERVER } [ ; ]

Arguments
schema_name
Is the name of the schema to which the trigger belongs. schema_name cannot be specified for DDL or logon
triggers.
trigger_name
Is the name of the trigger to be enabled.
ALL
Indicates that all triggers defined at the scope of the ON clause are enabled.
object_name
Is the name of the table or view on which the DML trigger trigger_name was created to execute.
DATABASE
For a DDL trigger, indicates that trigger_name was created or modified to execute with database scope.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
For a DDL trigger, indicates that trigger_name was created or modified to execute with server scope. ALL
SERVER also applies to logon triggers.

NOTE
This option is not available in a contained database.

Remarks
Enabling a trigger does not re-create it. A disabled trigger still exists as an object in the current database, but does
not fire. Enabling a trigger causes it to fire when any Transact-SQL statements on which it was originally
programmed are executed. Triggers are disabled by using DISABLE TRIGGER. DML triggers defined on tables
can also be disabled or enabled by using ALTER TABLE.

Permissions
To enable a DML trigger, at a minimum, a user must have ALTER permission on the table or view on which the
trigger was created.
To enable a DDL trigger with server scope (ON ALL SERVER ) or a logon trigger, a user must have CONTROL
SERVER permission on the server. To enable a DDL trigger with database scope (ON DATABASE ), at a minimum,
a user must have ALTER ANY DATABASE DDL TRIGGER permission in the current database.

Examples
A. Enabling a DML trigger on a table
The following example disables trigger uAddress that was created on table Address in the AdventureWorks
database, and then enables it.

DISABLE TRIGGER Person.uAddress ON Person.Address;
GO
ENABLE TRIGGER Person.uAddress ON Person.Address;
GO

B. Enabling a DDL trigger


The following example creates a DDL trigger safety with database scope, and then disables and enables it.

CREATE TRIGGER safety
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
    PRINT 'You must disable Trigger "safety" to drop or alter tables!'
    ROLLBACK;
GO
DISABLE TRIGGER safety ON DATABASE;
GO
ENABLE TRIGGER safety ON DATABASE;
GO

C. Enabling all triggers that were defined with the same scope
The following example enables all DDL triggers that were created at the server scope.
Applies to: SQL Server 2008 through SQL Server 2017.

ENABLE TRIGGER ALL ON ALL SERVER;
GO

See Also
DISABLE TRIGGER (Transact-SQL )
ALTER TRIGGER (Transact-SQL )
CREATE TRIGGER (Transact-SQL )
DROP TRIGGER (Transact-SQL )
sys.triggers (Transact-SQL )
INSERT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds one or more rows to a table or a view in SQL Server. For examples, see Examples.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

[ WITH <common_table_expression> [ ,...n ] ]
INSERT
{
[ TOP ( expression ) [ PERCENT ] ]
[ INTO ]
{ <object> | rowset_function_limited
[ WITH ( <Table_Hint_Limited> [ ...n ] ) ]
}
{
[ ( column_list ) ]
[ <OUTPUT Clause> ]
{ VALUES ( { DEFAULT | NULL | expression } [ ,...n ] ) [ ,...n ]
| derived_table
| execute_statement
| <dml_table_source>
| DEFAULT VALUES
}
}
}
[;]

<object> ::=
{
[ server_name . database_name . schema_name .
| database_name .[ schema_name ] .
| schema_name .
]
table_or_view_name
}

<dml_table_source> ::=
SELECT <select_list>
FROM ( <dml_statement_with_output_clause> )
[AS] table_alias [ ( column_alias [ ,...n ] ) ]
[ WHERE <search_condition> ]
[ OPTION ( <query_hint> [ ,...n ] ) ]
-- External tool only syntax

INSERT
{
[BULK]
[ database_name . [ schema_name ] . | schema_name . ]
[ table_name | view_name ]
( <column_definition> )
[ WITH (
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] KEEP_NULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] TABLOCK ]
) ]
}

[ ; ]

<column_definition> ::=
column_name <data_type>
[ COLLATE collation_name ]
[ NULL | NOT NULL ]

<data type> ::=
[ type_schema_name . ] type_name
[ ( precision [ , scale ] | max ) ]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

INSERT INTO [ database_name . [ schema_name ] . | schema_name . ] table_name
[ ( column_name [ ,...n ] ) ]
{
VALUES ( { NULL | expression } )
| SELECT <select_criteria>
}
[ OPTION ( <query_option> [ ,...n ] ) ]
[;]

Arguments
WITH <common_table_expression>
Specifies the temporary named result set, also known as common table expression, defined within the scope of
the INSERT statement. The result set is derived from a SELECT statement. For more information, see WITH
common_table_expression (Transact-SQL ).
TOP (expression) [ PERCENT ]
Specifies the number or percent of random rows that will be inserted. expression can be either a number or a
percent of the rows. For more information, see TOP (Transact-SQL ).
INTO
Is an optional keyword that can be used between INSERT and the target table.
server_name
Applies to: SQL Server 2008 through SQL Server 2017.
Is the name of the linked server on which the table or view is located. server_name can be specified as a linked
server name, or by using the OPENDATASOURCE function.
When server_name is specified as a linked server, database_name and schema_name are required. When
server_name is specified with OPENDATASOURCE, database_name and schema_name may not apply to all
data sources and are subject to the capabilities of the OLE DB provider that accesses the remote object.
database_name
Applies to: SQL Server 2008 through SQL Server 2017.
Is the name of the database.
schema_name
Is the name of the schema to which the table or view belongs.
table_or_view_name
Is the name of the table or view that is to receive the data.
A table variable, within its scope, can be used as a table source in an INSERT statement.
The view referenced by table_or_view_name must be updatable and reference exactly one base table in the
FROM clause of the view. For example, an INSERT into a multi-table view must use a column_list that
references only columns from one base table. For more information about updatable views, see CREATE VIEW
(Transact-SQL ).
rowset_function_limited
Applies to: SQL Server 2008 through SQL Server 2017.
Is either the OPENQUERY or OPENROWSET function. Use of these functions is subject to the capabilities of
the OLE DB provider that accesses the remote object.
WITH ( <table_hint_limited> [... n ] )
Specifies one or more table hints that are allowed for a target table. The WITH keyword and the parentheses
are required.
READPAST, NOLOCK, and READUNCOMMITTED are not allowed. For more information about table hints,
see Table Hints (Transact-SQL ).

IMPORTANT
The ability to specify the HOLDLOCK, SERIALIZABLE, READCOMMITTED, REPEATABLEREAD, or UPDLOCK hints on tables
that are targets of INSERT statements will be removed in a future version of SQL Server. These hints do not affect the
performance of INSERT statements. Avoid using them in new development work, and plan to modify applications that
currently use them.

Specifying the TABLOCK hint on a table that is the target of an INSERT statement has the same effect as
specifying the TABLOCKX hint. An exclusive lock is taken on the table.
(column_list)
Is a list of one or more columns in which to insert data. column_list must be enclosed in parentheses and
delimited by commas.
If a column is not in column_list, the Database Engine must be able to provide a value based on the definition of
the column; otherwise, the row cannot be loaded. The Database Engine automatically provides a value for the
column if the column:
Has an IDENTITY property. The next incremental identity value is used.
Has a default. The default value for the column is used.
Has a timestamp data type. The current timestamp value is used.
Is nullable. A null value is used.
Is a computed column. The calculated value is used.
column_list must be used when explicit values are inserted into an identity column, and the SET
IDENTITY_INSERT option must be ON for the table.
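For example, a minimal sketch; the table and its identity column are assumptions:

-- dbo.Orders and its identity column OrderID are assumptions.
SET IDENTITY_INSERT dbo.Orders ON;
INSERT INTO dbo.Orders (OrderID, CustomerName)
VALUES (100, N'Contoso');
SET IDENTITY_INSERT dbo.Orders OFF;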
OUTPUT Clause
Returns inserted rows as part of the insert operation. The results can be returned to the processing application
or inserted into a table or table variable for further processing.
The OUTPUT clause is not supported in DML statements that reference local partitioned views, distributed
partitioned views, or remote tables, or INSERT statements that contain an execute_statement. The OUTPUT
INTO clause is not supported in INSERT statements that contain a <dml_table_source> clause.
VALUES
Introduces the list or lists of data values to be inserted. There must be one data value for each column in
column_list, if specified, or in the table. The value list must be enclosed in parentheses.
If the values in the Value list are not in the same order as the columns in the table or do not have a value for
each column in the table, column_list must be used to explicitly specify the column that stores each incoming
value.
You can use the Transact-SQL row constructor (also called a table value constructor) to specify multiple rows in
a single INSERT statement. The row constructor consists of a single VALUES clause with multiple value lists
enclosed in parentheses and separated by a comma. For more information, see Table Value Constructor
(Transact-SQL ).
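For example, a sketch that inserts three rows with a single VALUES clause; the table is an assumption:

-- dbo.Colors is an assumed table.
INSERT INTO dbo.Colors (ColorID, ColorName)
VALUES (1, N'Red'),
       (2, N'Green'),
       (3, N'Blue');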
DEFAULT
Forces the Database Engine to load the default value defined for a column. If a default does not exist for the
column and the column allows null values, NULL is inserted. For a column defined with the timestamp data
type, the next timestamp value is inserted. DEFAULT is not valid for an identity column.
expression
Is a constant, a variable, or an expression. The expression cannot contain an EXECUTE statement.
When referencing the Unicode character data types nchar, nvarchar, and ntext, 'expression' should be
prefixed with the capital letter 'N'. If 'N' is not specified, SQL Server converts the string to the code page that
corresponds to the default collation of the database or column. Any characters not found in this code page are
lost.
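For example, assuming dbo.Products is a table with an nvarchar column:

-- dbo.Products is an assumed table with an nvarchar column.
INSERT INTO dbo.Products (ProductName)
VALUES (N'Crème brûlée'); -- without the N prefix, these characters could be lost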
derived_table
Is any valid SELECT statement that returns rows of data to be loaded into the table. The SELECT statement
cannot contain a common table expression (CTE ).
execute_statement
Is any valid EXECUTE statement that returns data with SELECT or READTEXT statements. For more
information, see EXECUTE (Transact-SQL ).
The RESULT SETS options of the EXECUTE statement cannot be specified in an INSERT…EXEC statement.
If execute_statement is used with INSERT, each result set must be compatible with the columns in the table or
in column_list.
execute_statement can be used to execute stored procedures on the same server or a remote server. The
procedure in the remote server is executed, and the result sets are returned to the local server and loaded into
the table in the local server. In a distributed transaction, execute_statement cannot be issued against a loopback
linked server when the connection has multiple active result sets (MARS ) enabled.
If execute_statement returns data with the READTEXT statement, each READTEXT statement can return a
maximum of 1 MB (1024 KB ) of data. execute_statement can also be used with extended procedures.
execute_statement inserts the data returned by the main thread of the extended procedure; however, output
from threads other than the main thread are not inserted.
You cannot specify a table-valued parameter as the target of an INSERT EXEC statement; however, it can be
specified as a source in the INSERT EXEC string or stored-procedure. For more information, see Use Table-
Valued Parameters (Database Engine).
<dml_table_source>
Specifies that the rows inserted into the target table are those returned by the OUTPUT clause of an INSERT,
UPDATE, DELETE, or MERGE statement, optionally filtered by a WHERE clause. If <dml_table_source> is
specified, the target of the outer INSERT statement must meet the following restrictions:
It must be a base table, not a view.
It cannot be a remote table.
It cannot have any triggers defined on it.
It cannot participate in any primary key-foreign key relationships.
It cannot participate in merge replication or updatable subscriptions for transactional replication.
The compatibility level of the database must be set to 100 or higher. For more information, see OUTPUT
Clause (Transact-SQL ).
<select_list>
Is a comma-separated list specifying which columns returned by the OUTPUT clause to insert. The
columns in <select_list> must be compatible with the columns into which values are being inserted.
<select_list> cannot reference aggregate functions or TEXTPTR.

NOTE
Any variables listed in the SELECT list refer to their original values, regardless of any changes made to them in
<dml_statement_with_output_clause>.

<dml_statement_with_output_clause>
Is a valid INSERT, UPDATE, DELETE, or MERGE statement that returns affected rows in an OUTPUT clause.
The statement cannot contain a WITH clause, and cannot target remote tables or partitioned views. If UPDATE
or DELETE is specified, it cannot be a cursor-based UPDATE or DELETE. Source rows cannot be referenced as
nested DML statements.
WHERE <search_condition>
Is any WHERE clause containing a valid <search_condition> that filters the rows returned by
<dml_statement_with_output_clause>. For more information, see Search Condition (Transact-SQL ). When
used in this context, <search_condition> cannot contain subqueries, scalar user-defined functions that perform
data access, aggregate functions, TEXTPTR, or full-text search predicates.
DEFAULT VALUES
Applies to: SQL Server 2008 through SQL Server 2017.
Forces the new row to contain the default values defined for each column.
BULK
Applies to: SQL Server 2008 through SQL Server 2017.
Used by external tools to upload a binary data stream. This option is not intended for use with tools such as
SQL Server Management Studio, SQLCMD, OSQL, or data access application programming interfaces such as
SQL Server Native Client.
FIRE_TRIGGERS
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies that any insert triggers defined on the destination table execute during the binary data stream upload
operation. For more information, see BULK INSERT (Transact-SQL ).
CHECK_CONSTRAINTS
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies that all constraints on the target table or view must be checked during the binary data stream upload
operation. For more information, see BULK INSERT (Transact-SQL ).
KEEP_NULLS
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies that empty columns should retain a null value during the binary data stream upload operation. For
more information, see Keep Nulls or Use Default Values During Bulk Import (SQL Server).
KILOBYTES_PER_BATCH = kilobytes_per_batch
Specifies the approximate number of kilobytes (KB ) of data per batch as kilobytes_per_batch. For more
information, see BULK INSERT (Transact-SQL ).
ROWS_PER_BATCH =rows_per_batch
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates the approximate number of rows of data in the binary data stream. For more information, see BULK
INSERT (Transact-SQL ).

NOTE
A syntax error is raised if a column list is not provided.

Remarks
For information specific to inserting data into SQL graph tables, see INSERT (SQL Graph).

Best Practices
Use the @@ROWCOUNT function to return the number of inserted rows to the client application. For more
information, see @@ROWCOUNT (Transact-SQL ).
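For example, a minimal sketch with an assumed table:

-- dbo.Orders is an assumed table.
INSERT INTO dbo.Orders (CustomerName)
VALUES (N'Contoso');
SELECT @@ROWCOUNT AS RowsInserted; -- returns 1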
Best Practices for Bulk Importing Data
Using INSERT INTO…SELECT to Bulk Import Data with Minimal Logging
You can use INSERT INTO <target_table> SELECT <columns> FROM <source_table> to efficiently transfer a large
number of rows from one table, such as a staging table, to another table with minimal logging. Minimal logging
can improve the performance of the statement and reduce the possibility of the operation filling the available
transaction log space during the transaction.
Minimal logging for this statement has the following requirements:
The recovery model of the database is set to simple or bulk-logged.
The target table is an empty or nonempty heap.
The target table is not used in replication.
The TABLOCK hint is specified for the target table.
Rows that are inserted into a heap as the result of an insert action in a MERGE statement may also be
minimally logged.
Unlike the BULK INSERT statement, which holds a less restrictive Bulk Update lock, INSERT INTO…SELECT
with the TABLOCK hint holds an exclusive (X) lock on the table. This means that you cannot insert rows using
parallel insert operations.
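In outline, the minimally logged pattern is simply the following sketch (dbo.Staging and dbo.Archive are hypothetical heap tables, and the database is assumed to already use the simple or bulk-logged recovery model); example Q later in this topic shows a complete walk-through.

-- Hypothetical names; dbo.Archive is a heap and the database recovery
-- model is SIMPLE or BULK_LOGGED.
INSERT INTO dbo.Archive WITH (TABLOCK)
SELECT *
FROM dbo.Staging;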
Using OPENROWSET and BULK to Bulk Import Data
The OPENROWSET function can accept the following table hints, which provide bulk-load optimizations with
the INSERT statement:
The TABLOCK hint can minimize the number of log records for the insert operation. The recovery model
of the database must be set to simple or bulk-logged and the target table cannot be used in replication.
For more information, see Prerequisites for Minimal Logging in Bulk Import.
The IGNORE_CONSTRAINTS hint can temporarily disable FOREIGN KEY and CHECK constraint
checking.
The IGNORE_TRIGGERS hint can temporarily disable trigger execution.
The KEEPDEFAULTS hint allows the insertion of a table column's default value, if any, instead of NULL
when the data record lacks a value for the column.
The KEEPIDENTITY hint allows the identity values in the imported data file to be used for the identity
column in the target table.
These optimizations are similar to those available with the BULK INSERT command. For more information, see
Table Hints (Transact-SQL).
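As a rough sketch (the table name, file paths, and format file are hypothetical, and the particular combination of hints is an assumption about what a given load needs), several of these hints can be combined on the target of the INSERT:

-- Hypothetical table and file paths; hints shown are illustrative.
INSERT INTO dbo.MyTable WITH (TABLOCK, KEEPDEFAULTS, IGNORE_TRIGGERS)
SELECT b.Col1, b.Col2
FROM OPENROWSET (
BULK 'C:\Data\MyData.txt',
FORMATFILE = 'C:\Data\MyFormatFile.xml') AS b;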

Data Types
When you insert rows, consider the following data type behavior:
If a value is being loaded into columns with a char, varchar, or varbinary data type, the padding or
truncation of trailing blanks (spaces for char and varchar, zeros for varbinary) is determined by the
SET ANSI_PADDING setting defined for the column when the table was created. For more information,
see SET ANSI_PADDING (Transact-SQL ).
The following table shows the default operation for SET ANSI_PADDING OFF; a short sketch after this list
demonstrates the char and varchar behavior.

DATA TYPE    DEFAULT OPERATION
char         Pad value with spaces to the defined width of the column.
varchar      Remove trailing spaces to the last non-space character, or to a
             single-space character for strings made up of only spaces.
varbinary    Remove trailing zeros.

If an empty string ('') is loaded into a column with a varchar or text data type, the default operation is
to load a zero-length string.
Inserting a null value into a text or image column does not create a valid text pointer, nor does it
preallocate an 8-KB text page.
Columns created with the uniqueidentifier data type store specially formatted 16-byte binary values.
Unlike with identity columns, the Database Engine does not automatically generate values for columns
with the uniqueidentifier data type. During an insert operation, variables with a data type of
uniqueidentifier and string constants in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (36 characters
including hyphens, where x is a hexadecimal digit in the range 0-9 or a-f) can be used for
uniqueidentifier columns. For example, 6F9619FF-8B86-D011-B42D-00C04FC964FF is a valid value
for a uniqueidentifier variable or column. Use the NEWID() function to obtain a globally unique ID
(GUID).
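The following minimal sketch (dbo.PadDemo is a hypothetical table) demonstrates the char and varchar rows of the table above; it assumes ANSI_PADDING is set OFF only while the table is created.

SET ANSI_PADDING OFF;
CREATE TABLE dbo.PadDemo
( c1 char(10) NOT NULL,
c2 varchar(10) NOT NULL );
SET ANSI_PADDING ON;
GO
INSERT INTO dbo.PadDemo (c1, c2)
VALUES ('abc', 'abc   ');
-- c1 is padded with spaces to 10 characters; trailing spaces are
-- removed from c2, leaving 3 characters.
SELECT DATALENGTH(c1) AS char_length, DATALENGTH(c2) AS varchar_length
FROM dbo.PadDemo;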
Inserting Values into User-Defined Type Columns
You can insert values in user-defined type columns by:
Supplying a value of the user-defined type.
Supplying a value in a SQL Server system data type, as long as the user-defined type supports implicit
or explicit conversion from that type. The following example shows how to insert a value in a column of
user-defined type Point, by explicitly converting from a string.

INSERT INTO Cities (Location)
VALUES ( CONVERT(Point, '12.3:46.2') );

A binary value can also be supplied without performing explicit conversion, because all user-defined
types are implicitly convertible from binary.
Calling a user-defined function that returns a value of the user-defined type. The following example uses
a user-defined function CreateNewPoint() to create a new value of user-defined type Point and insert
the value into the Cities table.

INSERT INTO Cities (Location)
VALUES ( dbo.CreateNewPoint(x, y) );

Error Handling
You can implement error handling for the INSERT statement by specifying the statement in a TRY…CATCH
construct.
If an INSERT statement violates a constraint or rule, or if it has a value incompatible with the data type of the
column, the statement fails and an error message is returned.
If INSERT is loading multiple rows with SELECT or EXECUTE, any violation of a rule or constraint that occurs
from the values being loaded causes the statement to be stopped, and no rows are loaded.
When an INSERT statement encounters an arithmetic error (overflow, divide by zero, or a domain error)
occurring during expression evaluation, the Database Engine handles these errors as if SET ARITHABORT is
set to ON. The batch is stopped, and an error message is returned. During expression evaluation when SET
ARITHABORT and SET ANSI_WARNINGS are OFF, if an INSERT, DELETE or UPDATE statement encounters
an arithmetic error, overflow, divide-by-zero, or a domain error, SQL Server inserts or updates a NULL value. If
the target column is not nullable, the insert or update action fails and the user receives an error.
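For example, a minimal sketch (dbo.ErrDemo is a hypothetical table) that captures a data-type error raised by an INSERT:

CREATE TABLE dbo.ErrDemo (col1 varchar(10) NOT NULL);
GO
BEGIN TRY
INSERT INTO dbo.ErrDemo (col1)
VALUES (REPLICATE('x', 20)); -- longer than varchar(10); raises a truncation error
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;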

Interoperability
When an INSTEAD OF trigger is defined on INSERT actions against a table or view, the trigger executes
instead of the INSERT statement. For more information about INSTEAD OF triggers, see CREATE TRIGGER
(Transact-SQL ).
Limitations and Restrictions
When you insert values into remote tables and not all values for all columns are specified, you must identify the
columns to which the specified values are to be inserted.
When TOP is used with INSERT, the referenced rows are not arranged in any order, and the ORDER BY clause
cannot be directly specified in these statements. If you need to use TOP to insert rows in a meaningful
chronological order, you must use TOP together with an ORDER BY clause that is specified in a subselect
statement. See the Examples section that follows in this topic.
INSERT queries that use SELECT with ORDER BY to populate rows guarantee how identity values are
computed but not the order in which the rows are inserted.
In Parallel Data Warehouse, the ORDER BY clause is invalid in views, CREATE TABLE AS SELECT, INSERT
SELECT, inline functions, derived tables, subqueries, and common table expressions, unless TOP is also
specified.

Logging Behavior
The INSERT statement is always fully logged except when using the OPENROWSET function with the BULK
keyword or when using INSERT INTO <target_table> SELECT <columns> FROM <source_table> . These operations
can be minimally logged. For more information, see the section "Best Practices for Bulk Importing Data" earlier
in this topic.

Security
During a linked server connection, the sending server provides a login name and password to connect to the
receiving server on its behalf. For this connection to work, you must create a login mapping between the linked
servers by using sp_addlinkedsrvlogin.
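A minimal mapping sketch (the login names and password are hypothetical; MyLinkServer is the linked server created in example M below):

EXEC sp_addlinkedsrvlogin
@rmtsrvname = N'MyLinkServer',
@useself = N'FALSE',
@locallogin = N'LocalLogin1',
@rmtuser = N'RemoteLogin1',
@rmtpassword = N'StrongPassword1';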
When you use OPENROWSET(BULK…), it is important to understand how SQL Server handles
impersonation. For more information, see "Security Considerations" in Import Bulk Data by Using BULK
INSERT or OPENROWSET(BULK...) (SQL Server).
Permissions
INSERT permission is required on the target table.
INSERT permissions default to members of the sysadmin fixed server role, the db_owner and db_datawriter
fixed database roles, and the table owner. Members of the sysadmin, db_owner, and the db_securityadmin
roles, and the table owner can transfer permissions to other users.
To execute INSERT with the OPENROWSET function BULK option, you must be a member of the sysadmin
fixed server role or of the bulkadmin fixed server role.
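For example, a minimal sketch of granting that permission on a single table (AppUser1 is a hypothetical database user):

GRANT INSERT ON OBJECT::Production.UnitMeasure TO [AppUser1];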

Examples
CATEGORY                                                FEATURED SYNTAX ELEMENTS

Basic syntax                                            INSERT • table value constructor

Handling column values                                  IDENTITY • NEWID • default values • user-defined types

Inserting data from other tables                        INSERT…SELECT • INSERT…EXECUTE • WITH common table
                                                        expression • TOP • OFFSET FETCH

Specifying target objects other than standard tables    Views • table variables

Inserting rows into a remote table                      Linked server • OPENQUERY rowset function •
                                                        OPENDATASOURCE rowset function

Bulk loading data from tables or data files             INSERT…SELECT • OPENROWSET function

Overriding the default behavior of the query            Table hints
optimizer by using hints

Capturing the results of the INSERT statement           OUTPUT clause

Basic Syntax
Examples in this section demonstrate the basic functionality of the INSERT statement using the minimum
required syntax.
A. Inserting a single row of data
The following example inserts one row into the Production.UnitMeasure table in the AdventureWorks2012
database. The columns in this table are UnitMeasureCode , Name , and ModifiedDate . Because values for all
columns are supplied and are listed in the same order as the columns in the table, the column names do not
have to be specified in the column list.

INSERT INTO Production.UnitMeasure
VALUES (N'FT', N'Feet', '20080414');

B. Inserting multiple rows of data


The following example uses the table value constructor to insert three rows into the Production.UnitMeasure
table in the AdventureWorks2012 database in a single INSERT statement. Because values for all columns are
supplied and are listed in the same order as the columns in the table, the column names do not have to be
specified in the column list.

INSERT INTO Production.UnitMeasure
VALUES (N'FT2', N'Square Feet ', '20080923'), (N'Y', N'Yards', '20080923')
, (N'Y3', N'Cubic Yards', '20080923');

C. Inserting data that is not in the same order as the table columns
The following example uses a column list to explicitly specify the values that are inserted into each column. The
column order in the Production.UnitMeasure table in the AdventureWorks2012 database is UnitMeasureCode ,
Name , ModifiedDate ; however, the columns are not listed in that order in column_list.

INSERT INTO Production.UnitMeasure (Name, UnitMeasureCode, ModifiedDate)
VALUES (N'Square Yards', N'Y2', GETDATE());

Handling Column Values


Examples in this section demonstrate methods of inserting values into columns that are defined with an
IDENTITY property or a DEFAULT value, or are defined with data types such as uniqueidentifier or user-defined
type columns.
D. Inserting data into a table with columns that have default values
The following example shows inserting rows into a table with columns that automatically generate a value or
have a default value. Column_1 is a computed column that automatically generates a value by concatenating a
string with the value inserted into column_2 . Column_2 is defined with a default constraint. If a value is not
specified for this column, the default value is used. Column_3 is defined with the rowversion data type, which
automatically generates a unique, incrementing binary number. Column_4 does not automatically generate a
value. When a value for this column is not specified, NULL is inserted. The INSERT statements insert rows that
contain values for some of the columns but not all. In the last INSERT statement, no columns are specified and
only the default values are inserted by using the DEFAULT VALUES clause.

CREATE TABLE dbo.T1
(
column_1 AS 'Computed column ' + column_2,
column_2 varchar(30)
CONSTRAINT default_name DEFAULT ('my column default'),
column_3 rowversion,
column_4 varchar(40) NULL
);
GO
INSERT INTO dbo.T1 (column_4)
VALUES ('Explicit value');
INSERT INTO dbo.T1 (column_2, column_4)
VALUES ('Explicit value', 'Explicit value');
INSERT INTO dbo.T1 (column_2)
VALUES ('Explicit value');
INSERT INTO T1 DEFAULT VALUES;
GO
SELECT column_1, column_2, column_3, column_4
FROM dbo.T1;
GO

E. Inserting data into a table with an identity column


The following example shows different methods of inserting data into an identity column. The first two INSERT
statements allow identity values to be generated for the new rows. The third INSERT statement overrides the
IDENTITY property for the column with the SET IDENTITY_INSERT statement and inserts an explicit value
into the identity column.

CREATE TABLE dbo.T1 ( column_1 int IDENTITY, column_2 VARCHAR(30));
GO
INSERT T1 VALUES ('Row #1');
INSERT T1 (column_2) VALUES ('Row #2');
GO
SET IDENTITY_INSERT T1 ON;
GO
INSERT INTO T1 (column_1,column_2)
VALUES (-99, 'Explicit identity value');
GO
SELECT column_1, column_2
FROM T1;
GO

F. Inserting data into a uniqueidentifier column by using NEWID()


The following example uses the NEWID() function to obtain a GUID for column_2 . Unlike for identity columns,
the Database Engine does not automatically generate values for columns with the uniqueidentifier data type, as
shown by the second INSERT statement.
CREATE TABLE dbo.T1
(
column_1 int IDENTITY,
column_2 uniqueidentifier
);
GO
INSERT INTO dbo.T1 (column_2)
VALUES (NEWID());
INSERT INTO T1 DEFAULT VALUES;
GO
SELECT column_1, column_2
FROM dbo.T1;

G. Inserting data into user-defined type columns


The following Transact-SQL statements insert three rows into the PointValue column of the Points table. This
column uses a CLR user-defined type (UDT). The Point data type consists of X and Y integer values that are
exposed as properties of the UDT. You must use either the CAST or CONVERT function to cast the comma-
delimited X and Y values to the Point type. The first two statements use the CONVERT function to convert a
string value to the Point type, and the third statement uses the CAST function. For more information, see
Manipulating UDT Data.

INSERT INTO dbo.Points (PointValue) VALUES (CONVERT(Point, '3,4'));
INSERT INTO dbo.Points (PointValue) VALUES (CONVERT(Point, '1,5'));
INSERT INTO dbo.Points (PointValue) VALUES (CAST ('1,99' AS Point));

Inserting Data from Other Tables


Examples in this section demonstrate methods of inserting rows from one table into another table.
H. Using the SELECT and EXECUTE options to insert data from other tables
The following example shows how to insert data from one table into another table by using INSERT…SELECT
or INSERT…EXECUTE. Each is based on a multi-table SELECT statement that includes an expression and a
literal value in the column list.
The first INSERT statement uses a SELECT statement to derive the data from the source tables ( Employee ,
SalesPerson , and Person ) in the AdventureWorks2012 database and store the result set in the EmployeeSales
table. The second INSERT statement uses the EXECUTE clause to call a stored procedure that contains the
SELECT statement, and the third INSERT uses the EXECUTE clause to reference the SELECT statement as a
literal string.
CREATE TABLE dbo.EmployeeSales
( DataSource varchar(20) NOT NULL,
BusinessEntityID varchar(11) NOT NULL,
LastName varchar(40) NOT NULL,
SalesDollars money NOT NULL
);
GO
CREATE PROCEDURE dbo.uspGetEmployeeSales
AS
SET NOCOUNT ON;
SELECT 'PROCEDURE', sp.BusinessEntityID, c.LastName,
sp.SalesYTD
FROM Sales.SalesPerson AS sp
INNER JOIN Person.Person AS c
ON sp.BusinessEntityID = c.BusinessEntityID
WHERE sp.BusinessEntityID LIKE '2%'
ORDER BY sp.BusinessEntityID, c.LastName;
GO
--INSERT...SELECT example
INSERT INTO dbo.EmployeeSales
SELECT 'SELECT', sp.BusinessEntityID, c.LastName, sp.SalesYTD
FROM Sales.SalesPerson AS sp
INNER JOIN Person.Person AS c
ON sp.BusinessEntityID = c.BusinessEntityID
WHERE sp.BusinessEntityID LIKE '2%'
ORDER BY sp.BusinessEntityID, c.LastName;
GO
--INSERT...EXECUTE procedure example
INSERT INTO dbo.EmployeeSales
EXECUTE dbo.uspGetEmployeeSales;
GO
--INSERT...EXECUTE('string') example
INSERT INTO dbo.EmployeeSales
EXECUTE
('
SELECT ''EXEC STRING'', sp.BusinessEntityID, c.LastName,
sp.SalesYTD
FROM Sales.SalesPerson AS sp
INNER JOIN Person.Person AS c
ON sp.BusinessEntityID = c.BusinessEntityID
WHERE sp.BusinessEntityID LIKE ''2%''
ORDER BY sp.BusinessEntityID, c.LastName
');
GO
--Show results.
SELECT DataSource,BusinessEntityID,LastName,SalesDollars
FROM dbo.EmployeeSales;

I. Using WITH common table expression to define the data inserted


The following example creates the NewEmployee table in the AdventureWorks2012 database. A common table
expression ( EmployeeTemp ) defines the rows from one or more tables to be inserted into the NewEmployee table.
The INSERT statement references the columns in the common table expression.
CREATE TABLE HumanResources.NewEmployee
(
EmployeeID int NOT NULL,
LastName nvarchar(50) NOT NULL,
FirstName nvarchar(50) NOT NULL,
PhoneNumber Phone NULL,
AddressLine1 nvarchar(60) NOT NULL,
City nvarchar(30) NOT NULL,
State nchar(3) NOT NULL,
PostalCode nvarchar(15) NOT NULL,
CurrentFlag Flag
);
GO
WITH EmployeeTemp (EmpID, LastName, FirstName, Phone,
Address, City, StateProvince,
PostalCode, CurrentFlag)
AS (SELECT
e.BusinessEntityID, c.LastName, c.FirstName, pp.PhoneNumber,
a.AddressLine1, a.City, sp.StateProvinceCode,
a.PostalCode, e.CurrentFlag
FROM HumanResources.Employee e
INNER JOIN Person.BusinessEntityAddress AS bea
ON e.BusinessEntityID = bea.BusinessEntityID
INNER JOIN Person.Address AS a
ON bea.AddressID = a.AddressID
INNER JOIN Person.PersonPhone AS pp
ON e.BusinessEntityID = pp.BusinessEntityID
INNER JOIN Person.StateProvince AS sp
ON a.StateProvinceID = sp.StateProvinceID
INNER JOIN Person.Person as c
ON e.BusinessEntityID = c.BusinessEntityID
)
INSERT INTO HumanResources.NewEmployee
SELECT EmpID, LastName, FirstName, Phone,
Address, City, StateProvince, PostalCode, CurrentFlag
FROM EmployeeTemp;
GO

J. Using TOP to limit the data inserted from the source table
The following example creates the table EmployeeSales and inserts the name and year-to-date sales data for
the top 5 random employees from the table HumanResources.Employee in the AdventureWorks2012 database.
The INSERT statement chooses any 5 rows returned by the SELECT statement. The OUTPUT clause displays
the rows that are inserted into the EmployeeSales table. Notice that the ORDER BY clause in the SELECT
statement is not used to determine the top 5 employees.

CREATE TABLE dbo.EmployeeSales
( EmployeeID nvarchar(11) NOT NULL,
LastName nvarchar(20) NOT NULL,
FirstName nvarchar(20) NOT NULL,
YearlySales money NOT NULL
);
GO
INSERT TOP(5) INTO dbo.EmployeeSales
OUTPUT inserted.EmployeeID, inserted.FirstName,
inserted.LastName, inserted.YearlySales
SELECT sp.BusinessEntityID, c.LastName, c.FirstName, sp.SalesYTD
FROM Sales.SalesPerson AS sp
INNER JOIN Person.Person AS c
ON sp.BusinessEntityID = c.BusinessEntityID
WHERE sp.SalesYTD > 250000.00
ORDER BY sp.SalesYTD DESC;

If you have to use TOP to insert rows in a meaningful chronological order, you must use TOP together with
ORDER BY in a subselect statement as shown in the following example. The OUTPUT clause displays the rows
that are inserted into the EmployeeSales table. Notice that the top 5 employees are now inserted based on the
results of the ORDER BY clause instead of random rows.

INSERT INTO dbo.EmployeeSales
OUTPUT inserted.EmployeeID, inserted.FirstName,
inserted.LastName, inserted.YearlySales
SELECT TOP (5) sp.BusinessEntityID, c.LastName, c.FirstName, sp.SalesYTD
FROM Sales.SalesPerson AS sp
INNER JOIN Person.Person AS c
ON sp.BusinessEntityID = c.BusinessEntityID
WHERE sp.SalesYTD > 250000.00
ORDER BY sp.SalesYTD DESC;

Specifying Target Objects Other Than Standard Tables


Examples in this section demonstrate how to insert rows by specifying a view or table variable.
K. Inserting data by specifying a view
The following example specifies a view name as the target object; however, the new row is inserted in the
underlying base table. The order of the values in the INSERT statement must match the column order of the
view. For more information, see Modify Data Through a View.

CREATE TABLE T1 ( column_1 int, column_2 varchar(30));
GO
CREATE VIEW V1 AS
SELECT column_2, column_1
FROM T1;
GO
INSERT INTO V1
VALUES ('Row 1',1);
GO
SELECT column_1, column_2
FROM T1;
GO
SELECT column_1, column_2
FROM V1;
GO

L. Inserting data into a table variable


The following example specifies a table variable as the target object in the AdventureWorks2012 database.

-- Create the table variable.
DECLARE @MyTableVar table(
LocationID int NOT NULL,
CostRate smallmoney NOT NULL,
NewCostRate AS CostRate * 1.5,
ModifiedDate datetime);

-- Insert values into the table variable.
INSERT INTO @MyTableVar (LocationID, CostRate, ModifiedDate)
SELECT LocationID, CostRate, GETDATE()
FROM Production.Location
WHERE CostRate > 0;

-- View the table variable result set.
SELECT * FROM @MyTableVar;
GO

Inserting Rows into a Remote Table


Examples in this section demonstrate how to insert rows into a remote target table by using a linked server or a
rowset function to reference the remote table.
M. Inserting data into a remote table by using a linked server
The following example inserts rows into a remote table. The example begins by creating a link to the remote
data source by using sp_addlinkedserver. The linked server name, MyLinkServer , is then specified as part of the
four-part object name in the form server.catalog.schema.object.
Applies to: SQL Server 2008 through SQL Server 2017.

USE master;
GO
-- Create a link to the remote data source.
-- Specify a valid server name for @datasrc as 'server_name'
-- or 'server_name\instance_name'.

EXEC sp_addlinkedserver @server = N'MyLinkServer',
@srvproduct = N' ',
@provider = N'SQLNCLI',
@datasrc = N'server_name',
@catalog = N'AdventureWorks2012';
GO

-- Specify the remote data source as the INSERT target using a four-part name
-- in the form linked_server.catalog.schema.object.

INSERT INTO MyLinkServer.AdventureWorks2012.HumanResources.Department (Name, GroupName)


VALUES (N'Public Relations', N'Executive General and Administration');
GO

N. Inserting data into a remote table by using the OPENQUERY function


The following example inserts a row into a remote table by specifying the OPENQUERY rowset function. The
linked server name created in the previous example is used in this example.
Applies to: SQL Server 2008 through SQL Server 2017.

INSERT OPENQUERY (MyLinkServer,
'SELECT Name, GroupName
FROM AdventureWorks2012.HumanResources.Department')
VALUES ('Environmental Impact', 'Engineering');
GO

O. Inserting data into a remote table by using the OPENDATASOURCE function


The following example inserts a row into a remote table by specifying the OPENDATASOURCE rowset
function. Specify a valid server name for the data source by using the format server_name or
server_name\instance_name.
Applies to: SQL Server 2008 through SQL Server 2017.

-- Use the OPENDATASOURCE function to specify the remote data source.
-- Specify a valid server name for Data Source using the format
-- server_name or server_name\instance_name.

INSERT INTO OPENDATASOURCE('SQLNCLI',
'Data Source= <server_name>; Integrated Security=SSPI')
.AdventureWorks2012.HumanResources.Department (Name, GroupName)
VALUES (N'Standards and Methods', 'Quality Assurance');
GO

P. Inserting into an external table created using PolyBase


Export data from SQL Server to Hadoop or Azure Storage. First, create an external table that points to the
destination file or directory. Then, use INSERT INTO to export data from a local SQL Server table to an
external data source. The INSERT INTO statement creates the destination file or directory if it does not exist
and the results of the SELECT statement are exported to the specified location in the specified file format. For
more information, see Get started with PolyBase.
Applies to: SQL Server 2017.

-- Create an external table.
CREATE EXTERNAL TABLE [dbo].[FastCustomers2009] (
[FirstName] char(25) NOT NULL,
[LastName] char(25) NOT NULL,
[YearlyIncome] float NULL,
[MaritalStatus] char(1) NOT NULL
)
WITH (
LOCATION='/old_data/2009/customerdata.tbl',
DATA_SOURCE = HadoopHDP2,
FILE_FORMAT = TextFileFormat,
REJECT_TYPE = VALUE,
REJECT_VALUE = 0
);

-- Export data: Move old data to Hadoop while keeping
-- it query-able via the external table.
INSERT INTO dbo.FastCustomers2009
SELECT T1.* FROM Insured_Customers T1 JOIN CarSensor_Data T2
ON (T1.CustomerKey = T2.CustomerKey)
WHERE T2.YearMeasured = 2009 and T2.Speed > 40;

Bulk Loading Data from Tables or Data Files


Examples in this section demonstrate two methods to bulk load data into a table by using the INSERT
statement.
Q. Inserting data into a heap with minimal logging
The following example creates a new table (a heap) and inserts data from another table into it using minimal
logging. The example assumes that the recovery model of the AdventureWorks2012 database is set to FULL. To
ensure minimal logging is used, the recovery model of the AdventureWorks2012 database is set to
BULK_LOGGED before rows are inserted and reset to FULL after the INSERT INTO…SELECT statement. In
addition, the TABLOCK hint is specified for the target table Sales.SalesHistory . This ensures that the statement
uses minimal space in the transaction log and performs efficiently.
-- Create the target heap.
CREATE TABLE Sales.SalesHistory(
SalesOrderID int NOT NULL,
SalesOrderDetailID int NOT NULL,
CarrierTrackingNumber nvarchar(25) NULL,
OrderQty smallint NOT NULL,
ProductID int NOT NULL,
SpecialOfferID int NOT NULL,
UnitPrice money NOT NULL,
UnitPriceDiscount money NOT NULL,
LineTotal money NOT NULL,
rowguid uniqueidentifier ROWGUIDCOL NOT NULL,
ModifiedDate datetime NOT NULL );
GO
-- Temporarily set the recovery model to BULK_LOGGED.
ALTER DATABASE AdventureWorks2012
SET RECOVERY BULK_LOGGED;
GO
-- Transfer data from Sales.SalesOrderDetail to Sales.SalesHistory
INSERT INTO Sales.SalesHistory WITH (TABLOCK)
(SalesOrderID,
SalesOrderDetailID,
CarrierTrackingNumber,
OrderQty,
ProductID,
SpecialOfferID,
UnitPrice,
UnitPriceDiscount,
LineTotal,
rowguid,
ModifiedDate)
SELECT * FROM Sales.SalesOrderDetail;
GO
-- Reset the recovery model.
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL;
GO

R. Using the OPENROWSET function with BULK to bulk load data into a table
The following example inserts rows from a data file into a table by specifying the OPENROWSET function. The
IGNORE_TRIGGERS table hint is specified for performance optimization. For more examples, see Import Bulk
Data by Using BULK INSERT or OPENROWSET(BULK...) (SQL Server).
Applies to: SQL Server 2008 through SQL Server 2017.

INSERT INTO HumanResources.Department WITH (IGNORE_TRIGGERS) (Name, GroupName)
SELECT b.Name, b.GroupName
FROM OPENROWSET (
BULK 'C:\SQLFiles\DepartmentData.txt',
FORMATFILE = 'C:\SQLFiles\BulkloadFormatFile.xml',
ROWS_PER_BATCH = 15000) AS b;

Overriding the Default Behavior of the Query Optimizer by Using Hints


Examples in this section demonstrate how to use table hints to temporarily override the default behavior of the
query optimizer when processing the INSERT statement.
Caution

Because the SQL Server query optimizer typically selects the best execution plan for a query, we recommend
that hints be used only as a last resort by experienced developers and database administrators.
S. Using the XLOCK hint to specify a locking method
The following example specifies that an exclusive (X) lock is taken on the Production.Location table and is held
until the end of the INSERT statement.
Applies to: SQL Server, SQL Database.

INSERT INTO Production.Location WITH (XLOCK)
(Name, CostRate, Availability)
VALUES ( N'Final Inventory', 15.00, 80.00);

Capturing the Results of the INSERT Statement


Examples in this section demonstrate how to use the OUTPUT Clause to return information from, or
expressions based on, each row affected by an INSERT statement. These results can be returned to the
processing application for use in such things as confirmation messages, archiving, and other such application
requirements.
T. Using OUTPUT with an INSERT statement
The following example inserts a row into the ScrapReason table and uses the OUTPUT clause to return the
results of the statement to the @MyTableVar table variable. Because the ScrapReasonID column is defined with
an IDENTITY property, a value is not specified in the INSERT statement for that column. However, note that the
value generated by the Database Engine for that column is returned in the OUTPUT clause in the
INSERTED.ScrapReasonID column.

DECLARE @MyTableVar table( NewScrapReasonID smallint,
Name varchar(50),
ModifiedDate datetime);
INSERT Production.ScrapReason
OUTPUT INSERTED.ScrapReasonID, INSERTED.Name, INSERTED.ModifiedDate
INTO @MyTableVar
VALUES (N'Operator error', GETDATE());

--Display the result set of the table variable.
SELECT NewScrapReasonID, Name, ModifiedDate FROM @MyTableVar;
--Display the result set of the table.
SELECT ScrapReasonID, Name, ModifiedDate
FROM Production.ScrapReason;

U. Using OUTPUT with identity and computed columns


The following example creates the EmployeeSales table and then inserts several rows into it using an INSERT
statement with a SELECT statement to retrieve data from source tables. The EmployeeSales table contains an
identity column ( EmployeeID ) and a computed column ( ProjectedSales ). Because these values are generated
by the Database Engine during the insert operation, neither of these columns can be defined in @MyTableVar .
CREATE TABLE dbo.EmployeeSales
( EmployeeID int IDENTITY (1,5) NOT NULL,
LastName nvarchar(20) NOT NULL,
FirstName nvarchar(20) NOT NULL,
CurrentSales money NOT NULL,
ProjectedSales AS CurrentSales * 1.10
);
GO
DECLARE @MyTableVar table(
LastName nvarchar(20) NOT NULL,
FirstName nvarchar(20) NOT NULL,
CurrentSales money NOT NULL
);

INSERT INTO dbo.EmployeeSales (LastName, FirstName, CurrentSales)
OUTPUT INSERTED.LastName,
INSERTED.FirstName,
INSERTED.CurrentSales
INTO @MyTableVar
SELECT c.LastName, c.FirstName, sp.SalesYTD
FROM Sales.SalesPerson AS sp
INNER JOIN Person.Person AS c
ON sp.BusinessEntityID = c.BusinessEntityID
WHERE sp.BusinessEntityID LIKE '2%'
ORDER BY c.LastName, c.FirstName;

SELECT LastName, FirstName, CurrentSales
FROM @MyTableVar;
GO
SELECT EmployeeID, LastName, FirstName, CurrentSales, ProjectedSales
FROM dbo.EmployeeSales;

V. Inserting data returned from an OUTPUT clause


The following example captures data returned from the OUTPUT clause of a MERGE statement, and inserts
that data into another table. The MERGE statement updates the Quantity column of the ProductInventory
table daily, based on orders that are processed in the SalesOrderDetail table in the AdventureWorks2012
database. It also deletes rows for products whose inventories drop to 0. The example captures the rows that are
deleted and inserts them into another table, ZeroInventory , which tracks products with no inventory.

--Create ZeroInventory table.
CREATE TABLE Production.ZeroInventory (DeletedProductID int, RemovedOnDate DateTime);
GO

INSERT INTO Production.ZeroInventory (DeletedProductID, RemovedOnDate)
SELECT ProductID, GETDATE()
FROM
( MERGE Production.ProductInventory AS pi
USING (SELECT ProductID, SUM(OrderQty) FROM Sales.SalesOrderDetail AS sod
JOIN Sales.SalesOrderHeader AS soh
ON sod.SalesOrderID = soh.SalesOrderID
AND soh.OrderDate = '20070401'
GROUP BY ProductID) AS src (ProductID, OrderQty)
ON (pi.ProductID = src.ProductID)
WHEN MATCHED AND pi.Quantity - src.OrderQty <= 0
THEN DELETE
WHEN MATCHED
THEN UPDATE SET pi.Quantity = pi.Quantity - src.OrderQty
OUTPUT $action, deleted.ProductID) AS Changes (Action, ProductID)
WHERE Action = 'DELETE';
IF @@ROWCOUNT = 0
PRINT 'Warning: No rows were inserted';
GO
SELECT DeletedProductID, RemovedOnDate FROM Production.ZeroInventory;
W. Inserting data using the SELECT option
The following example shows how to insert multiple rows of data using an INSERT statement with a SELECT
option. The first INSERT statement uses a SELECT statement directly to retrieve data from the source table, and
then to store the result set in the EmployeeTitles table.

CREATE TABLE EmployeeTitles
( EmployeeKey INT NOT NULL,
LastName varchar(40) NOT NULL,
Title varchar(50) NOT NULL
);
INSERT INTO EmployeeTitles
SELECT EmployeeKey, LastName, Title
FROM ssawPDW.dbo.DimEmployee
WHERE EndDate IS NULL;

X. Specifying a label with the INSERT statement


The following example shows the use of a label with an INSERT statement.

-- Uses AdventureWorks

INSERT INTO DimCurrency
VALUES (500, N'C1', N'Currency1')
OPTION ( LABEL = N'label1' );

Y. Using a label and a query hint with the INSERT statement


This query shows the basic syntax for using a label and a query join hint with the INSERT statement. After the
query is submitted to the Control node, SQL Server, running on the Compute nodes, will apply the hash join
strategy when it generates the SQL Server query plan. For more information on join hints and how to use the
OPTION clause, see OPTION (SQL Server PDW ).

-- Uses AdventureWorks

INSERT INTO DimCustomer (CustomerKey, CustomerAlternateKey,
FirstName, MiddleName, LastName)
SELECT ProspectiveBuyerKey, ProspectAlternateKey,
FirstName, MiddleName, LastName
FROM ProspectiveBuyer p JOIN DimGeography g ON p.PostalCode = g.PostalCode
WHERE g.CountryRegionCode = 'FR'
OPTION ( LABEL = 'Add French Prospects', HASH JOIN);

See Also
BULK INSERT (Transact-SQL)
DELETE (Transact-SQL)
EXECUTE (Transact-SQL)
FROM (Transact-SQL)
IDENTITY (Property) (Transact-SQL)
NEWID (Transact-SQL)
SELECT (Transact-SQL)
UPDATE (Transact-SQL)
MERGE (Transact-SQL)
OUTPUT Clause (Transact-SQL)
Use the inserted and deleted Tables
INSERT (SQL Graph)

THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds one or more rows to a node or edge table in SQL Server.

NOTE
For standard Transact-SQL statements, see INSERT TABLE (Transact-SQL).

Transact-SQL Syntax Conventions

INSERT Into Node Table Syntax


The syntax for inserting into a node table is the same as that of a regular table.
[ WITH <common_table_expression> [ ,...n ] ]
INSERT
{
[ TOP ( expression ) [ PERCENT ] ]
[ INTO ]
{ <object> | rowset_function_limited
[ WITH ( <Table_Hint_Limited> [ ...n ] ) ]
}
{
[ (column_list) ] | [(<edge_table_column_list>)]
[ <OUTPUT Clause> ]
{ VALUES ( { DEFAULT | NULL | expression } [ ,...n ] ) [ ,...n ]
| derived_table
| execute_statement
| <dml_table_source>
| DEFAULT VALUES
}
}
}
[;]

<object> ::=
{
[ server_name . database_name . schema_name .
| database_name .[ schema_name ] .
| schema_name .
]
node_table_name | edge_table_name
}

<dml_table_source> ::=
SELECT <select_list>
FROM ( <dml_statement_with_output_clause> )
[AS] table_alias [ ( column_alias [ ,...n ] ) ]
[ WHERE <on_or_where_search_condition> ]
[ OPTION ( <query_hint> [ ,...n ] ) ]

<on_or_where_search_condition> ::=
{ <search_condition_with_match> | <search_condition> }

<search_condition_with_match> ::=
{ <graph_predicate> | [ NOT ] <predicate> | ( <search_condition> ) }
[ AND { <graph_predicate> | [ NOT ] <predicate> | ( <search_condition> ) } ]
[ ,...n ]

<search_condition> ::=
{ [ NOT ] <predicate> | ( <search_condition> ) }
[ { AND | OR } [ NOT ] { <predicate> | ( <search_condition> ) } ]
[ ,...n ]

<graph_predicate> ::=
MATCH( <graph_search_pattern> [ AND <graph_search_pattern> ] [ , ...n] )

<graph_search_pattern>::=
<node_alias> { { <-( <edge_alias> )- | -( <edge_alias> )-> } <node_alias> }

<edge_table_column_list> ::=
($from_id, $to_id, [column_list])

Arguments
This document describes arguments pertaining to SQL graph. For a full list and description of supported
arguments in the INSERT statement, see INSERT TABLE (Transact-SQL).
INTO
Is an optional keyword that can be used between INSERT and the target table.
search_condition_with_match
The MATCH clause can be used in a subquery while inserting into a node or edge table. For MATCH statement
syntax, see GRAPH MATCH (Transact-SQL).
graph_search_pattern
Search pattern provided to the MATCH clause as part of the graph predicate.
edge_table_column_list
Users must provide values for $from_id and $to_id when inserting into an edge. An error is returned if a
value is not provided or NULLs are inserted into these columns.

Remarks
Inserting into a node is the same as inserting into any relational table. Values for the $node_id column are
automatically generated.
While inserting into an edge table, users must provide values for the $from_id and $to_id columns.
BULK insert for a node table remains the same as for a relational table.
Before bulk inserting into an edge table, the node tables must be imported. Values for $from_id and $to_id can
then be extracted from the $node_id column of the node table and inserted as edges.
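As a rough sketch of that bulk pattern (dbo.friend_staging is a hypothetical staging table holding imported name pairs, with dbo.Person and dbo.friend as defined in the examples below):

-- Hypothetical staging table with from_name, to_name, and start_date columns.
INSERT INTO dbo.friend ($from_id, $to_id, start_date)
SELECT p1.$node_id, p2.$node_id, s.start_date
FROM dbo.friend_staging AS s
JOIN dbo.Person AS p1 ON p1.name = s.from_name
JOIN dbo.Person AS p2 ON p2.name = s.to_name;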
Permissions
INSERT permission is required on the target table.
INSERT permissions default to members of the sysadmin fixed server role, the db_owner and db_datawriter
fixed database roles, and the table owner. Members of the sysadmin, db_owner, and the db_securityadmin
roles, and the table owner can transfer permissions to other users.
To execute INSERT with the OPENROWSET function BULK option, you must be a member of the sysadmin fixed
server role or of the bulkadmin fixed server role.

Examples
A. Insert into node table
The following example creates a Person node table and inserts 2 rows into that table.

-- Create person node table
CREATE TABLE dbo.Person (ID integer PRIMARY KEY, name varchar(50)) AS NODE;

-- Insert records for Alice and John
INSERT INTO dbo.Person VALUES (1, 'Alice');
INSERT INTO dbo.Person VALUES (2, 'John');

B. Insert into edge table


The following example creates a friend edge table and inserts an edge into the table.

-- Create friend edge table
CREATE TABLE dbo.friend (start_date DATE) AS EDGE;

-- Create a friend edge that connects Alice and John
INSERT INTO dbo.friend VALUES ((SELECT $node_id FROM dbo.Person WHERE name = 'Alice'),
(SELECT $node_id FROM dbo.Person WHERE name = 'John'), '9/15/2011');
See Also
INSERT TABLE (Transact-SQL)
Graph processing with SQL Server 2017
MERGE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Performs insert, update, or delete operations on a target table based on the results of a join with a source table. For
example, you can synchronize two tables by inserting, updating, or deleting rows in one table based on differences
found in the other table.
Performance Tip: The conditional behavior described for the MERGE statement works best when the two tables
have a complex mixture of matching characteristics. For example, inserting a row if it does not exist, or updating
the row if it matches. When simply updating one table based on the rows of another table, improved
performance and scalability can be achieved with basic INSERT, UPDATE, and DELETE statements. For example:

INSERT tbl_A (col, col2)
SELECT col, col2
FROM tbl_B
WHERE NOT EXISTS (SELECT col FROM tbl_A A2 WHERE A2.col = tbl_B.col);
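In the same spirit, a plain UPDATE with a join (a sketch reusing the same hypothetical tbl_A and tbl_B tables) covers the update-only case:

UPDATE A
SET A.col2 = B.col2
FROM tbl_A AS A
JOIN tbl_B AS B
ON A.col = B.col;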

Transact-SQL Syntax Conventions

Syntax
[ WITH <common_table_expression> [,...n] ]
MERGE
[ TOP ( expression ) [ PERCENT ] ]
[ INTO ] <target_table> [ WITH ( <merge_hint> ) ] [ [ AS ] table_alias ]
USING <table_source>
ON <merge_search_condition>
[ WHEN MATCHED [ AND <clause_search_condition> ]
THEN <merge_matched> ] [ ...n ]
[ WHEN NOT MATCHED [ BY TARGET ] [ AND <clause_search_condition> ]
THEN <merge_not_matched> ]
[ WHEN NOT MATCHED BY SOURCE [ AND <clause_search_condition> ]
THEN <merge_matched> ] [ ...n ]
[ <output_clause> ]
[ OPTION ( <query_hint> [ ,...n ] ) ]
;

<target_table> ::=
{
[ database_name . schema_name . | schema_name . ]
target_table
}

<merge_hint>::=
{
{ [ <table_hint_limited> [ ,...n ] ]
[ [ , ] INDEX ( index_val [ ,...n ] ) ] }
}

<table_source> ::=
{
table_or_view_name [ [ AS ] table_alias ] [ <tablesample_clause> ]
[ WITH ( table_hint [ [ , ]...n ] ) ]
| rowset_function [ [ AS ] table_alias ]
[ ( bulk_column_alias [ ,...n ] ) ]
| user_defined_function [ [ AS ] table_alias ]
| OPENXML <openxml_clause>
| derived_table [ AS ] table_alias [ ( column_alias [ ,...n ] ) ]
| <joined_table>
| <pivoted_table>
| <unpivoted_table>
}

<merge_search_condition> ::=
<search_condition>

<merge_matched>::=
{ UPDATE SET <set_clause> | DELETE }

<set_clause>::=
SET
{ column_name = { expression | DEFAULT | NULL }
| { udt_column_name.{ { property_name = expression
| field_name = expression }
| method_name ( argument [ ,...n ] ) }
}
| column_name { .WRITE ( expression , @Offset , @Length ) }
| @variable = expression
| @variable = column = expression
| column_name { += | -= | *= | /= | %= | &= | ^= | |= } expression
| @variable { += | -= | *= | /= | %= | &= | ^= | |= } expression
| @variable = column { += | -= | *= | /= | %= | &= | ^= | |= } expression
} [ ,...n ]

<merge_not_matched>::=
{
INSERT [ ( column_list ) ]
{ VALUES ( values_list )
| DEFAULT VALUES }
}

<clause_search_condition> ::=
<search_condition>

<search_condition> ::=
{ [ NOT ] <predicate> | ( <search_condition> ) }
[ { AND | OR } [ NOT ] { <predicate> | ( <search_condition> ) } ]
[ ,...n ]

<predicate> ::=
{ expression { = | <> | != | > | >= | !> | < | <= | !< } expression
| string_expression [ NOT ] LIKE string_expression
[ ESCAPE 'escape_character' ]
| expression [ NOT ] BETWEEN expression AND expression
| expression IS [ NOT ] NULL
| CONTAINS
( { column | * } , '<contains_search_condition>' )
| FREETEXT ( { column | * } , 'freetext_string' )
| expression [ NOT ] IN ( subquery | expression [ ,...n ] )
| expression { = | <> | != | > | >= | !> | < | <= | !< }
{ ALL | SOME | ANY} ( subquery )
| EXISTS ( subquery ) }

<output_clause>::=
{
[ OUTPUT <dml_select_list> INTO { @table_variable | output_table }
[ (column_list) ] ]
[ OUTPUT <dml_select_list> ]
}

<dml_select_list>::=
{ <column_name> | scalar_expression }
[ [AS] column_alias_identifier ] [ ,...n ]
<column_name> ::=
{ DELETED | INSERTED | from_table_name } . { * | column_name }
| $action

Arguments
WITH <common_table_expression>
Specifies the temporary named result set or view, also known as common table expression, defined within the
scope of the MERGE statement. The result set is derived from a simple query and is referenced by the MERGE
statement. For more information, see WITH common_table_expression (Transact-SQL ).
TOP ( expression ) [ PERCENT ]
Specifies the number or percentage of rows that are affected. expression can be either a number or a percentage of
the rows. The rows referenced in the TOP expression are not arranged in any order. For more information, see TOP
(Transact-SQL ).
The TOP clause is applied after the entire source table and the entire target table are joined and the joined rows
that do not qualify for an insert, update, or delete action are removed. The TOP clause further reduces the number
of joined rows to the specified value and the insert, update, or delete actions are applied to the remaining joined
rows in an unordered fashion. That is, there is no order in which the rows are distributed among the actions
defined in the WHEN clauses. For example, specifying TOP (10) affects 10 rows; of these rows, 7 may be updated
and 3 inserted, or 1 may be deleted, 5 updated, and 4 inserted and so on.
Because the MERGE statement performs a full table scan of both the source and target tables, I/O performance
can be affected when using the TOP clause to modify a large table by creating multiple batches. In this scenario, it
is important to ensure that all successive batches target new rows.
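One way to guarantee that successive batches target new rows is to drive the batches from a key range instead of relying on TOP alone. The following rough sketch assumes hypothetical dbo.Source and dbo.Target tables with a positive integer key column KeyCol and a value column Val; it is one possible batching pattern, not the only one.

DECLARE @lo int = 0, @step int = 10000, @max int;
SELECT @max = MAX(KeyCol) FROM dbo.Source;
WHILE @lo <= @max
BEGIN
-- Each pass merges only the next slice of the key range.
MERGE dbo.Target AS t
USING (SELECT KeyCol, Val FROM dbo.Source
WHERE KeyCol > @lo AND KeyCol <= @lo + @step) AS s
ON t.KeyCol = s.KeyCol
WHEN MATCHED THEN
UPDATE SET t.Val = s.Val
WHEN NOT MATCHED BY TARGET THEN
INSERT (KeyCol, Val) VALUES (s.KeyCol, s.Val);
SET @lo = @lo + @step;
END;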
database_name
Is the name of the database in which target_table is located.
schema_name
Is the name of the schema to which target_table belongs.
target_table
Is the table or view against which the data rows from <table_source> are matched based on
<clause_search_condition>. target_table is the target of any insert, update, or delete operations specified by the
WHEN clauses of the MERGE statement.
If target_table is a view, any actions against it must satisfy the conditions for updating views. For more information,
see Modify Data Through a View.
target_table cannot be a remote table. target_table cannot have any rules defined on it.
[ AS ] table_alias
Is an alternative name used to reference a table.
USING <table_source>
Specifies the data source that is matched with the data rows in target_table based on <merge_search condition>.
The result of this match dictates the actions to take by the WHEN clauses of the MERGE statement.
<table_source> can be a remote table or a derived table that accesses remote tables.
<table_source> can be a derived table that uses the Transact-SQL table value constructor to construct a table by
specifying multiple rows.
For more information about the syntax and arguments of this clause, see FROM (Transact-SQL ).
ON <merge_search_condition>
Specifies the conditions on which <table_source> is joined with target_table to determine where they match.
Caution

It is important to specify only the columns from the target table that are used for matching purposes. That is,
specify columns from the target table that are compared to the corresponding column of the source table. Do not
attempt to improve query performance by filtering out rows in the target table in the ON clause, such as by
specifying AND NOT target_table.column_x = value. Doing so may return unexpected and incorrect results.
WHEN MATCHED THEN <merge_matched>
Specifies that all rows of target_table that match the rows returned by <table_source> ON
<merge_search_condition>, and satisfy any additional search condition, are either updated or deleted according to
the <merge_matched> clause.
The MERGE statement can have at most two WHEN MATCHED clauses. If two clauses are specified, then the first
clause must be accompanied by an AND <search_condition> clause. For any given row, the second WHEN
MATCHED clause is only applied if the first is not. If there are two WHEN MATCHED clauses, then one must
specify an UPDATE action and one must specify a DELETE action. If UPDATE is specified in the <merge_matched>
clause, and more than one row of <table_source> matches a row in target_table based on
<merge_search_condition>, SQL Server returns an error. The MERGE statement cannot update the same row
more than once, or update and delete the same row.
WHEN NOT MATCHED [ BY TARGET ] THEN <merge_not_matched>
Specifies that a row is inserted into target_table for every row returned by <table_source> ON
<merge_search_condition> that does not match a row in target_table, but does satisfy an additional search
condition, if present. The values to insert are specified by the <merge_not_matched> clause. The MERGE
statement can have only one WHEN NOT MATCHED clause.
WHEN NOT MATCHED BY SOURCE THEN <merge_matched>
Specifies that all rows of target_table that do not match the rows returned by <table_source> ON
<merge_search_condition>, and that satisfy any additional search condition, are either updated or deleted
according to the <merge_matched> clause.
The MERGE statement can have at most two WHEN NOT MATCHED BY SOURCE clauses. If two clauses are
specified, then the first clause must be accompanied by an AND <clause_search_condition> clause. For any given
row, the second WHEN NOT MATCHED BY SOURCE clause is only applied if the first is not. If there are two
WHEN NOT MATCHED BY SOURCE clauses, then one must specify an UPDATE action and one must specify a
DELETE action. Only columns from the target table can be referenced in <clause_search_condition>.
When no rows are returned by <table_source>, columns in the source table cannot be accessed. If the update or
delete action specified in the <merge_matched> clause references columns in the source table, error 207 (Invalid
column name) is returned. For example, the clause
WHEN NOT MATCHED BY SOURCE THEN UPDATE SET TargetTable.Col1 = SourceTable.Col1 may cause the statement to fail
because Col1 in the source table is inaccessible.
AND <clause_search_condition>
Specifies any valid search condition. For more information, see Search Condition (Transact-SQL ).
<table_hint_limited>
Specifies one or more table hints that are applied on the target table for each of the insert, update, or delete actions
that are performed by the MERGE statement. The WITH keyword and the parentheses are required.
NOLOCK and READUNCOMMITTED are not allowed. For more information about table hints, see Table Hints
(Transact-SQL ).
Specifying the TABLOCK hint on a table that is the target of an INSERT statement has the same effect as specifying
the TABLOCKX hint. An exclusive lock is taken on the table. When FORCESEEK is specified, it is applied to the
implicit instance of the target table joined with the source table.
Caution
Specifying READPAST with WHEN NOT MATCHED [ BY TARGET ] THEN INSERT may result in INSERT
operations that violate UNIQUE constraints.
INDEX ( index_val [ ,...n ] )
Specifies the name or ID of one or more indexes on the target table for performing an implicit join with the source
table. For more information, see Table Hints (Transact-SQL ).
<output_clause>
Returns a row for every row in target_table that is updated, inserted, or deleted, in no particular order. $action can
be specified in the output clause. $action is a column of type nvarchar(10) that returns one of three values for
each row: 'INSERT', 'UPDATE', or 'DELETE', according to the action that was performed on that row. For more
information about the arguments of this clause, see OUTPUT Clause (Transact-SQL ).
OPTION ( <query_hint> [ ,...n ] )
Specifies that optimizer hints are used to customize the way the Database Engine processes the statement. For
more information, see Query Hints (Transact-SQL ).
<merge_matched>
Specifies the update or delete action that is applied to all rows of target_table that do not match the rows returned
by <table_source> ON <merge_search_condition>, and that satisfy any additional search condition.
UPDATE SET <set_clause>
Specifies the list of column or variable names to be updated in the target table and the values with which to update
them.
For more information about the arguments of this clause, see UPDATE (Transact-SQL ). Setting a variable to the
same value as a column is not permitted.
DELETE
Specifies that the rows matching rows in target_table are deleted.
<merge_not_matched>
Specifies the values to insert into the target table.
(column_list)
Is a list of one or more columns of the target table in which to insert data. Columns must be specified as a single-
part name or else the MERGE statement will fail. column_list must be enclosed in parentheses and delimited by
commas.
VALUES ( values_list)
Is a comma-separated list of constants, variables, or expressions that return values to insert into the target table.
Expressions cannot contain an EXECUTE statement.
DEFAULT VALUES
Forces the inserted row to contain the default values defined for each column.
For more information about this clause, see INSERT (Transact-SQL ).
<search_condition>
Specifies the search conditions used to specify <merge_search_condition> or <clause_search_condition>. For
more information about the arguments for this clause, see Search Condition (Transact-SQL ).

Remarks
At least one of the three MATCHED clauses must be specified, but they can be specified in any order. A variable
cannot be updated more than once in the same MATCHED clause.
Any insert, update, or delete actions specified on the target table by the MERGE statement are limited by any
constraints defined on it, including any cascading referential integrity constraints. If IGNORE_DUP_KEY is set to
ON for any unique indexes on the target table, MERGE ignores this setting.
The MERGE statement requires a semicolon (;) as a statement terminator. Error 10713 is raised when a MERGE
statement is run without the terminator.
When used after MERGE, @@ROWCOUNT (Transact-SQL ) returns the total number of rows inserted, updated,
and deleted to the client.
MERGE is a fully reserved keyword when the database compatibility level is set to 100 or higher. The MERGE
statement is available under both 90 and 100 database compatibility levels; however the keyword is not fully
reserved when the database compatibility level is set to 90.
The MERGE statement should not be used when using queued updating replication. The MERGE and queued
updating trigger are not compatible. Replace the MERGE statement with an insert or an update statement.

Trigger Implementation
For every insert, update, or delete action specified in the MERGE statement, SQL Server fires any corresponding
AFTER triggers defined on the target table, but does not guarantee on which action to fire triggers first or last.
Triggers defined for the same action honor the order you specify. For more information about setting trigger firing
order, see Specify First and Last Triggers.
If the target table has an enabled INSTEAD OF trigger defined on it for an insert, update, or delete action
performed by a MERGE statement, then it must have an enabled INSTEAD OF trigger for all of the actions
specified in the MERGE statement.
If there are any INSTEAD OF UPDATE or INSTEAD OF DELETE triggers defined on target_table, the update or
delete operations are not performed. Instead, the triggers fire and the inserted and deleted tables are populated
accordingly.
If there are any INSTEAD OF INSERT triggers defined on target_table, the insert operation is not performed.
Instead, the triggers fire and the inserted table is populated accordingly.

Permissions
Requires SELECT permission on the source table and INSERT, UPDATE, or DELETE permissions on the target
table. For additional information, see the Permissions section in the SELECT, INSERT, UPDATE, and DELETE
topics.

Examples
A. Using MERGE to perform INSERT and UPDATE operations on a table in a single statement
A common scenario is updating one or more columns in a table if a matching row exists, or inserting the data as a
new row if a matching row does not exist. This is usually done by passing parameters to a stored procedure that
contains the appropriate UPDATE and INSERT statements. With the MERGE statement, you can perform both
tasks in a single statement. The following example shows a stored procedure in the AdventureWorks2012database
that contains both an INSERT statement and an UPDATE statement. The procedure is then modified to perform
the equivalent operations by using a single MERGE statement.
CREATE PROCEDURE dbo.InsertUnitMeasure
@UnitMeasureCode nchar(3),
@Name nvarchar(25)
AS
BEGIN
SET NOCOUNT ON;
-- Update the row if it exists.
UPDATE Production.UnitMeasure
SET Name = @Name
WHERE UnitMeasureCode = @UnitMeasureCode
-- Insert the row if the UPDATE statement failed.
IF (@@ROWCOUNT = 0)
BEGIN
INSERT INTO Production.UnitMeasure (UnitMeasureCode, Name)
VALUES (@UnitMeasureCode, @Name)
END
END;
GO
-- Test the procedure and return the results.
EXEC InsertUnitMeasure @UnitMeasureCode = 'ABC', @Name = 'Test Value';
SELECT UnitMeasureCode, Name FROM Production.UnitMeasure
WHERE UnitMeasureCode = 'ABC';
GO

-- Rewrite the procedure to perform the same operations using the
-- MERGE statement.
-- Create a temporary table to hold the updated or inserted values
-- from the OUTPUT clause.
CREATE TABLE #MyTempTable
(ExistingCode nchar(3),
ExistingName nvarchar(50),
ExistingDate datetime,
ActionTaken nvarchar(10),
NewCode nchar(3),
NewName nvarchar(50),
NewDate datetime
);
GO
ALTER PROCEDURE dbo.InsertUnitMeasure
@UnitMeasureCode nchar(3),
@Name nvarchar(25)
AS
BEGIN
SET NOCOUNT ON;

MERGE Production.UnitMeasure AS target
USING (SELECT @UnitMeasureCode, @Name) AS source (UnitMeasureCode, Name)
ON (target.UnitMeasureCode = source.UnitMeasureCode)
WHEN MATCHED THEN
UPDATE SET Name = source.Name
WHEN NOT MATCHED THEN
INSERT (UnitMeasureCode, Name)
VALUES (source.UnitMeasureCode, source.Name)
OUTPUT deleted.*, $action, inserted.* INTO #MyTempTable;
END;
GO
-- Test the procedure and return the results.
EXEC InsertUnitMeasure @UnitMeasureCode = 'ABC', @Name = 'New Test Value';
EXEC InsertUnitMeasure @UnitMeasureCode = 'XYZ', @Name = 'Test Value';
EXEC InsertUnitMeasure @UnitMeasureCode = 'ABC', @Name = 'Another Test Value';

SELECT * FROM #MyTempTable;

-- Cleanup
DELETE FROM Production.UnitMeasure WHERE UnitMeasureCode IN ('ABC','XYZ');
DROP TABLE #MyTempTable;
GO
B. Using MERGE to perform UPDATE and DELETE operations on a table in a single statement
The following example uses MERGE to update the ProductInventory table in the AdventureWorks2012 sample
database on a daily basis, based on orders that are processed in the SalesOrderDetail table. The Quantity column
of the ProductInventory table is updated by subtracting the number of orders placed each day for each product in
the SalesOrderDetail table. If the number of orders for a product drops the inventory level of a product to 0 or
less, the row for that product is deleted from the ProductInventory table.

CREATE PROCEDURE Production.usp_UpdateInventory
@OrderDate datetime
AS
MERGE Production.ProductInventory AS target
USING (SELECT ProductID, SUM(OrderQty) FROM Sales.SalesOrderDetail AS sod
JOIN Sales.SalesOrderHeader AS soh
ON sod.SalesOrderID = soh.SalesOrderID
AND soh.OrderDate = @OrderDate
GROUP BY ProductID) AS source (ProductID, OrderQty)
ON (target.ProductID = source.ProductID)
WHEN MATCHED AND target.Quantity - source.OrderQty <= 0
THEN DELETE
WHEN MATCHED
THEN UPDATE SET target.Quantity = target.Quantity - source.OrderQty,
target.ModifiedDate = GETDATE()
OUTPUT $action, Inserted.ProductID, Inserted.Quantity,
Inserted.ModifiedDate, Deleted.ProductID,
Deleted.Quantity, Deleted.ModifiedDate;
GO

EXECUTE Production.usp_UpdateInventory '20030501'

C. Using MERGE to perform UPDATE and INSERT operations on a target table by using a derived source table
The following example uses MERGE to modify the SalesReason table in the AdventureWorks2012 database by
either updating or inserting rows. When the value of NewName in the source table matches a value in the Name
column of the target table (SalesReason), the ReasonType column is updated in the target table. When the value of
NewName does not match, the source row is inserted into the target table. The source table is a derived table that
uses the Transact-SQL table value constructor to specify multiple rows for the source table. For more information
about using the table value constructor in a derived table, see Table Value Constructor (Transact-SQL). The
example also shows how to store the results of the OUTPUT clause in a table variable and then summarize the
results of the MERGE statement by performing a simple select operation that returns the count of inserted and
updated rows.

-- Create a temporary table variable to hold the output actions.
DECLARE @SummaryOfChanges TABLE(Change VARCHAR(20));

MERGE INTO Sales.SalesReason AS Target
USING (VALUES ('Recommendation','Other'), ('Review', 'Marketing'),
('Internet', 'Promotion'))
AS Source (NewName, NewReasonType)
ON Target.Name = Source.NewName
WHEN MATCHED THEN
UPDATE SET ReasonType = Source.NewReasonType
WHEN NOT MATCHED BY TARGET THEN
INSERT (Name, ReasonType) VALUES (NewName, NewReasonType)
OUTPUT $action INTO @SummaryOfChanges;

-- Query the results of the table variable.
SELECT Change, COUNT(*) AS CountPerChange
FROM @SummaryOfChanges
GROUP BY Change;
D. Inserting the results of the MERGE statement into another table
The following example captures data returned from the OUTPUT clause of a MERGE statement and inserts that
data into another table. The MERGE statement updates the Quantity column of the ProductInventory table in the
AdventureWorks2012 database, based on orders that are processed in the SalesOrderDetail table. The example
captures the rows that are updated and inserts them into another table that is used to track inventory changes.

CREATE TABLE Production.UpdatedInventory
(ProductID INT NOT NULL, LocationID int, NewQty int, PreviousQty int,
CONSTRAINT PK_Inventory PRIMARY KEY CLUSTERED (ProductID, LocationID));
GO
INSERT INTO Production.UpdatedInventory
SELECT ProductID, LocationID, NewQty, PreviousQty
FROM
( MERGE Production.ProductInventory AS pi
USING (SELECT ProductID, SUM(OrderQty)
FROM Sales.SalesOrderDetail AS sod
JOIN Sales.SalesOrderHeader AS soh
ON sod.SalesOrderID = soh.SalesOrderID
AND soh.OrderDate BETWEEN '20030701' AND '20030731'
GROUP BY ProductID) AS src (ProductID, OrderQty)
ON pi.ProductID = src.ProductID
WHEN MATCHED AND pi.Quantity - src.OrderQty >= 0
THEN UPDATE SET pi.Quantity = pi.Quantity - src.OrderQty
WHEN MATCHED AND pi.Quantity - src.OrderQty <= 0
THEN DELETE
OUTPUT $action, Inserted.ProductID, Inserted.LocationID,
Inserted.Quantity AS NewQty, Deleted.Quantity AS PreviousQty)
AS Changes (Action, ProductID, LocationID, NewQty, PreviousQty)
WHERE Action = 'UPDATE';
GO

See Also
SELECT (Transact-SQL)
INSERT (Transact-SQL)
UPDATE (Transact-SQL)
DELETE (Transact-SQL)
OUTPUT Clause (Transact-SQL)
MERGE in Integration Services Packages
FROM (Transact-SQL)
Table Value Constructor (Transact-SQL)
RENAME (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Renames a user-created table in SQL Data Warehouse. Renames a user-created table or database in Parallel Data
Warehouse.

NOTE
To rename a database in SQL Data Warehouse, use ALTER DATABASE (Azure SQL Data Warehouse). To rename a database in
Azure SQL Database, use the ALTER DATABASE (Azure SQL Database) statement. To rename a database in SQL Server, use
the stored procedure sp_renamedb (Transact-SQL).

Syntax
-- Syntax for Azure SQL Data Warehouse

-- Rename a table.
RENAME OBJECT [::] [ [ database_name . [ schema_name ] . ] | [ schema_name . ] ] table_name TO new_table_name
[;]

-- Syntax for Parallel Data Warehouse

-- Rename a table
RENAME OBJECT [::] [ [ database_name . [ schema_name ] . ] | [ schema_name . ] ] table_name TO new_table_name
[;]

-- Rename a database
RENAME DATABASE [::] database_name TO new_database_name
[;]

Arguments
RENAME OBJECT [::] [ [ database_name . [ schema_name ] . ] | [ schema_name . ] ] table_name TO
new_table_name
APPLIES TO: SQL Data Warehouse, Parallel Data Warehouse
Change the name of a user-defined table. Specify the table to be renamed with a one-, two-, or three-part name.
Specify the new table new_table_name as a one-part name.
RENAME DATABASE [::] database_name TO new_database_name
APPLIES TO: Parallel Data Warehouse
Change the name of a user-defined database from database_name to new_database_name. You can't rename a
database to any of these Parallel Data Warehouse reserved database names:
master
model
msdb
tempdb
pdwtempdb1
pdwtempdb2
DWConfiguration
DWDiagnostics
DWQueue

Permissions
To run this command, you need this permission:
ALTER permission on the table

Limitations and Restrictions


Cannot rename an external table, indexes, or views
You can't rename an external table, indexes, or views. Instead of renaming, you can drop the external table, index, or
view and then re-create it with the new name.
Cannot rename a table in use
You can't rename a table or database while it is in use. Renaming a table requires an exclusive lock on the table. If
the table is in use, you may need to terminate sessions that are using the table. To terminate a session, you can use
the KILL command. Use KILL cautiously since when a session is terminated any uncommitted work will be rolled
back. Sessions in SQL Data Warehouse are prefixed by 'SID'. Include 'SID' and the session number when invoking
the KILL command. Example D later in this topic shows how to view a list of active or idle sessions and then terminate a session.
Views are not updated
When renaming a database, all views that use the former database name will become invalid. This behavior applies
to views both inside and outside the database. For example, if the Sales database is renamed, a view that contains
SELECT * FROM Sales.dbo.table1 will become invalid. To resolve this issue, you can either avoid using three-part
names in views, or update the views to reference the new database name.
When renaming a table, views are not updated to reference the new table name. Each view, inside or outside of the
database, that references the former table name will become invalid. To resolve this issue, you can update each
view to reference the new table name.
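For example, a minimal sketch, assuming a hypothetical view dbo.vCustomerNames that referenced the table
before it was renamed from Customer to Customer1 (all names here are illustrative):

-- The view still references the old name dbo.Customer and is now invalid,
-- so drop it and re-create it against the new name.
DROP VIEW dbo.vCustomerNames;
GO
CREATE VIEW dbo.vCustomerNames AS
SELECT CustomerKey, CustomerName
FROM dbo.Customer1;  -- updated from dbo.Customer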

Locking
Renaming a table takes a shared lock on the DATABASE object, a shared lock on the SCHEMA object, and an
exclusive lock on the table.

Examples
A. Rename a database
APPLIES TO: Parallel Data Warehouse only
This example renames the user-defined database AdWorks to AdWorks2.
-- Rename the user defined database AdWorks
RENAME DATABASE AdWorks to AdWorks2;

B. Rename a table
APPLIES TO: SQL Data Warehouse, Parallel Data Warehouse
This example renames the Customer table to Customer1.

-- Rename the customer table
RENAME OBJECT Customer TO Customer1;

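-- Equivalent rename, qualifying the table with a three-part name: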
RENAME OBJECT mydb.dbo.Customer TO Customer1;

When renaming a table, all objects and properties associated with the table are updated to reference the new table
name. For example, table definitions, indexes, constraints, and permissions are updated. Views are not updated.
C. Move a table to a different schema
APPLIES TO: SQL Data Warehouse, Parallel Data Warehouse
If your intent is to move the object to a different schema, use ALTER SCHEMA (Transact-SQL ). For example, the
following statement moves the table item from the product schema to the dbo schema.

ALTER SCHEMA dbo TRANSFER OBJECT::product.item;

D. Terminate sessions before renaming a table


APPLIES TO: SQL Data Warehouse, Parallel Data Warehouse
It is important to remember that you can't rename a table while it is in use. A rename of a table requires an
exclusive lock on the table. If the table is in use, you may need to terminate the session using the table. To
terminate a session, you can use the KILL command. Use KILL cautiously since when a session is terminated any
uncommitted work will be rolled back. Sessions in SQL Data Warehouse are prefixed by 'SID'. You will need to
include 'SID' and the session number when invoking the KILL command. This example views a list of active or idle
sessions and then terminates session 'SID1234'.

-- View a list of the current sessions
SELECT session_id, login_name, status
FROM sys.dm_pdw_exec_sessions
WHERE status='Active' OR status='Idle';

-- Terminate a session using the session_id.
KILL 'SID1234';
ADD SIGNATURE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse
Adds a digital signature to a stored procedure, function, assembly, or trigger. Also adds a countersignature to a stored procedure, function, assembly, or trigger.
Transact-SQL Syntax Conventions

Syntax
ADD [ COUNTER ] SIGNATURE TO module_class::module_name
BY <crypto_list> [ ,...n ]

<crypto_list> ::=
CERTIFICATE cert_name
| CERTIFICATE cert_name [ WITH PASSWORD = 'password' ]
| CERTIFICATE cert_name WITH SIGNATURE = signed_blob
| ASYMMETRIC KEY Asym_Key_Name
| ASYMMETRIC KEY Asym_Key_Name [ WITH PASSWORD = 'password' ]
| ASYMMETRIC KEY Asym_Key_Name WITH SIGNATURE = signed_blob

Arguments
module_class
Is the class of the module to which the signature is added. The default for schema-scoped modules is OBJECT.
module_name
Is the name of a stored procedure, function, assembly, or trigger to be signed or countersigned.
CERTIFICATE cert_name
Is the name of a certificate with which to sign or countersign the stored procedure, function, assembly, or trigger.
WITH PASSWORD = 'password'
Is the password that is required to decrypt the private key of the certificate or asymmetric key. This clause is only required if the private key is not protected by the database
master key.
SIGNATURE = signed_blob
Specifies the signed, binary large object (BLOB) of the module. This clause is useful if you want to ship a module without shipping the private key. When you use this clause,
only the module, signature, and public key are required to add the signed binary large object to a database. signed_blob is the blob itself in hexadecimal format.
ASYMMETRIC KEY Asym_Key_Name
Is the name of an asymmetric key with which to sign or counter-sign the stored procedure, function, assembly, or trigger.

Remarks
The module being signed or countersigned and the certificate or asymmetric key used to sign it must already exist. Every character in the module is included in the signature
calculation. This includes leading carriage returns and line feeds.
A module can be signed and countersigned by any number of certificates and asymmetric keys.
The signature of a module is dropped when the module is changed.
If a module contains an EXECUTE AS clause, the security ID (SID) of the principal is also included as a part of the signing process.
Caution

Module signing should only be used to grant permissions, never to deny or revoke permissions.
Inline table-valued functions cannot be signed.
Information about signatures is visible in the sys.crypt_properties catalog view.

WARNING
When recreating a procedure for signature, all the statements in the original batch must match the recreation batch. If any portion of the batch differs, even in spaces or comments, the resulting
signature will be different.
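A minimal sketch of this pitfall, reusing the cert_signature_demo certificate and sp_signature_demo procedure
created in Example B below; the <previously_saved_blob> placeholder stands for the BLOB captured before the
change:

-- Re-create the procedure with one extra comment; the module text no
-- longer matches the batch that produced the saved signature BLOB.
ALTER PROC [sp_signature_demo]
AS
-- added comment: even this changes the signature calculation
PRINT 'This is the content of the procedure.' ;
GO
-- Adding the old BLOB back now fails validation, because the saved
-- signature no longer corresponds to the module text.
ADD SIGNATURE TO [sp_signature_demo]
BY CERTIFICATE [cert_signature_demo]
WITH SIGNATURE = <previously_saved_blob> ;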

Countersignatures
When executing a signed module, the signatures will be temporarily added to the SQL token, but the signatures are lost if the module executes another module or if the
module terminates execution. A countersignature is a special form of signature. By itself, a countersignature does not grant any permissions, however, it allows signatures
made by the same certificate or asymmetric key to be kept for the duration of the call made to the countersigned object.
For example, presume that user Alice calls procedure ProcSelectT1ForAlice, which calls procedure procSelectT1, which selects from table T1. Alice has EXECUTE permission
on ProcSelectT1ForAlice and procSelectT1, but she does not have SELECT permission on T1, and no ownership chaining is involved in this entire chain. Alice cannot access
table T1, either directly, or through the use of ProcSelectT1ForAlice and procSelectT1. Since we want Alice to always use ProcSelectT1ForAlice for access, we don't want to
grant her permission to execute procSelectT1. How can we accomplish this?
If we sign procSelectT1, such that procSelectT1 can access T1, then Alice can invoke procSelectT1 directly and she doesn't have to call ProcSelectT1ForAlice.
We could deny EXECUTE permission on procSelectT1 to Alice, but then Alice would not be able to call procSelectT1 through ProcSelectT1ForAlice either.
Signing ProcSelectT1ForAlice would not work by itself, because the signature would be lost in the call to procSelectT1.
However, by countersigning procSelectT1 with the same certificate used to sign ProcSelectT1ForAlice, SQL Server will keep the signature across the call chain and will allow
access to T1. If Alice attempts to call procSelectT1 directly, she cannot access T1, because the countersignature doesn't grant any rights. Example C below shows the Transact-SQL for this example.

Permissions
Requires ALTER permission on the object and CONTROL permission on the certificate or asymmetric key. If an associated private key is protected by a password, the user
also must have the password.

Examples
A. Signing a stored procedure by using a certificate
The following example signs the stored procedure HumanResources.uspUpdateEmployeeLogin with the certificate HumanResourcesDP.

USE AdventureWorks2012;
ADD SIGNATURE TO HumanResources.uspUpdateEmployeeLogin
BY CERTIFICATE HumanResourcesDP;
GO

B. Signing a stored procedure by using a signed BLOB


The following example creates a new database and creates a certificate to use in the example. The example creates and signs a simple stored procedure and retrieves the
signature BLOB from sys.crypt_properties. The signature is then dropped and added again. The example signs the procedure by using the WITH SIGNATURE syntax.

CREATE DATABASE TestSignature;
GO
USE TestSignature;
GO
-- Create a CERTIFICATE to sign the procedure.
CREATE CERTIFICATE cert_signature_demo
ENCRYPTION BY PASSWORD = 'pGFD4bb925DGvbd2439587y'
WITH SUBJECT = 'ADD SIGNATURE demo';
GO
-- Create a simple procedure.
CREATE PROC [sp_signature_demo]
AS
PRINT 'This is the content of the procedure.' ;
GO
-- Sign the procedure.
ADD SIGNATURE TO [sp_signature_demo]
BY CERTIFICATE [cert_signature_demo]
WITH PASSWORD = 'pGFD4bb925DGvbd2439587y' ;
GO
-- Get the signature binary BLOB for the sp_signature_demo procedure.
SELECT cp.crypt_property
FROM sys.crypt_properties AS cp
JOIN sys.certificates AS cer
ON cp.thumbprint = cer.thumbprint
WHERE cer.name = 'cert_signature_demo' ;
GO

The crypt_property signature that is returned by this statement will be different each time you create a procedure. Make a note of the result for use later in this example. For
this example, the result demonstrated is:
0x831F5530C86CC8ED606E5BC2720DA835351E46219A6D5DE9CE546297B88AEF3B6A7051891AF3EE7A68EAB37CD8380988B4C3F7469C8EABDD9579A2A5C507A4482905C2F24024FFB2F9BD7A953DD5E98470C4AA90CE83237739BB5FAE7BAC796E7710BDE.

-- Drop the signature so that it can be signed again.
DROP SIGNATURE FROM [sp_signature_demo]
BY CERTIFICATE [cert_signature_demo];
GO
-- Add the signature. Use the signature BLOB obtained earlier.
ADD SIGNATURE TO [sp_signature_demo]
BY CERTIFICATE [cert_signature_demo]
WITH SIGNATURE =
0x831F5530C86CC8ED606E5BC2720DA835351E46219A6D5DE9CE546297B88AEF3B6A7051891AF3EE7A68EAB37CD8380988B4C3F7469C8EABDD9579A2A5C507A4482905C2F24024FFB2F9BD7A953DD5E98470C4AA90CE83237739BB5FAE7BAC796E7710BDE291B03C43582F6F2D3B381F2102EEF8407731E01A51E24D808D54B373;
GO

C. Accessing a procedure using a countersignature
The following example shows how countersigning can help control access to an object.
-- Create the testDB database
CREATE DATABASE testDB;
GO
USE testDB;
GO
-- Create table T1
CREATE TABLE T1 (c varchar(11));
INSERT INTO T1 VALUES ('This is T1.');

-- Create a TestUser user to own table T1
CREATE USER TestUser WITHOUT LOGIN;
ALTER AUTHORIZATION ON T1 TO TestUser;

-- Create a certificate for signing
CREATE CERTIFICATE csSelectT
ENCRYPTION BY PASSWORD = 'SimplePwd01'
WITH SUBJECT = 'Certificate used to grant SELECT on T1';
CREATE USER ucsSelectT1 FROM CERTIFICATE csSelectT;
GRANT SELECT ON T1 TO ucsSelectT1;

-- Create a principal with low privileges
CREATE LOGIN Alice WITH PASSWORD = 'SimplePwd01';
CREATE USER Alice;

-- Verify Alice cannot access T1
EXECUTE AS LOGIN = 'Alice';
SELECT * FROM T1;
REVERT;

-- Create a procedure that directly accesses T1
CREATE PROCEDURE procSelectT1 AS
BEGIN
PRINT 'Now selecting from T1...';
SELECT * FROM T1;
END;
GO
GRANT EXECUTE ON procSelectT1 to public;

-- Create special procedure for accessing T1
CREATE PROCEDURE procSelectT1ForAlice AS
BEGIN
IF USER_ID() <> USER_ID('Alice')
BEGIN
PRINT 'Only Alice can use this.';
RETURN
END
EXEC procSelectT1;
END;
GO
GRANT EXECUTE ON procSelectT1ForAlice TO PUBLIC;

-- Verify procedure works for a sysadmin user
EXEC procSelectT1ForAlice;

-- Alice still can't use the procedure yet
EXECUTE AS LOGIN = 'Alice';
EXEC procSelectT1ForAlice;
REVERT;

-- Sign procedure to grant it SELECT permission
ADD SIGNATURE TO procSelectT1ForAlice BY CERTIFICATE csSelectT
WITH PASSWORD = 'SimplePwd01';

-- Countersign procSelectT1 to make this work
ADD COUNTER SIGNATURE TO procSelectT1 BY CERTIFICATE csSelectT
WITH PASSWORD = 'SimplePwd01';

-- Now the proc works.
-- Note that calling procSelectT1 directly still doesn't work
EXECUTE AS LOGIN = 'Alice';
EXEC procSelectT1ForAlice;
EXEC procSelectT1;
REVERT;

-- Cleanup
USE master;
GO
DROP DATABASE testDB;
DROP LOGIN Alice;

See Also
sys.crypt_properties (Transact-SQL)
DROP SIGNATURE (Transact-SQL)
CLOSE MASTER KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Closes the master key of the current database.
Transact-SQL Syntax Conventions

Syntax
CLOSE MASTER KEY

Arguments
Takes no arguments.

Remarks
This statement reverses the operation performed by OPEN MASTER KEY. CLOSE MASTER KEY only succeeds
when the database master key was opened in the current session by using the OPEN MASTER KEY statement.
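A minimal sketch of the pairing; the password shown is a hypothetical placeholder:

OPEN MASTER KEY DECRYPTION BY PASSWORD = '<hypothetical_password>';
-- ... statements that require the open database master key ...
CLOSE MASTER KEY;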

Permissions
No permissions are required.

Examples
USE AdventureWorks2012;
CLOSE MASTER KEY;
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse
USE master;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '43987hkhj4325tsku7';
GO
CLOSE MASTER KEY;
GO

See Also
CREATE MASTER KEY (Transact-SQL)
OPEN MASTER KEY (Transact-SQL)
Encryption Hierarchy
CLOSE SYMMETRIC KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Closes a symmetric key, or closes all symmetric keys open in the current session.
Transact-SQL Syntax Conventions

Syntax
CLOSE { SYMMETRIC KEY key_name | ALL SYMMETRIC KEYS }

Arguments
key_name
Is the name of the symmetric key to be closed.

Remarks
Open symmetric keys are bound to the session, not to the security context. An open key will continue to be
available until it is either explicitly closed or the session is terminated. CLOSE ALL SYMMETRIC KEYS will close
any database master key that was opened in the current session by using the OPEN MASTER KEY statement.
Information about open keys is visible in the sys.openkeys (Transact-SQL) catalog view.
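For example, a sketch assuming a hypothetical symmetric key SalesKey09 protected by a certificate SalesCert:

-- Hypothetical names: SalesKey09 (symmetric key), SalesCert (certificate).
OPEN SYMMETRIC KEY SalesKey09 DECRYPTION BY CERTIFICATE SalesCert;
SELECT key_name FROM sys.openkeys;  -- SalesKey09 is listed for this session
CLOSE SYMMETRIC KEY SalesKey09;
SELECT key_name FROM sys.openkeys;  -- SalesKey09 no longer appears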

Permissions
No explicit permission is required to close a symmetric key.

Examples
A. Closing a symmetric key
The following example closes the symmetric key ShippingSymKey04.

CLOSE SYMMETRIC KEY ShippingSymKey04;
GO

B. Closing all symmetric keys
The following example closes all symmetric keys that are open in the current session, and also the explicitly
opened database master key.

CLOSE ALL SYMMETRIC KEYS;
GO

See Also
CREATE SYMMETRIC KEY (Transact-SQL)
ALTER SYMMETRIC KEY (Transact-SQL)
OPEN SYMMETRIC KEY (Transact-SQL)
DROP SYMMETRIC KEY (Transact-SQL)
DENY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies a permission to a principal. Prevents that principal from inheriting the permission through its group or
role memberships. DENY takes precedence over all permissions, except that DENY does not apply to object
owners or members of the sysadmin fixed server role.
Security Note: Members of the sysadmin fixed server role and object owners cannot be denied permissions.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

-- Simplified syntax for DENY
DENY { ALL [ PRIVILEGES ] }
| <permission> [ ( column [ ,...n ] ) ] [ ,...n ]
[ ON [ <class> :: ] securable ]
TO principal [ ,...n ]
[ CASCADE ] [ AS principal ]
[;]

<permission> ::=
{ see the tables below }

<class> ::=
{ see the tables below }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

DENY
<permission> [ ,...n ]
[ ON [ <class> :: ] securable ]
TO principal [ ,...n ]
[ CASCADE ]
[;]

<permission> ::=
{ see the tables below }

<class> ::=
{
LOGIN
| DATABASE
| OBJECT
| ROLE
| SCHEMA
| USER
}

Arguments
ALL
This option does not deny all possible permissions. Denying ALL is equivalent to denying the following
permissions.
If the securable is a database, ALL means BACKUP DATABASE, BACKUP LOG, CREATE DATABASE,
CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and
CREATE VIEW.
If the securable is a scalar function, ALL means EXECUTE and REFERENCES.
If the securable is a table-valued function, ALL means DELETE, INSERT, REFERENCES, SELECT, and
UPDATE.
If the securable is a stored procedure, ALL means EXECUTE.
If the securable is a table, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
If the securable is a view, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.

NOTE
The DENY ALL syntax is deprecated. This feature will be removed in a future version of Microsoft SQL Server. Avoid using
this feature in new development work, and plan to modify applications that currently use this feature. Deny specific
permissions instead.

PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the sub-topics
listed below.
column
Specifies the name of a column in a table on which permissions are being denied. The parentheses () are
required.
class
Specifies the class of the securable on which the permission is being denied. The scope qualifier :: is required.
securable
Specifies the securable on which the permission is being denied.
TO principal
Is the name of a principal. The principals to which permissions on a securable can be denied vary, depending on
the securable. See the securable-specific topics listed below for valid combinations.
CASCADE
Indicates that the permission is denied to the specified principal and to all other principals to which the principal
granted the permission. Required when the principal has the permission with GRANT OPTION.
AS principal
Use the AS principal clause to indicate that the principal recorded as the denier of the permission should be a
principal other than the person executing the statement. For example, presume that user Mary is principal_id 12
and user Raul is principal_id 15. Mary executes DENY SELECT ON OBJECT::X TO Steven AS Raul;
Now the sys.database_permissions catalog view will indicate that the grantor_principal_id of the deny statement
was 15 (Raul) even though the statement was actually executed by user 12 (Mary).
The use of AS in this statement does not imply the ability to impersonate another user.
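A query along these lines shows what was recorded; Steven is the grantee from the example above:

-- Inspect the recorded denier (grantor_principal_id) for permissions
-- denied to Steven.
SELECT permission_name, state_desc, grantor_principal_id
FROM sys.database_permissions
WHERE grantee_principal_id = USER_ID('Steven');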
Remarks
The full syntax of the DENY statement is complex. The syntax diagram above was simplified to draw attention to
its structure. Complete syntax for denying permissions on specific securables is described in the topics listed
below.
DENY will fail if CASCADE is not specified when denying a permission to a principal that was granted that
permission with GRANT OPTION specified.
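A minimal sketch of this rule; the table dbo.Orders and user Maria are hypothetical:

-- SELECT was granted to Maria WITH GRANT OPTION, so Maria may have
-- re-granted it to others; a later DENY to Maria must specify CASCADE.
GRANT SELECT ON OBJECT::dbo.Orders TO Maria WITH GRANT OPTION;
DENY SELECT ON OBJECT::dbo.Orders TO Maria CASCADE;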
The sp_helprotect system stored procedure reports permissions on a database-level securable.
Caution

A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the permissions
hierarchy has been preserved for the sake of backward compatibility. It will be removed in a future release.
Caution

Denying CONTROL permission on a database implicitly denies CONNECT permission on the database. A
principal that is denied CONTROL permission on a database will not be able to connect to that database.
Caution

Denying CONTROL SERVER permission implicitly denies CONNECT SQL permission on the server. A principal
that is denied CONTROL SERVER permission on a server will not be able to connect to that server.

Permissions
The caller (or the principal specified with the AS option) must have either CONTROL permission on the
securable, or a higher permission that implies CONTROL permission on the securable. If using the AS option,
the specified principal must own the securable on which a permission is being denied.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can deny any
permission on any securable in the server. Grantees of CONTROL permission on the database, such as members
of the db_owner fixed database role, can deny any permission on any securable in the database. Grantees of
CONTROL permission on a schema can deny any permission on any object in the schema. If the AS clause is
used, the specified principal must own the securable on which permissions are being denied.

Examples
The following table lists the securables and the topics that describe the securable-specific syntax.

Application Role DENY Database Principal Permissions (Transact-SQL)

Assembly DENY Assembly Permissions (Transact-SQL)

Asymmetric Key DENY Asymmetric Key Permissions (Transact-SQL)

Availability Group DENY Availability Group Permissions (Transact-SQL)

Certificate DENY Certificate Permissions (Transact-SQL)

Contract DENY Service Broker Permissions (Transact-SQL)

Database DENY Database Permissions (Transact-SQL)

Database Scoped Credential DENY Database Scoped Credential (Transact-SQL)


Endpoint DENY Endpoint Permissions (Transact-SQL)

Full-Text Catalog DENY Full-Text Permissions (Transact-SQL)

Full-Text Stoplist DENY Full-Text Permissions (Transact-SQL)

Function DENY Object Permissions (Transact-SQL)

Login DENY Server Principal Permissions (Transact-SQL)

Message Type DENY Service Broker Permissions (Transact-SQL)

Object DENY Object Permissions (Transact-SQL)

Queue DENY Object Permissions (Transact-SQL)

Remote Service Binding DENY Service Broker Permissions (Transact-SQL)

Role DENY Database Principal Permissions (Transact-SQL)

Route DENY Service Broker Permissions (Transact-SQL)

Schema DENY Schema Permissions (Transact-SQL)

Search Property List DENY Search Property List Permissions (Transact-SQL)

Server DENY Server Permissions (Transact-SQL)

Service DENY Service Broker Permissions (Transact-SQL)

Stored Procedure DENY Object Permissions (Transact-SQL)

Symmetric Key DENY Symmetric Key Permissions (Transact-SQL)

Synonym DENY Object Permissions (Transact-SQL)

System Objects DENY System Object Permissions (Transact-SQL)

Table DENY Object Permissions (Transact-SQL)

Type DENY Type Permissions (Transact-SQL)

User DENY Database Principal Permissions (Transact-SQL)

View DENY Object Permissions (Transact-SQL)

XML Schema Collection DENY XML Schema Collection Permissions (Transact-SQL)

See Also
REVOKE (Transact-SQL)
sp_addlogin (Transact-SQL)
sp_adduser (Transact-SQL)
sp_changedbowner (Transact-SQL)
sp_dropuser (Transact-SQL)
sp_helprotect (Transact-SQL)
sp_helpuser (Transact-SQL)
DENY Assembly Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an assembly.
Transact-SQL Syntax Conventions

Syntax
DENY { permission [ ,...n ] } ON ASSEMBLY :: assembly_name
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]

Arguments
permission
Specifies a permission that can be denied on an assembly. Listed below.
ON ASSEMBLY ::assembly_name
Specifies the assembly on which the permission is being denied. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
An assembly is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be denied on an assembly are listed below, together with the
more general permissions that include them by implication.

ASSEMBLY PERMISSION | IMPLIED BY ASSEMBLY PERMISSION | IMPLIED BY DATABASE PERMISSION

CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ASSEMBLY
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION
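For illustration, a minimal hedged example; the assembly HRAssembly and user KhalidR are hypothetical:

-- Hypothetical names: HRAssembly (assembly), KhalidR (database user).
DENY ALTER ON ASSEMBLY::HRAssembly TO KhalidR;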

Permissions
Requires CONTROL permission on the assembly. If using the AS option, the specified principal must own the
assembly.

See Also
DENY (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ASSEMBLY (Transact-SQL)
Encryption Hierarchy
DENY Asymmetric Key Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an asymmetric key.
Transact-SQL Syntax Conventions

Syntax
DENY { permission [ ,...n ] }
ON ASYMMETRIC KEY :: asymmetric_key_name
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]

Arguments
permission
Specifies a permission that can be denied on an asymmetric key. Listed below.
ON ASYMMETRIC KEY ::asymmetric_key_name
Specifies the asymmetric key on which the permission is being denied. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
An asymmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on an asymmetric key are listed below,
together with the more general permissions that include them by implication.

ASYMMETRIC KEY PERMISSION | IMPLIED BY ASYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION

CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ASYMMETRIC KEY
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION
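For illustration, a minimal hedged example; the key PacificSales09 and user ShirleyM are hypothetical:

-- Hypothetical names: PacificSales09 (asymmetric key), ShirleyM (database user).
DENY VIEW DEFINITION ON ASYMMETRIC KEY::PacificSales09 TO ShirleyM;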

Permissions
Requires CONTROL permission on the asymmetric key. If the AS clause is used, the specified principal must own
the asymmetric key.

See Also
DENY (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
Encryption Hierarchy
DENY Availability Group Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an Always On availability group in SQL Server.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ] ON AVAILABILITY GROUP :: availability_group_name
TO < server_principal > [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]

<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey

Arguments
permission
Specifies a permission that can be denied on an availability group. For a list of the permissions, see the Remarks
section later in this topic.
ON AVAILABILITY GROUP ::availability_group_name
Specifies the availability group on which the permission is being denied. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login to which the permission is being denied.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to deny the
permission.
Remarks
Permissions at the server scope can be denied only when the current database is master.
Information about availability groups is visible in the sys.availability_groups (Transact-SQL) catalog view.
Information about server permissions is visible in the sys.server_permissions catalog view, and information about
server principals is visible in the sys.server_principals catalog view.
An availability group is a server-level securable. The most specific and limited permissions that can be denied on
an availability group are listed in the following table, together with the more general permissions that include
them by implication.

AVAILABILITY GROUP PERMISSION | IMPLIED BY AVAILABILITY GROUP PERMISSION | IMPLIED BY SERVER PERMISSION

ALTER | CONTROL | ALTER ANY AVAILABILITY GROUP
CONNECT | CONTROL | CONTROL SERVER
CONTROL | CONTROL | CONTROL SERVER
TAKE OWNERSHIP | CONTROL | CONTROL SERVER
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION

Permissions
Requires CONTROL permission on the availability group or ALTER ANY AVAILABILITY GROUP permission on
the server.

Examples
A. Denying VIEW DEFINITION permission on an availability group
The following example denies VIEW DEFINITION permission on availability group MyAg to SQL Server login
ZArifin.

USE master;
DENY VIEW DEFINITION ON AVAILABILITY GROUP::MyAg TO ZArifin;
GO

B. Denying TAKE OWNERSHIP permission with the CASCADE OPTION
The following example denies TAKE OWNERSHIP permission on availability group MyAg to SQL Server user
PKomosinski with the CASCADE option.

USE master;
DENY TAKE OWNERSHIP ON AVAILABILITY GROUP::MyAg TO PKomosinski
CASCADE;
GO

See Also
REVOKE Availability Group Permissions (Transact-SQL)
GRANT Availability Group Permissions (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
sys.availability_groups (Transact-SQL)
Always On Availability Groups Catalog Views (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
DENY Certificate Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a certificate.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ]
ON CERTIFICATE :: certificate_name
TO principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]

Arguments
permission
Specifies a permission that can be denied on a certificate. Listed below.
ON CERTIFICATE ::certificate_name
Specifies the certificate on which the permission is being denied. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
A certificate is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be denied on a certificate are listed below, together with the
more general permissions that include them by implication.

CERTIFICATE PERMISSION | IMPLIED BY CERTIFICATE PERMISSION | IMPLIED BY DATABASE PERMISSION

CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY CERTIFICATE
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION
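For illustration, a minimal hedged example; the certificate Shipping04 and role BillingStaff are hypothetical:

-- Hypothetical names: Shipping04 (certificate), BillingStaff (database role).
DENY REFERENCES ON CERTIFICATE::Shipping04 TO BillingStaff CASCADE;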

Permissions
Requires CONTROL permission on the certificate. If the AS clause is used, the specified principal must own the
certificate.

See Also
DENY (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
Encryption Hierarchy
DENY Database Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a database in SQL Server.
Transact-SQL Syntax Conventions

Syntax
DENY <permission> [ ,...n ]
TO <database_principal> [ ,...n ] [ CASCADE ]
[ AS <database_principal> ]

<permission> ::=
permission | ALL [ PRIVILEGES ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be denied on a database. For a list of the permissions, see the Remarks section later
in this topic.
ALL
This option does not deny all possible permissions. Denying ALL is equivalent to denying the following
permissions: BACKUP DATABASE, BACKUP LOG, CREATE DATABASE, CREATE DEFAULT, CREATE FUNCTION,
CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and CREATE VIEW.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
CASCADE
Indicates that the permission will also be denied to principals to which the specified principal granted it.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
A database is a securable contained by the server that is its parent in the permissions hierarchy. The most specific
and limited permissions that can be denied on a database are listed in the following table, together with the more
general permissions that include them by implication.

DATABASE PERMISSION | IMPLIED BY DATABASE PERMISSION | IMPLIED BY SERVER PERMISSION

ADMINISTER DATABASE BULK OPERATIONS (Applies to: SQL Database.) | CONTROL | CONTROL SERVER
ALTER | CONTROL | ALTER ANY DATABASE
ALTER ANY APPLICATION ROLE | ALTER | CONTROL SERVER
ALTER ANY ASSEMBLY | ALTER | CONTROL SERVER
ALTER ANY ASYMMETRIC KEY | ALTER | CONTROL SERVER
ALTER ANY CERTIFICATE | ALTER | CONTROL SERVER
ALTER ANY COLUMN ENCRYPTION KEY | ALTER | CONTROL SERVER
ALTER ANY COLUMN MASTER KEY DEFINITION | ALTER | CONTROL SERVER
ALTER ANY CONTRACT | ALTER | CONTROL SERVER
ALTER ANY DATABASE AUDIT | ALTER | ALTER ANY SERVER AUDIT
ALTER ANY DATABASE DDL TRIGGER | ALTER | CONTROL SERVER
ALTER ANY DATABASE EVENT NOTIFICATION | ALTER | ALTER ANY EVENT NOTIFICATION
ALTER ANY DATABASE EVENT SESSION (Applies to: Azure SQL Database.) | ALTER | ALTER ANY EVENT SESSION
ALTER ANY DATABASE SCOPED CONFIGURATION (Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.) | CONTROL | CONTROL SERVER
ALTER ANY DATASPACE | ALTER | CONTROL SERVER
ALTER ANY EXTERNAL DATA SOURCE | ALTER | CONTROL SERVER
ALTER ANY EXTERNAL FILE FORMAT | ALTER | CONTROL SERVER
ALTER ANY EXTERNAL LIBRARY (Applies to: SQL Server 2017 (14.x).) | CONTROL | CONTROL SERVER
ALTER ANY FULLTEXT CATALOG | ALTER | CONTROL SERVER
ALTER ANY MASK | CONTROL | CONTROL SERVER
ALTER ANY MESSAGE TYPE | ALTER | CONTROL SERVER
ALTER ANY REMOTE SERVICE BINDING | ALTER | CONTROL SERVER
ALTER ANY ROLE | ALTER | CONTROL SERVER
ALTER ANY ROUTE | ALTER | CONTROL SERVER
ALTER ANY SECURITY POLICY (Applies to: SQL Server 2016 (13.x) through SQL Server 2017, Azure SQL Database.) | CONTROL | CONTROL SERVER
ALTER ANY SCHEMA | ALTER | CONTROL SERVER
ALTER ANY SERVICE | ALTER | CONTROL SERVER
ALTER ANY SYMMETRIC KEY | ALTER | CONTROL SERVER
ALTER ANY USER | ALTER | CONTROL SERVER
AUTHENTICATE | CONTROL | AUTHENTICATE SERVER
BACKUP DATABASE | CONTROL | CONTROL SERVER
BACKUP LOG | CONTROL | CONTROL SERVER
CHECKPOINT | CONTROL | CONTROL SERVER
CONNECT | CONTROL | CONTROL SERVER
CONNECT REPLICATION | CONTROL | CONTROL SERVER
CONTROL | CONTROL | CONTROL SERVER
CREATE AGGREGATE | ALTER | CONTROL SERVER
CREATE ASSEMBLY | ALTER ANY ASSEMBLY | CONTROL SERVER
CREATE ASYMMETRIC KEY | ALTER ANY ASYMMETRIC KEY | CONTROL SERVER
CREATE CERTIFICATE | ALTER ANY CERTIFICATE | CONTROL SERVER
CREATE CONTRACT | ALTER ANY CONTRACT | CONTROL SERVER
CREATE DATABASE | CONTROL | CREATE ANY DATABASE
CREATE DATABASE DDL EVENT NOTIFICATION | ALTER ANY DATABASE EVENT NOTIFICATION | CREATE DDL EVENT NOTIFICATION
CREATE DEFAULT | ALTER | CONTROL SERVER
CREATE FULLTEXT CATALOG | ALTER ANY FULLTEXT CATALOG | CONTROL SERVER
CREATE FUNCTION | ALTER | CONTROL SERVER
CREATE MESSAGE TYPE | ALTER ANY MESSAGE TYPE | CONTROL SERVER
CREATE PROCEDURE | ALTER | CONTROL SERVER
CREATE QUEUE | ALTER | CONTROL SERVER
CREATE REMOTE SERVICE BINDING | ALTER ANY REMOTE SERVICE BINDING | CONTROL SERVER
CREATE ROLE | ALTER ANY ROLE | CONTROL SERVER
CREATE ROUTE | ALTER ANY ROUTE | CONTROL SERVER
CREATE RULE | ALTER | CONTROL SERVER
CREATE SCHEMA | ALTER ANY SCHEMA | CONTROL SERVER
CREATE SERVICE | ALTER ANY SERVICE | CONTROL SERVER
CREATE SYMMETRIC KEY | ALTER ANY SYMMETRIC KEY | CONTROL SERVER
CREATE SYNONYM | ALTER | CONTROL SERVER
CREATE TABLE | ALTER | CONTROL SERVER
CREATE TYPE | ALTER | CONTROL SERVER
CREATE VIEW | ALTER | CONTROL SERVER
CREATE XML SCHEMA COLLECTION | ALTER | CONTROL SERVER
DELETE | CONTROL | CONTROL SERVER
EXECUTE | CONTROL | CONTROL SERVER
EXECUTE ANY EXTERNAL SCRIPT (Applies to: SQL Server 2016 (13.x).) | CONTROL | CONTROL SERVER
INSERT | CONTROL | CONTROL SERVER
KILL DATABASE CONNECTION (Applies to: Azure SQL Database.) | CONTROL | ALTER ANY CONNECTION
REFERENCES | CONTROL | CONTROL SERVER
SELECT | CONTROL | CONTROL SERVER
SHOWPLAN | CONTROL | ALTER TRACE
SUBSCRIBE QUERY NOTIFICATIONS | CONTROL | CONTROL SERVER
TAKE OWNERSHIP | CONTROL | CONTROL SERVER
UNMASK | CONTROL | CONTROL SERVER
UPDATE | CONTROL | CONTROL SERVER
VIEW ANY COLUMN ENCRYPTION KEY | CONTROL | VIEW ANY DEFINITION
VIEW ANY MASTER KEY DEFINITION | CONTROL | VIEW ANY DEFINITION
VIEW DATABASE STATE | CONTROL | VIEW SERVER STATE
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION

Permissions
The principal that executes this statement (or the principal specified with the AS option) must have CONTROL
permission on the database or a higher permission that implies CONTROL permission on the database.
If you are using the AS option, the specified principal must own the database.

Examples
A. Denying permission to create certificates
The following example denies CREATE CERTIFICATE permission on the AdventureWorks2012 database to user
MelanieK.
USE AdventureWorks2012;
DENY CREATE CERTIFICATE TO MelanieK;
GO

B. Denying REFERENCES permission to an application role
The following example denies REFERENCES permission on the AdventureWorks2012 database to application role
AuditMonitor.

Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.

USE AdventureWorks2012;
DENY REFERENCES TO AuditMonitor;
GO

C. Denying VIEW DEFINITION with CASCADE
The following example denies VIEW DEFINITION permission on the AdventureWorks2012 database to user
CarmineEs and to all principals to which CarmineEs has granted VIEW DEFINITION permission.

USE AdventureWorks2012;
DENY VIEW DEFINITION TO CarmineEs CASCADE;
GO

See Also
sys.database_permissions (Transact-SQL)
sys.database_principals (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
DENY Database Principal Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions granted on a database user, database role, or application role in SQL Server.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ]
ON
{ [ USER :: database_user ]
| [ ROLE :: database_role ]
| [ APPLICATION ROLE :: application_role ]
}
TO <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be denied on the database principal. For a list of the permissions, see the Remarks
section later in this topic.
USER ::database_user
Specifies the class and name of the user on which the permission is being denied. The scope qualifier (::) is
required.
ROLE ::database_role
Specifies the class and name of the role on which the permission is being denied. The scope qualifier (::) is
required.
APPLICATION ROLE ::application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies the class and name of the application role on which the permission is being denied. The scope qualifier
(::) is required.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Database User Permissions
A database user is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a database user are listed in the
following table, together with the more general permissions that include them by implication.

DATABASE USER PERMISSION | IMPLIED BY DATABASE USER PERMISSION | IMPLIED BY DATABASE PERMISSION

CONTROL | CONTROL | CONTROL
IMPERSONATE | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY USER
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Database Role Permissions
A database role is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a database role are listed in the
following table, together with the more general permissions that include them by implication.

DATABASE ROLE PERMISSION | IMPLIED BY DATABASE ROLE PERMISSION | IMPLIED BY DATABASE PERMISSION

CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ROLE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Application Role Permissions
An application role is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on an application role are listed in the
following table, together with the more general permissions that include them by implication.

APPLICATION ROLE PERMISSION | IMPLIED BY APPLICATION ROLE PERMISSION | IMPLIED BY DATABASE PERMISSION

CONTROL | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY APPLICATION ROLE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the specified principal, or a higher permission that implies CONTROL
permission.
Grantees of CONTROL permission on a database, such as members of the db_owner fixed database role, can
deny any permission on any securable in the database.

Examples
A. Denying CONTROL permission on a user to another user
The following example denies CONTROL permission on the AdventureWorks2012 user Wanida to user RolandX .

USE AdventureWorks2012;
DENY CONTROL ON USER::Wanida TO RolandX;
GO

B. Denying VIEW DEFINITION permission on a role to a user to which it was granted with GRANT OPTION
The following example denies VIEW DEFINITION permission on the AdventureWorks2012 role SammamishParking
to database user JinghaoLiu . The CASCADE option is specified because user JinghaoLiu was granted VIEW
DEFINITION permission WITH GRANT OPTION.

USE AdventureWorks2012;
DENY VIEW DEFINITION ON ROLE::SammamishParking
TO JinghaoLiu CASCADE;
GO
C. Denying IMPERSONATE permission on a user to an application role
The following example denies IMPERSONATE permission on user HamithaL to the AdventureWorks2012
application role AccountsPayable17.
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.

USE AdventureWorks2012;
DENY IMPERSONATE ON USER::HamithaL TO AccountsPayable17;
GO

See Also
GRANT Database Principal Permissions (Transact-SQL)
REVOKE Database Principal Permissions (Transact-SQL)
sys.database_principals (Transact-SQL)
sys.database_permissions (Transact-SQL)
CREATE USER (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ROLE (Transact-SQL)
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
DENY Database Scoped Credential (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a database scoped credential.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ]
ON DATABASE SCOPED CREDENTIAL :: credential_name
TO principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]

Arguments
permission
Specifies a permission that can be denied on a database scoped credential. Listed below.
ON DATABASE SCOPED CREDENTIAL ::credential_name
Specifies the database scoped credential on which the permission is being denied. The scope qualifier "::" is
required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
A database scoped credential is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be denied on a database scoped
credential are listed below, together with the more general permissions that include them by implication.

DATABASE SCOPED CREDENTIAL PERMISSION | IMPLIED BY DATABASE SCOPED CREDENTIAL PERMISSION | IMPLIED BY DATABASE PERMISSION

CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION
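For illustration, a minimal hedged example; the credential SalesLoader and user EtlUser are hypothetical:

-- Hypothetical names: SalesLoader (database scoped credential), EtlUser (database user).
DENY REFERENCES ON DATABASE SCOPED CREDENTIAL::SalesLoader TO EtlUser;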

Permissions
Requires CONTROL permission on the database scoped credential. If the AS clause is used, the specified principal
must own the database scoped credential.

See Also
DENY (Transact-SQL)
GRANT database scoped credential (Transact-SQL)
REVOKE database scoped credential (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
DENY Endpoint Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an endpoint.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ] ON ENDPOINT :: endpoint_name
TO < server_principal > [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]

<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey

Arguments
permission
Specifies a permission that can be denied on an endpoint. For a list of the permissions, see the Remarks section
later in this topic.
ON ENDPOINT ::endpoint_name
Specifies the endpoint on which the permission is being denied. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login to which the permission is being denied.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to deny the
permission.
Remarks
Permissions at the server scope can be denied only when the current database is master.
Information about endpoints is visible in the sys.endpoints catalog view. Information about server permissions is
visible in the sys.server_permissions catalog view, and information about server principals is visible in the
sys.server_principals catalog view.
An endpoint is a server-level securable. The most specific and limited permissions that can be denied on an
endpoint are listed in the following table, together with the more general permissions that include them by
implication.

ENDPOINT PERMISSION | IMPLIED BY ENDPOINT PERMISSION | IMPLIED BY SERVER PERMISSION
ALTER | CONTROL | ALTER ANY ENDPOINT
CONNECT | CONTROL | CONTROL SERVER
CONTROL | CONTROL | CONTROL SERVER
TAKE OWNERSHIP | CONTROL | CONTROL SERVER
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION

Permissions
Requires CONTROL permission on the endpoint or ALTER ANY ENDPOINT permission on the server.

Examples
A. Denying VIEW DEFINITION permission on an endpoint
The following example denies VIEW DEFINITION permission on the endpoint Mirror7 to the SQL Server login
ZArifin.

USE master;
DENY VIEW DEFINITION ON ENDPOINT::Mirror7 TO ZArifin;
GO

B. Denying TAKE OWNERSHIP permission with CASCADE option
The following example denies TAKE OWNERSHIP permission on the endpoint Shipping83 to the SQL Server login
PKomosinski and to principals to which PKomosinski granted TAKE OWNERSHIP.

USE master;
DENY TAKE OWNERSHIP ON ENDPOINT::Shipping83 TO PKomosinski
CASCADE;
GO

See Also
GRANT Endpoint Permissions (Transact-SQL)
REVOKE Endpoint Permissions (Transact-SQL)
CREATE ENDPOINT (Transact-SQL)
Endpoints Catalog Views (Transact-SQL)
sys.endpoints (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
DENY Full-Text Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a full-text catalog and full-text stoplists.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ] ON
FULLTEXT
{
CATALOG :: full-text_catalog_name
|
STOPLIST :: full-text_stoplist_name
}
TO database_principal [ ,...n ] [ CASCADE ]
[ AS denying_principal ]

Arguments
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON FULLTEXT CATALOG ::full-text_catalog_name
Specifies the full-text catalog on which the permission is being denied. The scope qualifier :: is required.
ON FULLTEXT STOPLIST ::full-text_stoplist_name
Specifies the full-text stoplist on which the permission is being denied. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

FULLTEXT CATALOG Permissions


A full-text catalog is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a full-text catalog are listed in the
following table, together with the more general permissions that include them by implication.

FULL-TEXT CATALOG PERMISSION | IMPLIED BY FULL-TEXT CATALOG PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY FULLTEXT CATALOG
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

FULLTEXT STOPLIST Permissions


A full-text stoplist is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a full-text stoplist are listed in the
following table, together with the more general permissions that include them by implication.

FULL-TEXT STOPLIST PERMISSION | IMPLIED BY FULL-TEXT STOPLIST PERMISSION | IMPLIED BY DATABASE PERMISSION
ALTER | CONTROL | ALTER ANY FULLTEXT CATALOG
CONTROL | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the full-text catalog. If using the AS option, the specified principal must own
the full-text catalog.
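
Examples
The following example is a minimal sketch. It assumes a full-text catalog named ftCatalog and a database user
named PatW already exist in the current database; both names are hypothetical.

DENY ALTER ON FULLTEXT CATALOG::ftCatalog TO PatW;
GO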

See Also
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE FULLTEXT CATALOG (Transact-SQL)
CREATE FULLTEXT STOPLIST (Transact-SQL)
DENY (Transact-SQL)
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL)
GRANT Full-Text Permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
sys.fn_builtin_permissions (Transact-SQL)
sys.fulltext_catalogs (Transact-SQL)
sys.fulltext_stoplists (Transact-SQL)
DENY Object Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a member of the OBJECT class of securables. These are the members of the OBJECT
class: tables, views, table-valued functions, stored procedures, extended stored procedures, scalar functions,
aggregate functions, service queues, and synonyms.
Transact-SQL Syntax Conventions

Syntax
DENY <permission> [ ,...n ] ON
[ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]
TO <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<permission> ::=
ALL [ PRIVILEGES ] | permission [ ( column [ ,...n ] ) ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be denied on a schema-contained object. For a list of the permissions, see the
Remarks section later in this topic.
ALL
Denying ALL does not deny all possible permissions. Denying ALL is equivalent to denying all ANSI-92
permissions applicable to the specified object. The meaning of ALL varies as follows:
Scalar function permissions: EXECUTE, REFERENCES.
Table-valued function permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
Stored Procedure permissions: EXECUTE.
Table permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
View permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
PRIVILEGES
Included for ANSI-92 compliance. Does not change the behavior of ALL.
column
Specifies the name of a column in a table, view, or table-valued function on which the permission is being denied.
The parentheses ( ) are required. Only SELECT, REFERENCES, and UPDATE permissions can be denied on a
column. column can be specified in the permissions clause or after the securable name.
Caution

A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the permissions
hierarchy has been preserved for backward compatibility. Example D, later in this topic, demonstrates this behavior.
ON [ OBJECT :: ] [ schema_name ] . object_name
Specifies the object on which the permission is being denied. The OBJECT phrase is optional if schema_name is
specified. If the OBJECT phrase is used, the scope qualifier (::) is required. If schema_name is not specified, the
default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
TO <database_principal>
Specifies the principal to which the permission is being denied.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Information about objects is visible in various catalog views. For more information, see Object Catalog Views
(Transact-SQL).
An object is a schema-level securable contained by the schema that is its parent in the permissions hierarchy. The
most specific and limited permissions that can be denied on an object are listed in the following table, together
with the more general permissions that include them by implication.

OBJECT PERMISSION | IMPLIED BY OBJECT PERMISSION | IMPLIED BY SCHEMA PERMISSION
ALTER | CONTROL | ALTER
CONTROL | CONTROL | CONTROL
DELETE | CONTROL | DELETE
EXECUTE | CONTROL | EXECUTE
INSERT | CONTROL | INSERT
RECEIVE | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
SELECT | RECEIVE | SELECT
TAKE OWNERSHIP | CONTROL | CONTROL
UPDATE | CONTROL | UPDATE
VIEW CHANGE TRACKING | CONTROL | VIEW CHANGE TRACKING
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the object.
If you use the AS clause, the specified principal must own the object on which permissions are being denied.

Examples
The following examples use the AdventureWorks2012 database.
A. Denying SELECT permission on a table
The following example denies SELECT permission to the user RosaQdM on the table Person.Address.

DENY SELECT ON OBJECT::Person.Address TO RosaQdM;
GO

B. Denying EXECUTE permission on a stored procedure
The following example denies EXECUTE permission on the stored procedure
HumanResources.uspUpdateEmployeeHireInfo to an application role called Recruiting11.

DENY EXECUTE ON OBJECT::HumanResources.uspUpdateEmployeeHireInfo
    TO Recruiting11;
GO

C. Denying REFERENCES permission on a view with CASCADE
The following example denies REFERENCES permission on the column BusinessEntityID in the view
HumanResources.vEmployee to the user Wanida with CASCADE.

DENY REFERENCES (BusinessEntityID) ON OBJECT::HumanResources.vEmployee
    TO Wanida CASCADE;
GO
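
D. Column-level GRANT overriding a table-level DENY
The following minimal sketch illustrates the caution earlier in this topic: a table-level DENY does not take
precedence over a column-level GRANT. The user name TempWorker is hypothetical; the table is from
AdventureWorks2012.

GRANT SELECT (BusinessEntityID) ON OBJECT::HumanResources.Employee TO TempWorker;
DENY SELECT ON OBJECT::HumanResources.Employee TO TempWorker;
-- TempWorker can still select the explicitly granted column:
-- SELECT BusinessEntityID FROM HumanResources.Employee;
GO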

See Also
GRANT Object Permissions (Transact-SQL)
REVOKE Object Permissions (Transact-SQL)
Object Catalog Views (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Securables
sys.fn_builtin_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
DENY Schema Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a schema.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ] ON SCHEMA :: schema_name
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]

Arguments
permission
Specifies a permission that can be denied on a schema. For a list of these permissions, see the Remarks section
later in this topic.
ON SCHEMA :: schema_name
Specifies the schema on which the permission is being denied. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being denied. database_principal can be one of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
denying_principal can be one of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal

Remarks
A schema is a database-level securable that is contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a schema are listed in the following
table, together with the more general permissions that include them by implication.

SCHEMA PERMISSION | IMPLIED BY SCHEMA PERMISSION | IMPLIED BY DATABASE PERMISSION
ALTER | CONTROL | ALTER ANY SCHEMA
CONTROL | CONTROL | CONTROL
CREATE SEQUENCE | ALTER | ALTER ANY SCHEMA
DELETE | CONTROL | DELETE
EXECUTE | CONTROL | EXECUTE
INSERT | CONTROL | INSERT
REFERENCES | CONTROL | REFERENCES
SELECT | CONTROL | SELECT
TAKE OWNERSHIP | CONTROL | CONTROL
UPDATE | CONTROL | UPDATE
VIEW CHANGE TRACKING | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the schema. If you are using the AS option, the specified principal must own
the schema.
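
Examples
The following example is a minimal sketch. It denies SELECT permission on the AdventureWorks2012 Sales
schema to a database user named ContractorX; the user name is hypothetical.

USE AdventureWorks2012;
DENY SELECT ON SCHEMA::Sales TO ContractorX;
GO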

See Also
CREATE SCHEMA (Transact-SQL)
DENY (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
sys.fn_builtin_permissions (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
DENY Search Property List Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a search property list.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ] ON
SEARCH PROPERTY LIST :: search_property_list_name
TO database_principal [ ,...n ] [ CASCADE ]
[ AS denying_principal ]

Arguments
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON SEARCH PROPERTY LIST ::search_property_list_name
Specifies the search property list on which the permission is being denied. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being denied. The principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission. The
principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
SEARCH PROPERTY LIST Permissions
A search property list is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a search property list are listed in the
following table, together with the more general permissions that include them by implication.

SEARCH PROPERTY LIST PERMISSION | IMPLIED BY SEARCH PROPERTY LIST PERMISSION | IMPLIED BY DATABASE PERMISSION
ALTER | CONTROL | ALTER ANY FULLTEXT CATALOG
CONTROL | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the search property list. If using the AS option, the specified principal must own
the search property list.
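
Examples
The following example is a minimal sketch. It assumes a search property list named DocumentPropertyList and a
database user named PatW already exist in the current database; both names are hypothetical.

DENY ALTER ON SEARCH PROPERTY LIST::DocumentPropertyList TO PatW;
GO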

See Also
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE SEARCH PROPERTY LIST (Transact-SQL)
DENY (Transact-SQL)
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL)
GRANT Search Property List Permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
Principals (Database Engine)
REVOKE Search Property List Permissions (Transact-SQL)
sys.fn_builtin_permissions (Transact-SQL)
sys.registered_search_property_lists (Transact-SQL)
Search Document Properties with Search Property Lists
DENY Server Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a server.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ]
TO <grantee_principal> [ ,...n ]
[ CASCADE ]
[ AS <grantor_principal> ]

<grantee_principal> ::= SQL_Server_login


| SQL_Server_login_mapped_to_Windows_login
| SQL_Server_login_mapped_to_Windows_group
| SQL_Server_login_mapped_to_certificate
| SQL_Server_login_mapped_to_asymmetric_key
| server_role

<grantor_principal> ::= SQL_Server_login


| SQL_Server_login_mapped_to_Windows_login
| SQL_Server_login_mapped_to_Windows_group
| SQL_Server_login_mapped_to_certificate
| SQL_Server_login_mapped_to_asymmetric_key
| server_role

Arguments
permission
Specifies a permission that can be denied on a server. For a list of the permissions, see the Remarks section later in
this topic.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
TO <server_principal>
Specifies the principal to which the permission is denied.
AS <grantor_principal>
Specifies the principal from which the principal executing this query derives its right to deny the permission.
SQL_Server_login
Specifies a SQL Server login.
SQL_Server_login_mapped_to_Windows_login
Specifies a SQL Server login mapped to a Windows login.
SQL_Server_login_mapped_to_Windows_group
Specifies a SQL Server login mapped to a Windows group.
SQL_Server_login_mapped_to_certificate
Specifies a SQL Server login mapped to a certificate.
SQL_Server_login_mapped_to_asymmetric_key
Specifies a SQL Server login mapped to an asymmetric key.
server_role
Specifies a server role.

Remarks
Permissions at the server scope can be denied only when the current database is master.
Information about server permissions can be viewed in the sys.server_permissions catalog view, and information
about server principals can be viewed in the sys.server_principals catalog view. Information about membership of
server roles can be viewed in the sys.server_role_members catalog view.
A server is the highest level of the permissions hierarchy. The most specific and limited permissions that can be
denied on a server are listed in the following table.

SERVER PERMISSION | IMPLIED BY SERVER PERMISSION
ADMINISTER BULK OPERATIONS | CONTROL SERVER
ALTER ANY AVAILABILITY GROUP (Applies to: SQL Server 2012 (11.x) through the current version.) | CONTROL SERVER
ALTER ANY CONNECTION | CONTROL SERVER
ALTER ANY CREDENTIAL | CONTROL SERVER
ALTER ANY DATABASE | CONTROL SERVER
ALTER ANY ENDPOINT | CONTROL SERVER
ALTER ANY EVENT NOTIFICATION | CONTROL SERVER
ALTER ANY EVENT SESSION | CONTROL SERVER
ALTER ANY LINKED SERVER | CONTROL SERVER
ALTER ANY LOGIN | CONTROL SERVER
ALTER ANY SERVER AUDIT | CONTROL SERVER
ALTER ANY SERVER ROLE (Applies to: SQL Server 2012 (11.x) through the current version.) | CONTROL SERVER
ALTER RESOURCES | CONTROL SERVER
ALTER SERVER STATE | CONTROL SERVER
ALTER SETTINGS | CONTROL SERVER
ALTER TRACE | CONTROL SERVER
AUTHENTICATE SERVER | CONTROL SERVER
CONNECT ANY DATABASE (Applies to: SQL Server 2014 (12.x) through the current version.) | CONTROL SERVER
CONNECT SQL | CONTROL SERVER
CONTROL SERVER | CONTROL SERVER
CREATE ANY DATABASE | ALTER ANY DATABASE
CREATE AVAILABILITY GROUP (Applies to: SQL Server 2012 (11.x) through the current version.) | ALTER ANY AVAILABILITY GROUP
CREATE DDL EVENT NOTIFICATION | ALTER ANY EVENT NOTIFICATION
CREATE ENDPOINT | ALTER ANY ENDPOINT
CREATE SERVER ROLE (Applies to: SQL Server 2012 (11.x) through the current version.) | ALTER ANY SERVER ROLE
CREATE TRACE EVENT NOTIFICATION | ALTER ANY EVENT NOTIFICATION
EXTERNAL ACCESS ASSEMBLY | CONTROL SERVER
IMPERSONATE ANY LOGIN (Applies to: SQL Server 2014 (12.x) through the current version.) | CONTROL SERVER
SELECT ALL USER SECURABLES (Applies to: SQL Server 2014 (12.x) through the current version.) | CONTROL SERVER
SHUTDOWN | CONTROL SERVER
UNSAFE ASSEMBLY | CONTROL SERVER
VIEW ANY DATABASE | VIEW ANY DEFINITION
VIEW ANY DEFINITION | CONTROL SERVER
VIEW SERVER STATE | ALTER SERVER STATE

The following three server permissions were added in SQL Server 2014 (12.x).
CONNECT ANY DATABASE Permission
Grant CONNECT ANY DATABASE to a login that must connect to all databases that currently exist and to any
new databases that might be created in future. Does not grant any permission in any database beyond connect.
Combine with SELECT ALL USER SECURABLES or VIEW SERVER STATE to allow an auditing process to
view all data or all database states on the instance of SQL Server.
IMPERSONATE ANY LOGIN Permission
When granted, allows a middle-tier process to impersonate the account of clients connecting to it, as it connects to
databases. When denied, a high privileged login can be blocked from impersonating other logins. For example, a
login with CONTROL SERVER permission can be blocked from impersonating other logins.
SELECT ALL USER SECURABLES Permission
When granted, a login such as an auditor can view data in all databases that the user can connect to. When denied,
prevents access to objects unless they are in the sys schema.
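
For example, the following minimal sketch blocks a high-privileged middle-tier login from impersonating other
logins; the login name WebAppLogin is hypothetical.

USE master;
DENY IMPERSONATE ANY LOGIN TO WebAppLogin;
GO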

Permissions
Requires CONTROL SERVER permission or ownership of the securable. If you use the AS clause, the specified
principal must own the securable on which permissions are being denied.

Examples
A. Denying CONNECT SQL permission to a SQL Server login and principals to which the login has re-granted it
The following example denies CONNECT SQL permission to the SQL Server login Annika and to the principals to
which she has granted the permission.

USE master;
DENY CONNECT SQL TO Annika CASCADE;
GO

B. Denying CREATE ENDPOINT permission to a SQL Server login using the AS option
The following example denies CREATE ENDPOINT permission to the login ArifS. The example uses the AS option to
specify MandarP as the principal from which the executing principal derives the authority to do so.

USE master;
DENY CREATE ENDPOINT TO ArifS AS MandarP;
GO

See Also
GRANT (Transact-SQL)
DENY (Transact-SQL)
DENY Server Permissions (Transact-SQL)
REVOKE Server Permissions (Transact-SQL)
Permissions Hierarchy (Database Engine)
sys.fn_builtin_permissions (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
DENY Server Principal Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions granted on a SQL Server login.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ]
ON
{ [ LOGIN :: SQL_Server_login ]
| [ SERVER ROLE :: server_role ] }
TO <server_principal> [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]

<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
| server_role

Arguments
permission
Specifies a permission that can be denied on a SQL Server login. For a list of the permissions, see the Remarks
section later in this topic.
LOGIN :: SQL_Server_login
Specifies the SQL Server login on which the permission is being denied. The scope qualifier (::) is required.
SERVER ROLE :: server_role
Specifies the server role on which the permission is being denied. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login or server role to which the permission is being denied.
TO SQL_Server_login
Specifies the SQL Server login to which the permission is being denied.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
server_role
Specifies the name of a server role.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to deny the
permission.

Remarks
Permissions at the server scope can be denied only when the current database is master.
Information about server permissions is available in the sys.server_permissions catalog view. Information about
server principals is available in the sys.server_principals catalog view.
The DENY statement fails if CASCADE is not specified when you are denying a permission to a principal that was
granted that permission with GRANT OPTION.
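
For example, in the following minimal sketch (both login names are hypothetical), the first DENY fails because
Login1 holds the permission WITH GRANT OPTION, and the second succeeds:

USE master;
GRANT IMPERSONATE ON LOGIN::Login2 TO Login1 WITH GRANT OPTION;
DENY IMPERSONATE ON LOGIN::Login2 TO Login1;         -- fails: CASCADE is required
DENY IMPERSONATE ON LOGIN::Login2 TO Login1 CASCADE; -- succeeds
GO
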
SQL Server logins and server roles are server-level securables. The most specific and limited permissions that can
be denied on a SQL Server login or server role are listed in the following table, together with the more general
permissions that include them by implication.

SQL SERVER LOGIN OR SERVER ROLE PERMISSION | IMPLIED BY SQL SERVER LOGIN OR SERVER ROLE PERMISSION | IMPLIED BY SERVER PERMISSION
CONTROL | CONTROL | CONTROL SERVER
IMPERSONATE | CONTROL | CONTROL SERVER
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION
ALTER | CONTROL | ALTER ANY LOGIN (for a login) or ALTER ANY SERVER ROLE (for a server role)

Permissions
For logins, requires CONTROL permission on the login or ALTER ANY LOGIN permission on the server.
For server roles, requires CONTROL permission on the server role or ALTER ANY SERVER ROLE permission on
the server.

Examples
A. Denying IMPERSONATE permission on a login
The following example denies IMPERSONATE permission on the SQL Server login WanidaBenshoof to a SQL Server
login created from the Windows user AdvWorks\YoonM.

USE master;
DENY IMPERSONATE ON LOGIN::WanidaBenshoof TO [AdvWorks\YoonM];
GO

B. Denying VIEW DEFINITION permission with CASCADE


The following example denies VIEW DEFINITION permission on the SQL Server login EricKurjan to the SQL Server
login RMeyyappan. The CASCADE option indicates that VIEW DEFINITION permission on EricKurjan will also be
denied to principals to which RMeyyappan granted this permission.

USE master;
DENY VIEW DEFINITION ON LOGIN::EricKurjan TO RMeyyappan
CASCADE;
GO

C. Denying VIEW DEFINITION permission on a server role


The following example denies VIEW DEFINITION on the Sales server role to the Auditors server role.

USE master;
DENY VIEW DEFINITION ON SERVER ROLE::Sales TO Auditors;
GO

See Also
sys.server_principals (Transact-SQL)
sys.server_permissions (Transact-SQL)
GRANT Server Principal Permissions (Transact-SQL)
REVOKE Server Principal Permissions (Transact-SQL)
CREATE LOGIN (Transact-SQL)
Principals (Database Engine)
Permissions (Database Engine)
Security Functions (Transact-SQL)
Security Stored Procedures (Transact-SQL)
DENY Service Broker Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a Service Broker contract, message type, remote service binding, route, or service.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ] ON
{
[ CONTRACT :: contract_name ]
| [ MESSAGE TYPE :: message_type_name ]
| [ REMOTE SERVICE BINDING :: remote_binding_name ]
| [ ROUTE :: route_name ]
| [ SERVICE :: service_name ]
}
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]

Arguments
permission
Specifies a permission that can be denied on a Service Broker securable. For a list of the permissions, see the
Remarks section later in this topic.
CONTRACT ::contract_name
Specifies the contract on which the permission is being denied. The scope qualifier :: is required.
MESSAGE TYPE ::message_type_name
Specifies the message type on which the permission is being denied. The scope qualifier :: is required.
REMOTE SERVICE BINDING ::remote_binding_name
Specifies the remote service binding on which the permission is being denied. The scope qualifier :: is required.
ROUTE ::route_name
Specifies the route on which the permission is being denied. The scope qualifier :: is required.
SERVICE ::service_name
Specifies the service on which the permission is being denied. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission. One of
the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal

Remarks
Service Broker Contracts
A Service Broker contract is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be denied on a Service Broker contract
are listed in the following table, together with the more general permissions that include them by implication.

SERVICE BROKER CONTRACT PERMISSION | IMPLIED BY SERVICE BROKER CONTRACT PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY CONTRACT
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Service Broker Message Types


A Service Broker message type is a database-level securable that is contained by the database that is its parent in
the permissions hierarchy. The most specific and limited permissions that can be denied on a Service Broker
message type are listed in the following table, together with the more general permissions that include them by
implication.

SERVICE BROKER MESSAGE TYPE PERMISSION | IMPLIED BY SERVICE BROKER MESSAGE TYPE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY MESSAGE TYPE
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Service Broker Remote Service Bindings


A Service Broker remote service binding is a database-level securable that is contained by the database that is its
parent in the permissions hierarchy. The most specific and limited permissions that can be denied on a Service
Broker remote service binding are listed in the following table, together with the more general permissions that
include them by implication.

SERVICE BROKER REMOTE SERVICE BINDING PERMISSION | IMPLIED BY SERVICE BROKER REMOTE SERVICE BINDING PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY REMOTE SERVICE BINDING
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Service Broker Routes


A Service Broker route is a database-level securable that is contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be denied on a Service Broker route
are listed in the following table, together with the more general permissions that include them by implication.

SERVICE BROKER ROUTE PERMISSION | IMPLIED BY SERVICE BROKER ROUTE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ROUTE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Service Broker Services


A Service Broker service is a database-level securable that is contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be denied on a Service Broker service
are listed in the following table, together with the more general permissions that include them by implication.
SERVICE BROKER SERVICE PERMISSION | IMPLIED BY SERVICE BROKER SERVICE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
SEND | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY SERVICE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the Service Broker contract, message type, remote service binding, route, or
service. If the AS clause is used, the specified principal must own the securable on which permissions are being
denied.
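
Examples
The following example is a minimal sketch. It assumes a service named //Adventure-Works.com/Expenses and a
database user named TravelUser already exist in the current database; both names are hypothetical.

DENY SEND ON SERVICE::[//Adventure-Works.com/Expenses] TO TravelUser;
GO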

See Also
Principals (Database Engine)
REVOKE Service Broker Permissions (Transact-SQL)
DENY (Transact-SQL)
Permissions (Database Engine)
DENY Symmetric Key Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a symmetric key.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ]
ON SYMMETRIC KEY :: symmetric_key_name
TO <database_principal> [ ,...n ] [ CASCADE ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be denied on a symmetric key. For a list of the permissions, see the Remarks
section later in this topic.
ON SYMMETRIC KEY ::symmetric_key_name
Specifies the symmetric key on which the permission is being denied. The scope qualifier (::) is required.
TO <database_principal>
Specifies the principal to which the permission is being denied.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Information about symmetric keys is visible in the sys.symmetric_keys catalog view.
A symmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a symmetric key are listed in the
following table, together with the more general permissions that include them by implication.

SYMMETRIC KEY PERMISSION | IMPLIED BY SYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
ALTER | CONTROL | ALTER ANY SYMMETRIC KEY
CONTROL | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the symmetric key or ALTER ANY SYMMETRIC KEY permission on the
database. If you use the AS option, the specified principal must own the symmetric key.

Examples
The following example denies ALTER permission on the symmetric key SamInventory42 to the database user
HamidS.

USE AdventureWorks2012;
DENY ALTER ON SYMMETRIC KEY::SamInventory42 TO HamidS;
GO

See Also
sys.symmetric_keys (Transact-SQL)
GRANT Symmetric Key Permissions (Transact-SQL)
REVOKE Symmetric Key Permissions (Transact-SQL)
CREATE SYMMETRIC KEY (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
DENY System Object Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on system objects such as stored procedures, extended stored procedures, functions, and
views.
Transact-SQL Syntax Conventions

Syntax
DENY { SELECT | EXECUTE } ON [ sys.]system_object TO principal

Arguments
[ sys.]
The sys qualifier is required only when you are referring to catalog views and dynamic management views.
system_object
Specifies the object on which permission is being denied.
principal
Specifies the principal to which the permission is being denied.

Remarks
This statement can be used to deny permissions on certain stored procedures, extended stored procedures, table-
valued functions, scalar functions, views, catalog views, compatibility views, INFORMATION_SCHEMA views,
dynamic management views, and system tables that are installed by SQL Server. Each of these system objects
exists as a unique record in the resource database (mssqlsystemresource). The resource database is read-only. A
link to the object is exposed as a record in the sys schema of every database.
Default name resolution resolves unqualified procedure names to the resource database. Therefore, the sys
qualifier is only required when you are specifying catalog views and dynamic management views.
Caution

Denying permissions on system objects will cause applications that depend on them to fail. SQL Server
Management Studio uses catalog views and may not function as expected if you change the default permissions
on catalog views.
Denying permissions on triggers and on columns of system objects is not supported.
Permissions on system objects will be preserved during upgrades of SQL Server.
System objects are visible in the sys.system_objects catalog view. The permissions on system objects are visible in
the sys.database_permissions catalog view in the master database.
The following query returns information about permissions on system objects:

SELECT * FROM master.sys.database_permissions AS dp
    JOIN sys.system_objects AS so
        ON dp.major_id = so.object_id
WHERE dp.class = 1 AND so.parent_object_id = 0;
GO

Permissions
Requires CONTROL SERVER permission.

Examples
The following example denies EXECUTE permission on xp_cmdshell to public.

DENY EXECUTE ON sys.xp_cmdshell TO public;
GO

See Also
Transact-SQL Syntax Conventions (Transact-SQL)
sys.database_permissions (Transact-SQL)
GRANT System Object Permissions (Transact-SQL)
REVOKE System Object Permissions (Transact-SQL)
DENY Type Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a type in SQL Server.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ] ON TYPE :: [ schema_name . ] type_name
TO <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be denied on a type. For a list of the permissions, see the Remarks section later in
this topic.
ON TYPE :: [ schema_name. ] type_name
Specifies the type on which the permission is being denied. The scope qualifier (::) is required. If schema_name is
not specified, the default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
TO <database_principal>
Specifies the principal to which the permission is being denied.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
A type is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.

IMPORTANT
GRANT, DENY, and REVOKE permissions do not apply to system types. User-defined types can be granted permissions. For
more information about user-defined types, see Working with User-Defined Types in SQL Server.

The most specific and limited permissions that can be denied on a type are listed in the following table, together
with the more general permissions that include them by implication.

TYPE PERMISSION | IMPLIED BY TYPE PERMISSION | IMPLIED BY SCHEMA PERMISSION
CONTROL | CONTROL | CONTROL
EXECUTE | CONTROL | EXECUTE
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the type. If you use the AS clause, the specified principal must own the type on
which permissions are being denied.

Examples
The following example denies VIEW DEFINITION permission with CASCADE on the user-defined type PhoneNumber to
the user KhalidR. The type PhoneNumber is located in the schema Telemarketing.
DENY VIEW DEFINITION ON TYPE::Telemarketing.PhoneNumber
TO KhalidR CASCADE;
GO

See Also
GRANT Type Permissions (Transact-SQL)
REVOKE Type Permissions (Transact-SQL)
CREATE TYPE (Transact-SQL)
Principals (Database Engine)
Permissions (Database Engine)
Securables
DENY XML Schema Collection Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an XML schema collection.
Transact-SQL Syntax Conventions

Syntax
DENY permission [ ,...n ] ON
XML SCHEMA COLLECTION :: [ schema_name . ]
XML_schema_collection_name
TO <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be denied on an XML schema collection. For a list of the permissions, see the
Remarks section later in this topic.
ON XML SCHEMA COLLECTION :: [ schema_name. ] XML_schema_collection_name
Specifies the XML schema collection on which the permission is being denied. The scope qualifier (::) is required. If
schema_name is not specified, the default schema is used. If schema_name is specified, the schema scope qualifier
(.) is required.
TO <database_principal>
Specifies the principal to which the permission is being denied.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Information about XML schema collections is visible in the sys.xml_schema_collections catalog view.
An XML schema collection is a schema-level securable contained by the schema that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be denied on an XML schema collection
are listed in the following table, together with the more general permissions that include them by implication.

XML SCHEMA COLLECTION PERMISSION | IMPLIED BY XML SCHEMA COLLECTION PERMISSION | IMPLIED BY SCHEMA PERMISSION
ALTER | CONTROL | ALTER
CONTROL | CONTROL | CONTROL
EXECUTE | CONTROL | EXECUTE
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL on the XML schema collection. If you use the AS option, the specified principal must own the
XML schema collection.

Examples
The following example denies EXECUTE permission on the XML schema collection Invoices4 to the user Wanida.
The XML schema collection Invoices4 is located inside the Sales schema of the AdventureWorks2012 database.
USE AdventureWorks2012;
DENY EXECUTE ON XML SCHEMA COLLECTION::Sales.Invoices4 TO Wanida;
GO

See Also
GRANT XML Schema Collection Permissions (Transact-SQL)
REVOKE XML Schema Collection Permissions (Transact-SQL)
sys.xml_schema_collections (Transact-SQL)
CREATE XML SCHEMA COLLECTION (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
EXECUTE AS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets the execution context of a session.
By default, a session starts when a user logs in and ends when the user logs off. All operations during a session
are subject to permission checks against that user. When an EXECUTE AS statement is run, the execution context
of the session is switched to the specified login or user name. After the context switch, permissions are checked
against the login and user security tokens for that account instead of the person calling the EXECUTE AS
statement. In essence, the user or login account is impersonated for the duration of the session or module
execution, or until the context switch is explicitly reverted.
Transact-SQL Syntax Conventions

Syntax
{ EXEC | EXECUTE } AS <context_specification>
[;]

<context_specification>::=
{ LOGIN | USER } = 'name'
[ WITH { NO REVERT | COOKIE INTO @varbinary_variable } ]
| CALLER

Arguments
LOGIN
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the execution context to be impersonated is a login. The scope of impersonation is at the server level.

NOTE
This option is not available in a contained database or in SQL Database.

USER
Specifies the context to be impersonated is a user in the current database. The scope of impersonation is restricted
to the current database. A context switch to a database user does not inherit the server-level permissions of that
user.

IMPORTANT
While the context switch to the database user is active, any attempt to access resources outside of the database will cause
the statement to fail. This includes USE database statements, distributed queries, and queries that reference another
database that uses three- or four-part identifiers.

'name'
Is a valid user or login name. name must be a member of the sysadmin fixed server role, or exist as a principal in
sys.database_principals or sys.server_principals, respectively.
name can be specified as a local variable.
name must be a singleton account, and cannot be a group, role, certificate, key, or built-in account, such as NT
AUTHORITY\LocalService, NT AUTHORITY\NetworkService, or NT AUTHORITY\LocalSystem.
For more information, see Specifying a User or Login Name later in this topic.
NO REVERT
Specifies that the context switch cannot be reverted back to the previous context. The NO REVERT option can
only be used at the ad hoc level.
For more information about reverting to the previous context, see REVERT (Transact-SQL).
COOKIE INTO @varbinary_variable
Specifies the execution context can only be reverted back to the previous context if the calling REVERT WITH
COOKIE statement contains the correct @varbinary_variable value. The Database Engine passes the cookie to
@varbinary_variable. The COOKIE INTO option can only be used at the ad hoc level.
@varbinary_variable is varbinary(8000).

NOTE
The cookie OUTPUT parameter is currently documented as varbinary(8000), which is the correct maximum length.
However, the current implementation returns varbinary(100). Applications should reserve varbinary(8000) so that
the application continues to operate correctly if the cookie return size increases in a future release.

CALLER
When used inside a module, specifies the statements inside the module are executed in the context of the caller of
the module.
When used outside a module, the statement has no action.

Remarks
The change in execution context remains in effect until one of the following occurs:
Another EXECUTE AS statement is run.
A REVERT statement is run.
The session is dropped.
The stored procedure or trigger where the command was executed exits.
You can create an execution context stack by calling the EXECUTE AS statement multiple times across multiple
principals. When called, the REVERT statement switches the context to the login or user in the next level up in the
context stack. For a demonstration of this behavior, see Example A.

Specifying a User or Login Name


The user or login name specified in EXECUTE AS <context_specification> must exist as a principal in
sys.database_principals or sys.server_principals, respectively, or the EXECUTE AS statement fails. Additionally,
IMPERSONATE permissions must be granted on the principal. Unless the caller is the database owner, or is a
member of the sysadmin fixed server role, the principal must exist even when the user is accessing the database
or instance of SQL Server through a Windows group membership. For example, assume the following conditions:
CompanyDomain\SQLUsers group has access to the Sales database.
CompanyDomain\SqlUser1 is a member of SQLUsers and, therefore, has implicit access to the Sales
database.
Although CompanyDomain\SqlUser1 has access to the database through membership in the SQLUsers
group, the statement EXECUTE AS USER = 'CompanyDomain\SqlUser1' fails because CompanyDomain\SqlUser1
does not exist as a principal in the database.
If the user is orphaned (the associated login no longer exists), and the user was not created with WITHOUT
LOGIN, EXECUTE AS will fail for the user.

Best Practice
Specify a login or user that has the least privileges required to perform the operations in the session. For example,
do not specify a login name with server-level permissions, if only database-level permissions are required; or do
not specify a database owner account unless those permissions are required.
Caution

The EXECUTE AS statement can succeed as long as the Database Engine can resolve the name. If a domain user
exists, Windows might be able to resolve the user for the Database Engine, even though the Windows user does
not have access to SQL Server. This can lead to a condition where a login with no access to SQL Server appears
to be logged in, though the impersonated login would only have the permissions granted to public or guest.

Using WITH NO REVERT


When the EXECUTE AS statement includes the optional WITH NO REVERT clause, the execution context of a
session cannot be reset using REVERT or by executing another EXECUTE AS statement. The context set by the
statement remains in effect until the session is dropped.
When the WITH NO REVERT COOKIE = @varbinary_variable clause is specified, the SQL Server Database
Engine passes the cookie value to @varbinary_variable. The execution context set by that statement can only be
reverted to the previous context if the calling REVERT WITH COOKIE = @varbinary_variable statement contains
the same @varbinary_variable value.
This option is useful in an environment in which connection pooling is used. Connection pooling is the
maintenance of a group of database connections for reuse by applications on an application server. Because the
value passed to @varbinary_variable is known only to the caller of the EXECUTE AS statement, the caller can
guarantee that the execution context they establish cannot be changed by anyone else.

Determining the Original Login


Use the ORIGINAL_LOGIN function to return the name of the login that connected to the instance of SQL Server.
You can use this function to return the identity of the original login in sessions in which there are many explicit or
implicit context switches.
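
For instance, the following minimal sketch (reusing the login1 login created in Example A later in this topic,
which must exist) shows that ORIGINAL_LOGIN is unaffected by the context switch:

EXECUTE AS LOGIN = 'login1';
SELECT ORIGINAL_LOGIN() AS OriginalLogin, SUSER_NAME() AS CurrentLogin;
REVERT;
GO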

Permissions
To specify EXECUTE AS on a login, the caller must have IMPERSONATE permission on the specified login
name and must not be denied the IMPERSONATE ANY LOGIN permission. To specify EXECUTE AS on a
database user, the caller must have IMPERSONATE permissions on the specified user name. When EXECUTE
AS CALLER is specified, IMPERSONATE permissions are not required.

Examples
A. Using EXECUTE AS and REVERT to switch context
The following example creates a context execution stack using multiple principals. The REVERT statement is then
used to reset the execution context to the previous caller. The REVERT statement is executed multiple times moving
up the stack until the execution context is set to the original caller.

USE AdventureWorks2012;
GO
--Create two temporary principals
CREATE LOGIN login1 WITH PASSWORD = 'J345#$)thb';
CREATE LOGIN login2 WITH PASSWORD = 'Uor80$23b';
GO
CREATE USER user1 FOR LOGIN login1;
CREATE USER user2 FOR LOGIN login2;
GO
--Give IMPERSONATE permissions on user2 to user1
--so that user1 can successfully set the execution context to user2.
GRANT IMPERSONATE ON USER::user2 TO user1;
GO
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
-- Set the execution context to login1.
EXECUTE AS LOGIN = 'login1';
--Verify the execution context is now login1.
SELECT SUSER_NAME(), USER_NAME();
--Login1 sets the execution context to login2.
EXECUTE AS USER = 'user2';
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
-- The execution context stack now has three principals: the originating caller, login1 and login2.
--The following REVERT statements will reset the execution context to the previous context.
REVERT;
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
REVERT;
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME();

--Remove temporary principals.
DROP LOGIN login1;
DROP LOGIN login2;
DROP USER user1;
DROP USER user2;
GO

B. Using the WITH COOKIE clause


The following example sets the execution context of a session to a specified user and specifies the WITH NO
REVERT COOKIE = @varbinary_variable clause. The REVERT statement must specify the value passed to the
@cookie variable in the EXECUTE AS statement to successfully revert the context back to the caller. To run this
example, the login1 login and user1 user created in example A must exist.
DECLARE @cookie varbinary(8000);
EXECUTE AS USER = 'user1' WITH COOKIE INTO @cookie;
-- Store the cookie in a safe location in your application.
-- Verify the context switch.
SELECT SUSER_NAME(), USER_NAME();
--Display the cookie value.
SELECT @cookie;
GO
-- Use the cookie in the REVERT statement.
DECLARE @cookie varbinary(8000);
-- Set the cookie value to the one from the SELECT @cookie statement.
SET @cookie = <value from the SELECT @cookie statement>;
REVERT WITH COOKIE = @cookie;
-- Verify the context switch reverted.
SELECT SUSER_NAME(), USER_NAME();
GO

See Also
REVERT (Transact-SQL)
EXECUTE AS Clause (Transact-SQL)
EXECUTE AS Clause (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
In SQL Server you can define the execution context of the following user-defined modules: functions (except
inline table-valued functions), procedures, queues, and triggers.
By specifying the context in which the module is executed, you can control which user account the Database
Engine uses to validate permissions on objects that are referenced by the module. This provides additional
flexibility and control in managing permissions across the object chain that exists between user-defined modules
and the objects referenced by those modules. Permissions must be granted to users only on the module itself,
without having to grant them explicit permissions on the referenced objects. Only the user that the module is
running as must have permissions on the objects accessed by the module.
Transact-SQL Syntax Conventions

Syntax
-- SQL Server Syntax
Functions (except inline table-valued functions), Stored Procedures, and DML Triggers
{ EXEC | EXECUTE } AS { CALLER | SELF | OWNER | 'user_name' }

DDL Triggers with Database Scope


{ EXEC | EXECUTE } AS { CALLER | SELF | 'user_name' }

DDL Triggers with Server Scope and logon triggers


{ EXEC | EXECUTE } AS { CALLER | SELF | 'login_name' }

Queues
{ EXEC | EXECUTE } AS { SELF | OWNER | 'user_name' }

-- Windows Azure SQL Database Syntax


Functions (except inline table-valued functions), Stored Procedures, and DML Triggers

{ EXEC | EXECUTE } AS { CALLER | SELF | OWNER | 'user_name' }

DDL Triggers with Database Scope

{ EXEC | EXECUTE } AS { CALLER | SELF | 'user_name' }

Arguments
CALLER
Specifies that the statements inside the module are executed in the context of the caller of the module. The user
executing the module must have appropriate permissions not only on the module itself, but also on any database
objects that are referenced by the module.
CALLER is the default for all modules except queues, and is the same as SQL Server 2005 behavior.
CALLER cannot be specified in a CREATE QUEUE or ALTER QUEUE statement.
SELF
EXECUTE AS SELF is equivalent to EXECUTE AS user_name, where the specified user is the person creating or
altering the module. The actual user ID of the person creating or modifying the module is stored in the
execute_as_principal_id column in the sys.sql_modules or sys.service_queues catalog view.
SELF is the default for queues.

NOTE
To change the user ID of the execute_as_principal_id in the sys.service_queues catalog view, you must explicitly specify
the EXECUTE AS setting in the ALTER QUEUE statement.
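For example, a minimal sketch that changes only the activation execution context of a queue (the queue name dbo.ExpenseQueue is hypothetical):

ALTER QUEUE dbo.ExpenseQueue
WITH ACTIVATION (EXECUTE AS OWNER);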

OWNER
Specifies that the statements inside the module execute in the context of the current owner of the module. If the
module does not have a specified owner, the owner of the schema of the module is used. OWNER cannot be
specified for DDL or logon triggers.

IMPORTANT
OWNER must map to a singleton account and cannot be a role or group.

'user_name'
Specifies that the statements inside the module execute in the context of the user specified in user_name. Permissions
for any objects within the module are verified against user_name. user_name cannot be specified for DDL
triggers with server scope or logon triggers. Use login_name instead.
user_name must exist in the current database and must be a singleton account. user_name cannot be a group,
role, certificate, key, or built-in account, such as NT AUTHORITY\LocalService, NT
AUTHORITY\NetworkService, or NT AUTHORITY\LocalSystem.
The user ID of the execution context is stored in metadata and can be viewed in the execute_as_principal_id
column in the sys.sql_modules or sys.assembly_modules catalog view.
'login_name'
Specifies that the statements inside the module execute in the context of the SQL Server login specified in
login_name. Permissions for any objects within the module are verified against login_name. login_name can be
specified only for DDL triggers with server scope or logon triggers.
login_name cannot be a group, role, certificate, key, or built-in account, such as NT AUTHORITY\LocalService,
NT AUTHORITY\NetworkService, or NT AUTHORITY\LocalSystem.

Remarks
How the Database Engine evaluates permissions on the objects that are referenced in the module depends on
the ownership chain that exists between calling objects and referenced objects. In earlier versions of SQL Server,
ownership chaining was the only method available to avoid having to grant the calling user access to all
referenced objects.
Ownership chaining has the following limitations:
Applies only to DML statements: SELECT, INSERT, UPDATE, and DELETE.
The owners of the calling and the called objects must be the same.
Does not apply to dynamic queries inside the module.
Regardless of the execution context that is specified in the module, the following actions always apply:
When the module is executed, the Database Engine first verifies that the user executing the module has
EXECUTE permission on the module.
Ownership chaining rules continue to apply. This means if the owners of the calling and called objects are
the same, no permissions are checked on the underlying objects.
When a user executes a module that has been specified to run in a context other than CALLER, the user's
permission to execute the module is checked, but additional permissions checks on objects that are
accessed by the module are performed against the user account specified in the EXECUTE AS clause. The
user executing the module is, in effect, impersonating the specified user.
The context specified in the EXECUTE AS clause of the module is valid only for the duration of the
module execution. Context reverts to the caller when the module execution is completed.

Specifying a User or Login Name


A database user or server login specified in the EXECUTE AS clause of a module cannot be dropped until the
module has been modified to execute under another context.
The user or login name specified in EXECUTE AS clause must exist as a principal in sys.database_principals or
sys.server_principals, respectively, or else the create or alter module operation fails. Additionally, the user that
creates or alters the module must have IMPERSONATE permissions on the principal.
If the user has implicit access to the database or instance of SQL Server through a Windows group membership,
the user specified in the EXECUTE AS clause is implicitly created when the module is created, provided one of
the following requirements is met:
The specified user or login is a member of the sysadmin fixed server role.
The user that is creating the module has permission to create principals.
When neither of these requirements is met, the create module operation fails.

IMPORTANT
If the SQL Server (MSSQLSERVER) service is running as a local account (local service or local user account), it will not have
privileges to obtain the group memberships of a Windows domain account that is specified in the EXECUTE AS clause. This
will cause the execution of the module to fail.

For example, assume the following conditions:


CompanyDomain\SQLUsers group has access to the Sales database.
CompanyDomain\SqlUser1 is a member of SQLUsers and, therefore, has access to the Sales
database.
The user that is creating or altering the module has permissions to create principals.
When the following CREATE PROCEDURE statement is run, CompanyDomain\SqlUser1 is implicitly created
as a database principal in the Sales database.
USE Sales;
GO
CREATE PROCEDURE dbo.usp_Demo
WITH EXECUTE AS 'CompanyDomain\SqlUser1'
AS
SELECT user_name();
GO

Using EXECUTE AS CALLER Stand-Alone Statement


Use the EXECUTE AS CALLER stand-alone statement inside a module to set the execution context to the caller
of the module.
Assume the following stored procedure is called by SqlUser2.

CREATE PROCEDURE dbo.usp_Demo
WITH EXECUTE AS 'SqlUser1'
AS
SELECT user_name(); -- Shows execution context is set to SqlUser1.
EXECUTE AS CALLER;
SELECT user_name(); -- Shows execution context is set to SqlUser2, the caller of the module.
REVERT;
SELECT user_name(); -- Shows execution context is set to SqlUser1.
GO

Using EXECUTE AS to Define Custom Permission Sets


Specifying an execution context for a module can be very useful when you want to define custom permission
sets. For example, some actions, such as TRUNCATE TABLE, do not have grantable permissions. By
incorporating the TRUNCATE TABLE statement within a module and specifying that module execute as a user
who has permissions to alter the table, you can extend the permissions to truncate the table to the user to whom
you grant EXECUTE permissions on the module.
To view the definition of the module with the specified execution context, use the sys.sql_modules (Transact-SQL) catalog view.
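A minimal sketch of both (the table, procedure, and user names are hypothetical):

CREATE PROCEDURE dbo.usp_TruncateSalesStaging
WITH EXECUTE AS OWNER -- The owner must have ALTER permission on the table.
AS
TRUNCATE TABLE dbo.SalesStaging;
GO
-- The grantee can now truncate the table without holding ALTER on it.
GRANT EXECUTE ON dbo.usp_TruncateSalesStaging TO SalesLoader;
-- Inspect the execution context recorded for the module.
SELECT OBJECT_NAME(object_id) AS module_name, execute_as_principal_id
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.usp_TruncateSalesStaging');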

Best Practice
Specify a login or user that has the least privileges required to perform the operations defined in the module. For
example, do not specify a database owner account unless those permissions are required.

Permissions
To execute a module specified with EXECUTE AS, the caller must have EXECUTE permissions on the module.
To execute a CLR module specified with EXECUTE AS that accesses resources in another database or server, the
target database or server must trust the authenticator of the database from which the module originates (the
source database).
To specify the EXECUTE AS clause when you create or modify a module, you must have IMPERSONATE
permissions on the specified principal and also permissions to create the module. You can always impersonate
yourself. When no execution context is specified or EXECUTE AS CALLER is specified, IMPERSONATE
permissions are not required.
To specify a login_name or user_name that has implicit access to the database through a Windows group
membership, you must have CONTROL permissions on the database.
Examples
The following example creates a stored procedure in the AdventureWorks2012 database and assigns the
execution context to OWNER.

CREATE PROCEDURE HumanResources.uspEmployeesInDepartment
    @DeptValue int
WITH EXECUTE AS OWNER
AS
SET NOCOUNT ON;
SELECT e.BusinessEntityID, c.LastName, c.FirstName, e.JobTitle
FROM Person.Person AS c
INNER JOIN HumanResources.Employee AS e
ON c.BusinessEntityID = e.BusinessEntityID
INNER JOIN HumanResources.EmployeeDepartmentHistory AS edh
ON e.BusinessEntityID = edh.BusinessEntityID
WHERE edh.DepartmentID = @DeptValue
ORDER BY c.LastName, c.FirstName;
GO

-- Execute the stored procedure by specifying department 5.
EXECUTE HumanResources.uspEmployeesInDepartment 5;
GO

See Also
sys.assembly_modules (Transact-SQL)
sys.sql_modules (Transact-SQL)
sys.service_queues (Transact-SQL)
REVERT (Transact-SQL)
EXECUTE AS (Transact-SQL)
GRANT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a securable to a principal. The general concept is to GRANT <some permission> ON
<some object> TO <some user, login, or group>. For a general discussion of permissions, see Permissions
(Database Engine).
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

-- Simplified syntax for GRANT


GRANT { ALL [ PRIVILEGES ] }
| permission [ ( column [ ,...n ] ) ] [ ,...n ]
[ ON [ class :: ] securable ] TO principal [ ,...n ]
[ WITH GRANT OPTION ] [ AS principal ]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

GRANT
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
TO principal [ ,...n ]
[ WITH GRANT OPTION ]
[;]

<permission> ::=
{ see the tables below }

<class_type> ::=
{
LOGIN
| DATABASE
| OBJECT
| ROLE
| SCHEMA
| USER
}

Arguments
ALL
This option is deprecated and maintained only for backward compatibility. It does not grant all possible
permissions. Granting ALL is equivalent to granting the following permissions:
If the securable is a database, ALL means BACKUP DATABASE, BACKUP LOG, CREATE DATABASE,
CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE,
and CREATE VIEW.
If the securable is a scalar function, ALL means EXECUTE and REFERENCES.
If the securable is a table-valued function, ALL means DELETE, INSERT, REFERENCES, SELECT, and
UPDATE.
If the securable is a stored procedure, ALL means EXECUTE.
If the securable is a table, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
If the securable is a view, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the subtopics
listed below.
column
Specifies the name of a column in a table on which permissions are being granted. The parentheses () are
required.
class
Specifies the class of the securable on which the permission is being granted. The scope qualifier :: is
required.
securable
Specifies the securable on which the permission is being granted.
TO principal
Is the name of a principal. The principals to which permissions on a securable can be granted vary, depending
on the securable. See the subtopics listed below for valid combinations.
GRANT OPTION
Indicates that the grantee will also be given the ability to grant the specified permission to other principals.
AS principal
Use the AS principal clause to indicate that the principal recorded as the grantor of the permission should be
a principal other than the person executing the statement. For example, presume that user Mary is
principal_id 12 and user Raul is principal_id 15. Mary executes
GRANT SELECT ON OBJECT::X TO Steven WITH GRANT OPTION AS Raul; Now the sys.database_permissions table
will indicate that the grantor_principal_id was 15 (Raul) even though the statement was actually executed by
user 12 (Mary).
Using the AS clause is typically not recommended unless you need to explicitly define the permission chain.
For more information, see the Summary of the Permission Check Algorithm section of Permissions
(Database Engine).
The use of AS in this statement does not imply the ability to impersonate another user.
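To verify how the grantor was recorded, a hedged sketch (assuming the object X above is in the dbo schema):

SELECT pr.name AS grantor_name, pe.permission_name, pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
    ON pe.grantor_principal_id = pr.principal_id
WHERE pe.class = 1 -- OBJECT_OR_COLUMN
    AND pe.major_id = OBJECT_ID('dbo.X');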

Remarks
The full syntax of the GRANT statement is complex. The syntax diagram above was simplified to draw
attention to its structure. Complete syntax for granting permissions on specific securables is described in the
articles listed below.
The REVOKE statement can be used to remove granted permissions, and the DENY statement can be used to
prevent a principal from gaining a specific permission through a GRANT.
Granting a permission removes DENY or REVOKE of that permission on the specified securable. If the same
permission is denied at a higher scope that contains the securable, the DENY takes precedence. But revoking
the granted permission at a higher scope does not take precedence.
Database-level permissions are granted within the scope of the specified database. If a user needs
permissions to objects in another database, create the user account in the other database, or grant the user
account access to the other database, as well as the current database.
Caution

A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the
permissions hierarchy has been preserved for the sake of backward compatibility. It will be removed in a
future release.
The sp_helprotect system stored procedure reports permissions on a database-level securable.
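For example:

-- Report all permissions granted, denied, or revoked in the current database.
EXEC sp_helprotect;
-- Restrict the report to a single securable (the name TestProc is illustrative).
EXEC sp_helprotect @name = 'TestProc';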

WITH GRANT OPTION


The GRANT … WITH GRANT OPTION specifies that the security principal receiving the permission is
given the ability to grant the specified permission to other security accounts. When the principal that receives
the permission is a role or a Windows group, the AS clause must be used when the object permission needs
to be further granted to users who are not members of the group or role. Because only a user, rather than a
group or role, can execute a GRANT statement, a specific member of the group or role must use the AS
clause to explicitly invoke the role or group membership when granting the permission. The following
example shows how the WITH GRANT OPTION is used when granted to a role or Windows group.

-- Execute the following as a database owner
GRANT EXECUTE ON TestProc TO TesterRole WITH GRANT OPTION;
EXEC sp_addrolemember TesterRole, User1;
-- Execute the following as User1
-- The following fails because User1 cannot grant the permission as User1
GRANT EXECUTE ON TestProc TO User2;
-- The following succeeds because User1 invokes the TesterRole membership
GRANT EXECUTE ON TestProc TO User2 AS TesterRole;

Chart of SQL Server Permissions


For a poster sized chart of all Database Engine permissions in pdf format, see https://aka.ms/sql-permissions-poster.

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with
GRANT OPTION, or a higher permission that implies the permission being granted. If using the AS option,
additional requirements apply. See the securable-specific article for details.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant
any permission on any securable in the server. Grantees of CONTROL permission on a database, such as
members of the db_owner fixed database role, can grant any permission on any securable in the database.
Grantees of CONTROL permission on a schema can grant any permission on any object within the schema.

Examples
The following table lists the securables and the articles that describe the securable-specific syntax.
SECURABLE | ARTICLE
Application Role | GRANT Database Principal Permissions (Transact-SQL)
Assembly | GRANT Assembly Permissions (Transact-SQL)
Asymmetric Key | GRANT Asymmetric Key Permissions (Transact-SQL)
Availability Group | GRANT Availability Group Permissions (Transact-SQL)
Certificate | GRANT Certificate Permissions (Transact-SQL)
Contract | GRANT Service Broker Permissions (Transact-SQL)
Database | GRANT Database Permissions (Transact-SQL)
Database Scoped Credential | GRANT Database Scoped Credential (Transact-SQL)
Endpoint | GRANT Endpoint Permissions (Transact-SQL)
Full-Text Catalog | GRANT Full-Text Permissions (Transact-SQL)
Full-Text Stoplist | GRANT Full-Text Permissions (Transact-SQL)
Function | GRANT Object Permissions (Transact-SQL)
Login | GRANT Server Principal Permissions (Transact-SQL)
Message Type | GRANT Service Broker Permissions (Transact-SQL)
Object | GRANT Object Permissions (Transact-SQL)
Queue | GRANT Object Permissions (Transact-SQL)
Remote Service Binding | GRANT Service Broker Permissions (Transact-SQL)
Role | GRANT Database Principal Permissions (Transact-SQL)
Route | GRANT Service Broker Permissions (Transact-SQL)
Schema | GRANT Schema Permissions (Transact-SQL)
Search Property List | GRANT Search Property List Permissions (Transact-SQL)
Server | GRANT Server Permissions (Transact-SQL)
Service | GRANT Service Broker Permissions (Transact-SQL)
Stored Procedure | GRANT Object Permissions (Transact-SQL)
Symmetric Key | GRANT Symmetric Key Permissions (Transact-SQL)
Synonym | GRANT Object Permissions (Transact-SQL)
System Objects | GRANT System Object Permissions (Transact-SQL)
Table | GRANT Object Permissions (Transact-SQL)
Type | GRANT Type Permissions (Transact-SQL)
User | GRANT Database Principal Permissions (Transact-SQL)
View | GRANT Object Permissions (Transact-SQL)
XML Schema Collection | GRANT XML Schema Collection Permissions (Transact-SQL)

See Also
DENY (Transact-SQL)
REVOKE (Transact-SQL)
sp_addlogin (Transact-SQL)
sp_adduser (Transact-SQL)
sp_changedbowner (Transact-SQL)
sp_dropuser (Transact-SQL)
sp_helprotect (Transact-SQL)
sp_helpuser (Transact-SQL)
GRANT Assembly Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on an assembly.
Transact-SQL Syntax Conventions

Syntax
GRANT { permission [ ,...n ] } ON ASSEMBLY :: assembly_name
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]

Arguments
permission
Specifies a permission that can be granted on an assembly. Listed below.
ON ASSEMBLY ::assembly_name
Specifies the assembly on which the permission is being granted. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
An assembly is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be granted on an assembly are listed below, together with the
more general permissions that include them by implication.

ASSEMBLY PERMISSION | IMPLIED BY ASSEMBLY PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ASSEMBLY
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.

AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Application role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.

Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
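Examples
The following minimal sketch grants VIEW DEFINITION on an assembly with the right to regrant it (the assembly and user names are hypothetical):

GRANT VIEW DEFINITION ON ASSEMBLY::UtilityAssembly TO WebUser
    WITH GRANT OPTION;
GO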

See Also
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
Encryption Hierarchy
GRANT Asymmetric Key Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on an asymmetric key.
Transact-SQL Syntax Conventions

Syntax
GRANT { permission [ ,...n ] }
ON ASYMMETRIC KEY :: asymmetric_key_name
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]

Arguments
permission
Specifies a permission that can be granted on an asymmetric key. Listed below.
ON ASYMMETRIC KEY ::asymmetric_key_name
Specifies the asymmetric key on which the permission is being granted. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
An asymmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on an asymmetric key are listed below,
together with the more general permissions that include them by implication.

ASYMMETRIC KEY PERMISSION | IMPLIED BY ASYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ASYMMETRIC KEY
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.

AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Application role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.

Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
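Examples
The following minimal sketch grants CONTROL on an asymmetric key (the key and user names are hypothetical):

GRANT CONTROL ON ASYMMETRIC KEY::SalesKey09 TO KhalidR;
GO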

See Also
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
GRANT Availability Group Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on an Always On availability group.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ] ON AVAILABILITY GROUP :: availability_group_name
TO < server_principal > [ ,...n ]
[ WITH GRANT OPTION ]
[ AS SQL_Server_login ]

<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey

Arguments
permission
Specifies a permission that can be granted on an availability group. For a list of the permissions, see the Remarks
section later in this topic.
ON AVAILABILITY GROUP ::availability_group_name
Specifies the availability group on which the permission is being granted. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login to which the permission is being granted.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to grant the
permission.
Remarks
Permissions at the server scope can be granted only when the current database is master.
Information about availability groups is visible in the sys.availability_groups (Transact-SQL) catalog view.
Information about server permissions is visible in the sys.server_permissions catalog view, and information about
server principals is visible in the sys.server_principals catalog view.
An availability group is a server-level securable. The most specific and limited permissions that can be granted on
an availability group are listed in the following table, together with the more general permissions that include
them by implication.

AVAILABILITY GROUP PERMISSION | IMPLIED BY AVAILABILITY GROUP PERMISSION | IMPLIED BY SERVER PERMISSION
ALTER | CONTROL | ALTER ANY AVAILABILITY GROUP
CONNECT | CONTROL | CONTROL SERVER
CONTROL | CONTROL | CONTROL SERVER
TAKE OWNERSHIP | CONTROL | CONTROL SERVER
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION

For a chart of all Database Engine permissions, see Database Engine Permission Poster.

Permissions
Requires CONTROL permission on the availability group or ALTER ANY AVAILABILITY GROUP permission on
the server.

Examples
A. Granting VIEW DEFINITION permission on an availability group
The following example grants VIEW DEFINITION permission on availability group MyAg to SQL Server login
ZArifin.

USE master;
GRANT VIEW DEFINITION ON AVAILABILITY GROUP::MyAg TO ZArifin;
GO

B. Granting TAKE OWNERSHIP permission with the GRANT OPTION


The following example grants TAKE OWNERSHIP permission on availability group MyAg to SQL Server user
PKomosinski with the GRANT OPTION.

USE master;
GRANT TAKE OWNERSHIP ON AVAILABILITY GROUP::MyAg TO PKomosinski
WITH GRANT OPTION;
GO

C. Granting CONTROL permission on an availability group


The following example grants CONTROL permission on availability group MyAg to SQL Server user PKomosinski.
CONTROL allows the login complete control of the availability group, even though they are not the owner of the
availability group. To change the ownership, see ALTER AUTHORIZATION (Transact-SQL).

USE master;
GRANT CONTROL ON AVAILABILITY GROUP::MyAg TO PKomosinski;
GO

See Also
REVOKE Availability Group Permissions (Transact-SQL)
DENY Availability Group Permissions (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
sys.availability_groups (Transact-SQL)
AlwaysOn Availability Groups Catalog Views (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
GRANT Certificate Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a certificate in SQL Server.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ]
ON CERTIFICATE :: certificate_name
TO principal [ ,...n ] [ WITH GRANT OPTION ]
[ AS granting_principal ]

Arguments
permission
Specifies a permission that can be granted on a certificate. Listed below.
ON CERTIFICATE ::certificate_name
Specifies the certificate on which the permission is being granted. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
A certificate is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be granted on a certificate are listed below, together with the
more general permissions that include them by implication.

CERTIFICATE PERMISSION | IMPLIED BY CERTIFICATE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY CERTIFICATE
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.

AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Application role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.

Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
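Examples
The following minimal sketch grants VIEW DEFINITION on a certificate (the certificate and user names are hypothetical):

GRANT VIEW DEFINITION ON CERTIFICATE::Shipping04 TO CarmineEs;
GO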

See Also
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
Encryption Hierarchy
GRANT Database Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a database in SQL Server.
Transact-SQL Syntax Conventions

Syntax
GRANT <permission> [ ,...n ]
TO <database_principal> [ ,...n ] [ WITH GRANT OPTION ]
[ AS <database_principal> ]

<permission>::=
permission | ALL [ PRIVILEGES ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be granted on a database. For a list of the permissions, see the Remarks section
later in this topic.
ALL
This option does not grant all possible permissions. Granting ALL is equivalent to granting the following
permissions: BACKUP DATABASE, BACKUP LOG, CREATE DATABASE, CREATE DEFAULT, CREATE
FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and CREATE VIEW.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
IMPORTANT
A combination of ALTER and REFERENCE permissions in some cases could allow the grantee to view data or execute
unauthorized functions. For example: A user with ALTER permission on a table and REFERENCE permission on a function can
create a computed column over a function and have it be executed. In this case, the user must also have SELECT permission
on the computed column.

A database is a securable contained by the server that is its parent in the permissions hierarchy. The most specific
and limited permissions that can be granted on a database are listed in the following table, together with the
more general permissions that include them by implication.

DATABASE PERMISSION | IMPLIED BY DATABASE PERMISSION | IMPLIED BY SERVER PERMISSION
ADMINISTER DATABASE BULK OPERATIONS (Applies to: SQL Database.) | CONTROL | CONTROL SERVER
ALTER | CONTROL | ALTER ANY DATABASE
ALTER ANY APPLICATION ROLE | ALTER | CONTROL SERVER
ALTER ANY ASSEMBLY | ALTER | CONTROL SERVER
ALTER ANY ASYMMETRIC KEY | ALTER | CONTROL SERVER
ALTER ANY CERTIFICATE | ALTER | CONTROL SERVER
ALTER ANY COLUMN ENCRYPTION KEY | ALTER | CONTROL SERVER
ALTER ANY COLUMN MASTER KEY DEFINITION | ALTER | CONTROL SERVER
ALTER ANY CONTRACT | ALTER | CONTROL SERVER
ALTER ANY DATABASE AUDIT | ALTER | ALTER ANY SERVER AUDIT
ALTER ANY DATABASE DDL TRIGGER | ALTER | CONTROL SERVER
ALTER ANY DATABASE EVENT NOTIFICATION | ALTER | ALTER ANY EVENT NOTIFICATION
ALTER ANY DATABASE EVENT SESSION (Applies to: SQL Database.) | ALTER | ALTER ANY EVENT SESSION
ALTER ANY DATABASE SCOPED CONFIGURATION (Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.) | CONTROL | CONTROL SERVER
ALTER ANY DATASPACE | ALTER | CONTROL SERVER
ALTER ANY EXTERNAL DATA SOURCE | ALTER | CONTROL SERVER
ALTER ANY EXTERNAL FILE FORMAT | ALTER | CONTROL SERVER
ALTER ANY EXTERNAL LIBRARY (Applies to: SQL Server 2017 (14.x).) | CONTROL | CONTROL SERVER
ALTER ANY FULLTEXT CATALOG | ALTER | CONTROL SERVER
ALTER ANY MASK | CONTROL | CONTROL SERVER
ALTER ANY MESSAGE TYPE | ALTER | CONTROL SERVER
ALTER ANY REMOTE SERVICE BINDING | ALTER | CONTROL SERVER
ALTER ANY ROLE | ALTER | CONTROL SERVER
ALTER ANY ROUTE | ALTER | CONTROL SERVER
ALTER ANY SCHEMA | ALTER | CONTROL SERVER
ALTER ANY SECURITY POLICY (Applies to: Azure SQL Database.) | CONTROL | CONTROL SERVER
ALTER ANY SERVICE | ALTER | CONTROL SERVER
ALTER ANY SYMMETRIC KEY | ALTER | CONTROL SERVER
ALTER ANY USER | ALTER | CONTROL SERVER
AUTHENTICATE | CONTROL | AUTHENTICATE SERVER
BACKUP DATABASE | CONTROL | CONTROL SERVER
BACKUP LOG | CONTROL | CONTROL SERVER
CHECKPOINT | CONTROL | CONTROL SERVER
CONNECT | CONNECT REPLICATION | CONTROL SERVER
CONNECT REPLICATION | CONTROL | CONTROL SERVER
CONTROL | CONTROL | CONTROL SERVER
CREATE AGGREGATE | ALTER | CONTROL SERVER
CREATE ANY EXTERNAL LIBRARY (Applies to: SQL Server 2017 (14.x).) | CONTROL | CONTROL SERVER
CREATE ASSEMBLY | ALTER ANY ASSEMBLY | CONTROL SERVER
CREATE ASYMMETRIC KEY | ALTER ANY ASYMMETRIC KEY | CONTROL SERVER
CREATE CERTIFICATE | ALTER ANY CERTIFICATE | CONTROL SERVER
CREATE CONTRACT | ALTER ANY CONTRACT | CONTROL SERVER
CREATE DATABASE | CONTROL | CREATE ANY DATABASE
CREATE DATABASE DDL EVENT NOTIFICATION | ALTER ANY DATABASE EVENT NOTIFICATION | CREATE DDL EVENT NOTIFICATION
CREATE DEFAULT | ALTER | CONTROL SERVER
CREATE FULLTEXT CATALOG | ALTER ANY FULLTEXT CATALOG | CONTROL SERVER
CREATE FUNCTION | ALTER | CONTROL SERVER
CREATE MESSAGE TYPE | ALTER ANY MESSAGE TYPE | CONTROL SERVER
CREATE PROCEDURE | ALTER | CONTROL SERVER
CREATE QUEUE | ALTER | CONTROL SERVER
CREATE REMOTE SERVICE BINDING | ALTER ANY REMOTE SERVICE BINDING | CONTROL SERVER
CREATE ROLE | ALTER ANY ROLE | CONTROL SERVER
CREATE ROUTE | ALTER ANY ROUTE | CONTROL SERVER
CREATE RULE | ALTER | CONTROL SERVER
CREATE SCHEMA | ALTER ANY SCHEMA | CONTROL SERVER
CREATE SERVICE | ALTER ANY SERVICE | CONTROL SERVER
CREATE SYMMETRIC KEY | ALTER ANY SYMMETRIC KEY | CONTROL SERVER
CREATE SYNONYM | ALTER | CONTROL SERVER
CREATE TABLE | ALTER | CONTROL SERVER
CREATE TYPE | ALTER | CONTROL SERVER
CREATE VIEW | ALTER | CONTROL SERVER
CREATE XML SCHEMA COLLECTION | ALTER | CONTROL SERVER
DELETE | CONTROL | CONTROL SERVER
EXECUTE | CONTROL | CONTROL SERVER
EXECUTE ANY EXTERNAL SCRIPT (Applies to: SQL Server 2016 (13.x).) | CONTROL | CONTROL SERVER
INSERT | CONTROL | CONTROL SERVER
KILL DATABASE CONNECTION (Applies to: Azure SQL Database.) | CONTROL | ALTER ANY CONNECTION
REFERENCES | CONTROL | CONTROL SERVER
SELECT | CONTROL | CONTROL SERVER
SHOWPLAN | CONTROL | ALTER TRACE
SUBSCRIBE QUERY NOTIFICATIONS | CONTROL | CONTROL SERVER
TAKE OWNERSHIP | CONTROL | CONTROL SERVER
UNMASK | CONTROL | CONTROL SERVER
UPDATE | CONTROL | CONTROL SERVER
VIEW ANY COLUMN ENCRYPTION KEY DEFINITION | CONTROL | VIEW ANY DEFINITION
VIEW ANY COLUMN MASTER KEY DEFINITION | CONTROL | VIEW ANY DEFINITION
VIEW DATABASE STATE | CONTROL | VIEW SERVER STATE
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.

AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Application role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.

Object owners can grant permissions on the objects they own. Principals that have CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server.

Examples
A. Granting permission to create tables
The following example grants CREATE TABLE permission on the AdventureWorks database to user MelanieK.

USE AdventureWorks;
GRANT CREATE TABLE TO MelanieK;
GO

B. Granting SHOWPLAN permission to an application role


The following example grants SHOWPLAN permission on the AdventureWorks2012 database to application role
AuditMonitor.

Applies to: SQL Server 2008 through SQL Server 2017, SQL Database

USE AdventureWorks2012;
GRANT SHOWPLAN TO AuditMonitor;
GO

C. Granting CREATE VIEW with GRANT OPTION


The following example grants CREATE VIEW permission on the AdventureWorks2012 database to user CarmineEs
with the right to grant CREATE VIEW to other principals.

USE AdventureWorks2012;
GRANT CREATE VIEW TO CarmineEs WITH GRANT OPTION;
GO

See Also
sys.database_permissions (Transact-SQL)
sys.database_principals (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
GRANT Database Principal Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a database user, database role, or application role in SQL Server.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ]
ON
{ [ USER :: database_user ]
| [ ROLE :: database_role ]
| [ APPLICATION ROLE :: application_role ]
}
TO <database_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be granted on the database principal. For a list of the permissions, see the
Remarks section later in this topic.
USER ::database_user
Specifies the class and name of the user on which the permission is being granted. The scope qualifier (::) is
required.
ROLE ::database_role
Specifies the class and name of the role on which the permission is being granted. The scope qualifier (::) is
required.
APPLICATION ROLE ::application_role
Specifies the class and name of the application role on which the permission is being granted. The scope qualifier
(::) is required.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Information about database principals is visible in the sys.database_principals catalog view. Information about
database-level permissions is visible in the sys.database_permissions catalog view.

Database User Permissions


A database user is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a database user are listed in the
following table, together with the more general permissions that include them by implication.

DATABASE USER PERMISSION | IMPLIED BY DATABASE USER PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
IMPERSONATE | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY USER
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Database Role Permissions


A database role is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a database role are listed in the
following table, together with the more general permissions that include them by implication.
DATABASE ROLE PERMISSION | IMPLIED BY DATABASE ROLE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ROLE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Application Role Permissions


An application role is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on an application role are listed in the
following table, together with the more general permissions that include them by implication.

APPLICATION ROLE PERMISSION | IMPLIED BY APPLICATION ROLE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY APPLICATION ROLE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.

AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Application role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.

Principals that have CONTROL permission on a securable can grant permission on that securable.
Grantees of CONTROL permission on a database, such as members of the db_owner fixed database role, can
grant any permission on any securable in the database.

Examples
A. Granting CONTROL permission on a user to another user
The following example grants CONTROL permission on AdventureWorks2012 user Wanida to user RolandX.

GRANT CONTROL ON USER::Wanida TO RolandX;
GO

B. Granting VIEW DEFINITION permission on a role to a user with GRANT OPTION


The following example grants VIEW DEFINITION permission on AdventureWorks2012 role SammamishParking
together with GRANT OPTION to database user JinghaoLiu.

GRANT VIEW DEFINITION ON ROLE::SammamishParking
    TO JinghaoLiu WITH GRANT OPTION;
GO

C. Granting IMPERSONATE permission on a user to an application role


The following example grants IMPERSONATE permission on user HamithaL to AdventureWorks2012 application role
AccountsPayable17.

Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.

GRANT IMPERSONATE ON USER::HamithaL TO AccountsPayable17;
GO

See Also
DENY Database Principal Permissions (Transact-SQL)
REVOKE Database Principal Permissions (Transact-SQL)
sys.database_principals (Transact-SQL)
sys.database_permissions (Transact-SQL)
CREATE USER (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ROLE (Transact-SQL)
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
GRANT Database Scoped Credential Permissions (Transact-SQL)
5/3/2018 • 3 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a database scoped credential.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ]
ON DATABASE SCOPED CREDENTIAL :: credential_name
TO principal [ ,...n ] [ WITH GRANT OPTION ]
[ AS granting_principal ]

Arguments
permission
Specifies a permission that can be granted on a database scoped credential. Listed below.
ON DATABASE SCOPED CREDENTIAL ::credential_name
Specifies the database scoped credential on which the permission is being granted. The scope qualifier "::" is
required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
A database scoped credential is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be granted on a database scoped
credential are listed below, together with the more general permissions that include them by implication.

DATABASE SCOPED CREDENTIAL PERMISSION    IMPLIED BY DATABASE SCOPED CREDENTIAL PERMISSION    IMPLIED BY DATABASE PERMISSION

CONTROL                                  CONTROL                                             CONTROL

TAKE OWNERSHIP                           CONTROL                                             CONTROL

ALTER                                    CONTROL                                             CONTROL

REFERENCES                               CONTROL                                             REFERENCES

VIEW DEFINITION                          CONTROL                                             VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.

AS GRANTING_PRINCIPAL                               ADDITIONAL PERMISSION REQUIRED

Database user                                       IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows login             IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows group             Membership in the Windows group, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a certificate               Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user mapped to an asymmetric key           Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user not mapped to any server principal    IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database role                                       ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Application role                                    ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
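
Examples
As a minimal sketch (the credential name SalesAzureCredential and the user WanidaB are hypothetical names
used only for illustration), the following grants CONTROL permission on a database scoped credential:

GRANT CONTROL ON DATABASE SCOPED CREDENTIAL :: SalesAzureCredential TO WanidaB;
GO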

See Also
GRANT (Transact-SQL)
REVOKE Database Scoped Credential (Transact-SQL)
DENY Database Scoped Credential (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
GRANT Endpoint Permissions (Transact-SQL)
5/3/2018 • 2 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on an endpoint.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ] ON ENDPOINT :: endpoint_name
TO < server_principal > [ ,...n ]
[ WITH GRANT OPTION ]
[ AS SQL_Server_login ]

<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey

Arguments
permission
Specifies a permission that can be granted on an endpoint. For a list of the permissions, see the Remarks section
later in this topic.
ON ENDPOINT ::endpoint_name
Specifies the endpoint on which the permission is being granted. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login to which the permission is being granted.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to grant the
permission.
Remarks
Permissions at the server scope can be granted only when the current database is master.
Information about endpoints is visible in the sys.endpoints catalog view. Information about server permissions is
visible in the sys.server_permissions catalog view, and information about server principals is visible in the
sys.server_principals catalog view.
An endpoint is a server-level securable. The most specific and limited permissions that can be granted on an
endpoint are listed in the following table, together with the more general permissions that include them by
implication.

ENDPOINT PERMISSION    IMPLIED BY ENDPOINT PERMISSION    IMPLIED BY SERVER PERMISSION

ALTER                  CONTROL                           ALTER ANY ENDPOINT

CONNECT                CONTROL                           CONTROL SERVER

CONTROL                CONTROL                           CONTROL SERVER

TAKE OWNERSHIP         CONTROL                           CONTROL SERVER

VIEW DEFINITION        CONTROL                           VIEW ANY DEFINITION

Permissions
Requires CONTROL permission on the endpoint or ALTER ANY ENDPOINT permission on the server.

Examples
A. Granting VIEW DEFINITION permission on an endpoint
The following example grants VIEW DEFINITION permission on endpoint Mirror7 to SQL Server login ZArifin.

USE master;
GRANT VIEW DEFINITION ON ENDPOINT::Mirror7 TO ZArifin;
GO

B. Granting TAKE OWNERSHIP permission with the GRANT OPTION


The following example grants TAKE OWNERSHIP permission on endpoint Shipping83 to SQL Server login
PKomosinski with the GRANT OPTION.

USE master;
GRANT TAKE OWNERSHIP ON ENDPOINT::Shipping83 TO PKomosinski
WITH GRANT OPTION;
GO

See Also
DENY Endpoint Permissions (Transact-SQL)
REVOKE Endpoint Permissions (Transact-SQL)
CREATE ENDPOINT (Transact-SQL)
Endpoints Catalog Views (Transact-SQL)
sys.endpoints (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
GRANT Full-Text Permissions (Transact-SQL)
5/3/2018 • 4 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a full-text catalog or full-text stoplist.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ] ON
FULLTEXT
{
CATALOG :: full-text_catalog_name
|
STOPLIST :: full-text_stoplist_name
}
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]

Arguments
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON FULLTEXT CATALOG ::full-text_catalog_name
Specifies the full-text catalog on which the permission is being granted. The scope qualifier :: is required.
ON FULLTEXT STOPLIST ::full-text_stoplist_name
Specifies the full-text stoplist on which the permission is being granted. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
FULLTEXT CATALOG Permissions
A full-text catalog is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a full-text catalog are listed in the
following table, together with the more general permissions that include them by implication.

FULL-TEXT CATALOG PERMISSION    IMPLIED BY FULL-TEXT CATALOG PERMISSION    IMPLIED BY DATABASE PERMISSION

CONTROL                         CONTROL                                    CONTROL

TAKE OWNERSHIP                  CONTROL                                    CONTROL

ALTER                           CONTROL                                    ALTER ANY FULLTEXT CATALOG

REFERENCES                      CONTROL                                    REFERENCES

VIEW DEFINITION                 CONTROL                                    VIEW DEFINITION

FULLTEXT STOPLIST Permissions


A full-text stoplist is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a full-text stoplist are listed in the
following table, together with the more general permissions that include them by implication.

FULL-TEXT STOPLIST PERMISSION    IMPLIED BY FULL-TEXT STOPLIST PERMISSION    IMPLIED BY DATABASE PERMISSION

ALTER                            CONTROL                                     ALTER ANY FULLTEXT CATALOG

CONTROL                          CONTROL                                     CONTROL

REFERENCES                       CONTROL                                     REFERENCES

TAKE OWNERSHIP                   CONTROL                                     CONTROL

VIEW DEFINITION                  CONTROL                                     VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.

AS GRANTING_PRINCIPAL                               ADDITIONAL PERMISSION REQUIRED

Database user                                       IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows login             IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows group             Membership in the Windows group, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a certificate               Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user mapped to an asymmetric key           Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user not mapped to any server principal    IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database role                                       ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Application role                                    ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.

Examples
A. Granting permissions to a full-text catalog
The following example grants Ted the CONTROL permission on the full-text catalog ProductCatalog.

GRANT CONTROL
    ON FULLTEXT CATALOG :: ProductCatalog
    TO Ted;

B. Granting permissions to a stoplist


The following example grants Mary the VIEW DEFINITION permission on the full-text stoplist ProductStoplist.

GRANT VIEW DEFINITION
    ON FULLTEXT STOPLIST :: ProductStoplist
    TO Mary;

See Also
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE FULLTEXT CATALOG (Transact-SQL)
CREATE FULLTEXT STOPLIST (Transact-SQL)
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL)
GRANT (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
sys.fn_builtin_permissions (Transact-SQL)
sys.fulltext_catalogs (Transact-SQL)
sys.fulltext_stoplists (Transact-SQL)
GRANT Object Permissions (Transact-SQL)
5/3/2018 • 5 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a table, view, table-valued function, stored procedure, extended stored procedure, scalar
function, aggregate function, service queue, or synonym.
Transact-SQL Syntax Conventions

Syntax
GRANT <permission> [ ,...n ] ON
[ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]
TO <database_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS <database_principal> ]

<permission> ::=
ALL [ PRIVILEGES ] | permission [ ( column [ ,...n ] ) ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be granted on a schema-contained object. For a list of the permissions, see the
Remarks section later in this topic.
ALL
Granting ALL does not grant all possible permissions. Granting ALL is equivalent to granting all ANSI-92
permissions applicable to the specified object. The meaning of ALL varies as follows:
Scalar function permissions: EXECUTE, REFERENCES.
Table-valued function permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
Stored procedure permissions: EXECUTE.
Table permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
View permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
PRIVILEGES
Included for ANSI-92 compliance. Does not change the behavior of ALL.
column
Specifies the name of a column in a table, view, or table-valued function on which the permission is being
granted. The parentheses ( ) are required. Only SELECT, REFERENCES, and UPDATE permissions can be
granted on a column. column can be specified in the permissions clause or after the securable name.
Caution

A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the permissions
hierarchy has been preserved for backward compatibility.
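For example, in the following sketch (the table dbo.T1, the column C1, and the user UserA are hypothetical
names used only for illustration), UserA can still read C1 through the column-level GRANT even though SELECT
is denied on the table as a whole:

DENY SELECT ON OBJECT::dbo.T1 TO UserA;
GRANT SELECT (C1) ON OBJECT::dbo.T1 TO UserA;
-- UserA can now run SELECT C1 FROM dbo.T1; but not SELECT * FROM dbo.T1;
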
ON [ OBJECT :: ] [ schema_name ] . object_name
Specifies the object on which the permission is being granted. The OBJECT phrase is optional if schema_name is
specified. If the OBJECT phrase is used, the scope qualifier (::) is required. If schema_name is not specified, the
default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
TO <database_principal>
Specifies the principal to which the permission is being granted.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
IMPORTANT
A combination of ALTER and REFERENCE permissions in some cases could allow the grantee to view data or execute
unauthorized functions. For example: A user with ALTER permission on a table and REFERENCE permission on a function
can create a computed column over a function and have it be executed. In this case the user would also need SELECT
permission on the computed column.
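
A rough illustration of that risk (all names are hypothetical; assume the grantee already holds ALTER on
table dbo.Orders and REFERENCES on function dbo.fn_Lookup):

ALTER TABLE dbo.Orders
    ADD LookedUp AS dbo.fn_Lookup(OrderID);  -- computed column that executes the function
-- With SELECT permission on the new column, querying it executes dbo.fn_Lookup:
SELECT LookedUp FROM dbo.Orders;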

Information about objects is visible in various catalog views. For more information, see Object Catalog Views
(Transact-SQL).
An object is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be granted on an object are listed in the following table,
together with the more general permissions that include them by implication.

OBJECT PERMISSION       IMPLIED BY OBJECT PERMISSION    IMPLIED BY SCHEMA PERMISSION

ALTER                   CONTROL                         ALTER

CONTROL                 CONTROL                         CONTROL

DELETE                  CONTROL                         DELETE

EXECUTE                 CONTROL                         EXECUTE

INSERT                  CONTROL                         INSERT

RECEIVE                 CONTROL                         CONTROL

REFERENCES              CONTROL                         REFERENCES

SELECT                  RECEIVE                         SELECT

TAKE OWNERSHIP          CONTROL                         CONTROL

UPDATE                  CONTROL                         UPDATE

VIEW CHANGE TRACKING    CONTROL                         VIEW CHANGE TRACKING

VIEW DEFINITION         CONTROL                         VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.

AS                                                  ADDITIONAL PERMISSION REQUIRED

Database user                                       IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows login             IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows group             Membership in the Windows group, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a certificate               Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user mapped to an asymmetric key           Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user not mapped to any server principal    IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database role                                       ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Application role                                    ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Examples
A. Granting SELECT permission on a table
The following example grants SELECT permission to user RosaQdM on table Person.Address in the
AdventureWorks2012 database.

GRANT SELECT ON OBJECT::Person.Address TO RosaQdM;
GO

B. Granting EXECUTE permission on a stored procedure


The following example grants EXECUTE permission on stored procedure
HumanResources.uspUpdateEmployeeHireInfo to an application role called Recruiting11 .

USE AdventureWorks2012;
GRANT EXECUTE ON OBJECT::HumanResources.uspUpdateEmployeeHireInfo
TO Recruiting11;
GO

C. Granting REFERENCES permission on a view with GRANT OPTION


The following example grants REFERENCES permission on column BusinessEntityID in view
HumanResources.vEmployee to user Wanida with GRANT OPTION.

GRANT REFERENCES (BusinessEntityID) ON OBJECT::HumanResources.vEmployee
    TO Wanida WITH GRANT OPTION;
GO

D. Granting SELECT permission on a table without using the OBJECT phrase


The following example grants SELECT permission to user RosaQdM on table Person.Address in the
AdventureWorks2012 database.
GRANT SELECT ON Person.Address TO RosaQdM;
GO

E. Granting SELECT permission on a table to a domain account


The following example grants SELECT permission to user AdventureWorks2012\RosaQdM on table Person.Address
in the AdventureWorks2012 database.

GRANT SELECT ON Person.Address TO [AdventureWorks2012\RosaQdM];
GO

F. Granting EXECUTE permission on a procedure to a role


The following example creates a role and then grants EXECUTE permission to the role on procedure
uspGetBillOfMaterials in the AdventureWorks2012 database.

CREATE ROLE newrole;
GRANT EXECUTE ON dbo.uspGetBillOfMaterials TO newrole;
GO

See Also
DENY Object Permissions (Transact-SQL)
REVOKE Object Permissions (Transact-SQL)
Object Catalog Views (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Securables
sys.fn_builtin_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
GRANT Schema Permissions (Transact-SQL)
5/3/2018 • 5 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a schema.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ] ON SCHEMA :: schema_name
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]

Arguments
permission
Specifies a permission that can be granted on a schema. For a list of the permissions, see the Remarks section
later in this topic.
ON SCHEMA :: schema_name
Specifies the schema on which the permission is being granted. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
IMPORTANT
A combination of ALTER and REFERENCE permissions in some cases could allow the grantee to view data or execute
unauthorized functions. For example: A user with ALTER permission on a table and REFERENCE permission on a function can
create a computed column over a function and have it be executed. In this case, the user must also have SELECT permission
on the computed column.

A schema is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be granted on a schema are listed below, together with the
more general permissions that include them by implication.

SCHEMA PERMISSION       IMPLIED BY SCHEMA PERMISSION    IMPLIED BY DATABASE PERMISSION

ALTER                   CONTROL                         ALTER ANY SCHEMA

CONTROL                 CONTROL                         CONTROL

CREATE SEQUENCE         ALTER                           ALTER ANY SCHEMA

DELETE                  CONTROL                         DELETE

EXECUTE                 CONTROL                         EXECUTE

INSERT                  CONTROL                         INSERT

REFERENCES              CONTROL                         REFERENCES

SELECT                  CONTROL                         SELECT

TAKE OWNERSHIP          CONTROL                         CONTROL

UPDATE                  CONTROL                         UPDATE

VIEW CHANGE TRACKING    CONTROL                         CONTROL

VIEW DEFINITION         CONTROL                         VIEW DEFINITION

Caution

A user with ALTER permission on a schema can use ownership chaining to access securables in other schemas,
including securables to which that user is explicitly denied access. This is because ownership chaining bypasses
permissions checks on referenced objects when they are owned by the principal that owns the objects that refer to
them. A user with ALTER permission on a schema can create procedures, synonyms, and views that are owned by
the schema's owner. Those objects will have access (via ownership chaining) to information in other schemas
owned by the schema's owner. When possible, you should avoid granting ALTER permission on a schema if the
schema's owner also owns other schemas.
For example, this issue may occur in the following scenarios. These scenarios assume that a user, referred to
as U1, has the ALTER permission on the S1 schema. The U1 user is denied access to a table object, referred to
as T1, in the schema S2. The S1 schema and the S2 schema are owned by the same owner.
The U1 user has the CREATE PROCEDURE permission on the database and the EXECUTE permission on the S1
schema. Therefore, the U1 user can create a stored procedure, and then access the denied object T1 in the stored
procedure.
The U1 user has the CREATE SYNONYM permission on the database and the SELECT permission on the S1
schema. Therefore, the U1 user can create a synonym in the S1 schema for the denied object T1, and then access
the denied object T1 by using the synonym.
The U1 user has the CREATE VIEW permission on the database and the SELECT permission on the S1 schema.
Therefore, the U1 user can create a view in the S1 schema to query data from the denied object T1, and then
access the denied object T1 by using the view.
For more information, see Microsoft Knowledge Base article 914847.
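
As a rough sketch of the synonym scenario (S1, S2, T1, and U1 are the hypothetical names used above):

-- Executed as U1, who holds CREATE SYNONYM on the database and SELECT on schema S1,
-- but is denied SELECT on S2.T1:
CREATE SYNONYM S1.T1_alias FOR S2.T1;
SELECT * FROM S1.T1_alias;  -- succeeds when S1 and S2 have the same owner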

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.

AS GRANTING_PRINCIPAL                               ADDITIONAL PERMISSION REQUIRED

Database user                                       IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows login             IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows group             Membership in the Windows group, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a certificate               Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user mapped to an asymmetric key           Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user not mapped to any server principal    IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database role                                       ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Application role                                    ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.

Examples
A. Granting INSERT permission on schema HumanResources to guest

GRANT INSERT ON SCHEMA :: HumanResources TO guest;

B. Granting SELECT permission on schema Person to database user WilJo

GRANT SELECT ON SCHEMA :: Person TO WilJo WITH GRANT OPTION;

See Also
DENY Schema Permissions (Transact-SQL)
REVOKE Schema Permissions (Transact-SQL)
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
Encryption Hierarchy
sys.fn_builtin_permissions (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
GRANT Search Property List Permissions (Transact-SQL)
5/3/2018 • 4 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a search property list.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ] ON
SEARCH PROPERTY LIST :: search_property_list_name
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]

Arguments
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON SEARCH PROPERTY LIST ::search_property_list_name
Specifies the search property list on which the permission is being granted. The scope qualifier :: is required.
To view the existing search property lists, see sys.registered_search_property_lists (Transact-SQL).
database_principal
Specifies the principal to which the permission is being granted. The principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission.
The principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
SEARCH PROPERTY LIST Permissions
A search property list is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a search property list are listed in the
following table, together with the more general permissions that include them by implication.

SEARCH PROPERTY LIST PERMISSION    IMPLIED BY SEARCH PROPERTY LIST PERMISSION    IMPLIED BY DATABASE PERMISSION

ALTER                              CONTROL                                       ALTER ANY FULLTEXT CATALOG

CONTROL                            CONTROL                                       CONTROL

REFERENCES                         CONTROL                                       REFERENCES

TAKE OWNERSHIP                     CONTROL                                       CONTROL

VIEW DEFINITION                    CONTROL                                       VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, the following additional requirements apply.

AS GRANTING_PRINCIPAL                               ADDITIONAL PERMISSION REQUIRED

Database user                                       IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows login             IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a Windows group             Membership in the Windows group, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database user mapped to a certificate               Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user mapped to an asymmetric key           Membership in the db_securityadmin fixed database role,
                                                    membership in the db_owner fixed database role, or
                                                    membership in the sysadmin fixed server role.

Database user not mapped to any server principal    IMPERSONATE permission on the user, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Database role                                       ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Application role                                    ALTER permission on the role, membership in the
                                                    db_securityadmin fixed database role, membership in the
                                                    db_owner fixed database role, or membership in the
                                                    sysadmin fixed server role.

Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.

Examples
Granting permissions to a search property list
The following example grants Mary the VIEW DEFINITION permission on the search property list
DocumentTablePropertyList.

GRANT VIEW DEFINITION
    ON SEARCH PROPERTY LIST :: DocumentTablePropertyList
    TO Mary;

See Also
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE SEARCH PROPERTY LIST (Transact-SQL)
DENY Search Property List Permissions (Transact-SQL)
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL)
GRANT (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
Principals (Database Engine)
REVOKE Search Property List Permissions (Transact-SQL)
sys.fn_builtin_permissions (Transact-SQL)
sys.registered_search_property_lists (Transact-SQL)
Search Document Properties with Search Property Lists
GRANT Server Permissions (Transact-SQL)
5/3/2018 • 4 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a server.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ]
TO <grantee_principal> [ ,...n ] [ WITH GRANT OPTION ]
[ AS <grantor_principal> ]

<grantee_principal> ::= SQL_Server_login
| SQL_Server_login_mapped_to_Windows_login
| SQL_Server_login_mapped_to_Windows_group
| SQL_Server_login_mapped_to_certificate
| SQL_Server_login_mapped_to_asymmetric_key
| server_role

<grantor_principal> ::= SQL_Server_login
| SQL_Server_login_mapped_to_Windows_login
| SQL_Server_login_mapped_to_Windows_group
| SQL_Server_login_mapped_to_certificate
| SQL_Server_login_mapped_to_asymmetric_key
| server_role

Arguments
permission
Specifies a permission that can be granted on a server. For a list of the permissions, see the Remarks section later
in this topic.
TO <grantee_principal> Specifies the principal to which the permission is being granted.
AS <grantor_principal> Specifies the principal from which the principal executing this query derives its right to
grant the permission.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
SQL_Server_login
Specifies a SQL Server login.
SQL_Server_login_mapped_to_Windows_login
Specifies a SQL Server login mapped to a Windows login.
SQL_Server_login_mapped_to_Windows_group
Specifies a SQL Server login mapped to a Windows group.
SQL_Server_login_mapped_to_certificate
Specifies a SQL Server login mapped to a certificate.
SQL_Server_login_mapped_to_asymmetric_key
Specifies a SQL Server login mapped to an asymmetric key.
server_role
Specifies a user-defined server role.

Remarks
Permissions at the server scope can be granted only when the current database is master.
Information about server permissions can be viewed in the sys.server_permissions catalog view, and information
about server principals can be viewed in the sys.server_principals catalog view. Information about membership of
server roles can be viewed in the sys.server_role_members catalog view.
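
For example, the following sketch lists server-level permission grants together with the principals that hold
them (the column list is abbreviated):

SELECT pr.name, pe.permission_name, pe.state_desc
FROM sys.server_principals AS pr
JOIN sys.server_permissions AS pe
    ON pe.grantee_principal_id = pr.principal_id;
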
A server is the highest level of the permissions hierarchy. The most specific and limited permissions that can be
granted on a server are listed in the following table.

SERVER PERMISSION                        IMPLIED BY SERVER PERMISSION

ADMINISTER BULK OPERATIONS               CONTROL SERVER

ALTER ANY AVAILABILITY GROUP             CONTROL SERVER
Applies to: SQL Server (SQL Server 2012 (11.x) through current version).

ALTER ANY CONNECTION                     CONTROL SERVER

ALTER ANY CREDENTIAL                     CONTROL SERVER

ALTER ANY DATABASE                       CONTROL SERVER

ALTER ANY ENDPOINT                       CONTROL SERVER

ALTER ANY EVENT NOTIFICATION             CONTROL SERVER

ALTER ANY EVENT SESSION                  CONTROL SERVER

ALTER ANY LINKED SERVER                  CONTROL SERVER

ALTER ANY LOGIN                          CONTROL SERVER

ALTER ANY SERVER AUDIT                   CONTROL SERVER

ALTER ANY SERVER ROLE                    CONTROL SERVER
Applies to: SQL Server (SQL Server 2012 (11.x) through current version).

ALTER RESOURCES                          CONTROL SERVER

ALTER SERVER STATE                       CONTROL SERVER

ALTER SETTINGS                           CONTROL SERVER

ALTER TRACE                              CONTROL SERVER

AUTHENTICATE SERVER                      CONTROL SERVER

CONNECT ANY DATABASE                     CONTROL SERVER
Applies to: SQL Server (SQL Server 2014 (12.x) through current version).

CONNECT SQL                              CONTROL SERVER

CONTROL SERVER                           CONTROL SERVER

CREATE ANY DATABASE                      ALTER ANY DATABASE

CREATE AVAILABILITY GROUP                ALTER ANY AVAILABILITY GROUP
Applies to: SQL Server (SQL Server 2012 (11.x) through current version).

CREATE DDL EVENT NOTIFICATION            ALTER ANY EVENT NOTIFICATION

CREATE ENDPOINT                          ALTER ANY ENDPOINT

CREATE SERVER ROLE                       ALTER ANY SERVER ROLE
Applies to: SQL Server (SQL Server 2012 (11.x) through current version).

CREATE TRACE EVENT NOTIFICATION          ALTER ANY EVENT NOTIFICATION

EXTERNAL ACCESS ASSEMBLY                 CONTROL SERVER

IMPERSONATE ANY LOGIN                    CONTROL SERVER
Applies to: SQL Server (SQL Server 2014 (12.x) through current version).

SELECT ALL USER SECURABLES               CONTROL SERVER
Applies to: SQL Server (SQL Server 2014 (12.x) through current version).

SHUTDOWN                                 CONTROL SERVER

UNSAFE ASSEMBLY                          CONTROL SERVER

VIEW ANY DATABASE                        VIEW ANY DEFINITION

VIEW ANY DEFINITION                      CONTROL SERVER

VIEW SERVER STATE                        ALTER SERVER STATE


The following three server permissions were added in SQL Server 2014 (12.x).
CONNECT ANY DATABASE Permission
Grant CONNECT ANY DATABASE to a login that must connect to all databases that currently exist and to any
new databases that might be created in future. Does not grant any permission in any database beyond connect.
Combine with SELECT ALL USER SECURABLES or VIEW SERVER STATE to allow an auditing process to view
all data or all database states on the instance of SQL Server.
IMPERSONATE ANY LOGIN Permission
When granted, allows a middle-tier process to impersonate the account of clients connecting to it, as it connects to
databases. When denied, a high privileged login can be blocked from impersonating other logins. For example, a
login with CONTROL SERVER permission can be blocked from impersonating other logins.
SELECT ALL USER SECURABLES Permission
When granted, a login such as an auditor can view data in all databases that the user can connect to. When denied,
prevents access to objects unless they are in the sys schema.
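
A minimal sketch of the auditing pattern described above (the login name AuditLogin and its password are
hypothetical placeholders):

USE master;
CREATE LOGIN AuditLogin WITH PASSWORD = '<enterStrongPasswordHere>';
GRANT CONNECT ANY DATABASE TO AuditLogin;
GRANT SELECT ALL USER SECURABLES TO AuditLogin;
GO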

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION or a higher permission that implies the permission being granted. Members of the sysadmin fixed server
role can grant any permission.

Examples
A. Granting a permission to a login
The following example grants CONTROL SERVER permission to the SQL Server login TerryEminhizer.

USE master;
GRANT CONTROL SERVER TO TerryEminhizer;
GO

B. Granting a permission that has GRANT permission


The following example grants ALTER ANY EVENT NOTIFICATION to SQL Server login JanethEsteves with the right to
grant that permission to another login.

USE master;
GRANT ALTER ANY EVENT NOTIFICATION TO JanethEsteves WITH GRANT OPTION;
GO

C. Granting a permission to a server role


The following example creates two server roles named ITDevAdmin and ITDevelopers. It grants the
ALTER ANY DATABASE permission to the ITDevAdmin user-defined server role WITH GRANT OPTION, so that the
ITDevAdmin server role can regrant ALTER ANY DATABASE. The example then grants ALTER ANY DATABASE to the
ITDevelopers server role, specifying ITDevAdmin as the grantor.

USE master;
CREATE SERVER ROLE ITDevAdmin;
CREATE SERVER ROLE ITDevelopers;
GRANT ALTER ANY DATABASE TO ITDevAdmin WITH GRANT OPTION;
GRANT ALTER ANY DATABASE TO ITDevelopers AS ITDevAdmin;
GO

See Also
GRANT (Transact-SQL)
DENY (Transact-SQL)
DENY Server Permissions (Transact-SQL)
REVOKE Server Permissions (Transact-SQL)
Permissions Hierarchy (Database Engine)
Principals (Database Engine)
Permissions (Database Engine)
sys.fn_builtin_permissions (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
GRANT Server Principal Permissions (Transact-SQL)
5/3/2018 • 2 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a SQL Server login.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ]
ON
{ [ LOGIN :: SQL_Server_login ]
| [ SERVER ROLE :: server_role ] }
TO <server_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS SQL_Server_login ]

<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
| server_role

Arguments
permission
Specifies a permission that can be granted on a SQL Server login. For a list of the permissions, see the Remarks
section later in this topic.
LOGIN :: SQL_Server_login
Specifies the SQL Server login on which the permission is being granted. The scope qualifier (::) is required.
SERVER ROLE :: server_role
Specifies the user-defined server role on which the permission is being granted. The scope qualifier (::) is required.
TO <server_principal> Specifies the SQL Server login or server role to which the permission is being granted.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
server_role
Specifies the name of a user-defined server role.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to grant the
permission.

Remarks
Permissions at the server scope can be granted only when the current database is master.
Information about server permissions is visible in the sys.server_permissions catalog view. Information about
server principals is visible in the sys.server_principals catalog view.
SQL Server logins and server roles are server-level securables. The most specific and limited permissions that can
be granted on a SQL Server login or server role are listed in the following table, together with the more general
permissions that include them by implication.

SQL SERVER LOGIN OR SERVER ROLE PERMISSION    IMPLIED BY SQL SERVER LOGIN OR SERVER ROLE PERMISSION    IMPLIED BY SERVER PERMISSION

CONTROL            CONTROL    CONTROL SERVER

IMPERSONATE        CONTROL    CONTROL SERVER

VIEW DEFINITION    CONTROL    VIEW ANY DEFINITION

ALTER              CONTROL    ALTER ANY LOGIN
                              ALTER ANY SERVER ROLE

Permissions
For logins, requires CONTROL permission on the login or ALTER ANY LOGIN permission on the server.
For server roles, requires CONTROL permission on the server role or ALTER ANY SERVER ROLE permission on
the server.

Examples
A. Granting IMPERSONATE permission on a login
The following example grants IMPERSONATE permission on the SQL Server login WanidaBenshoof to a SQL Server
login created from the Windows user AdvWorks\YoonM.

USE master;
GRANT IMPERSONATE ON LOGIN::WanidaBenshoof to [AdvWorks\YoonM];
GO

B. Granting VIEW DEFINITION permission with GRANT OPTION


The following example grants VIEW DEFINITION on the SQL Server login EricKurjan to the SQL Server login
RMeyyappan with GRANT OPTION.
USE master;
GRANT VIEW DEFINITION ON LOGIN::EricKurjan TO RMeyyappan
WITH GRANT OPTION;
GO

C. Granting VIEW DEFINITION permission on a server role


The following example grants VIEW DEFINITION on the Sales server role to the Auditors server role.

USE master;
GRANT VIEW DEFINITION ON SERVER ROLE::Sales TO Auditors;
GO

See Also
sys.server_principals (Transact-SQL)
sys.server_permissions (Transact-SQL)
CREATE LOGIN (Transact-SQL)
Principals (Database Engine)
Permissions (Database Engine)
Security Functions (Transact-SQL)
Security Stored Procedures (Transact-SQL)
GRANT Service Broker Permissions (Transact-SQL)
5/4/2018 • 5 min to read

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a Service Broker contract, message type, remote binding, route, or service.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ] ON
{
[ CONTRACT :: contract_name ]
| [ MESSAGE TYPE :: message_type_name ]
| [ REMOTE SERVICE BINDING :: remote_binding_name ]
| [ ROUTE :: route_name ]
| [ SERVICE :: service_name ]
}
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]

Arguments
permission
Specifies a permission that can be granted on a Service Broker securable. Listed below.
CONTRACT ::contract_name
Specifies the contract on which the permission is being granted. The scope qualifier "::" is required.
MESSAGE TYPE ::message_type_name
Specifies the message type on which the permission is being granted. The scope qualifier "::" is required.
REMOTE SERVICE BINDING ::remote_binding_name
Specifies the remote service binding on which the permission is being granted. The scope qualifier "::" is required.
ROUTE ::route_name
Specifies the route on which the permission is being granted. The scope qualifier "::" is required.
SERVICE ::service_name
Specifies the service on which the permission is being granted. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other
principals.
granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
Service Broker Contracts
A Service Broker contract is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be granted on a Service Broker
contract are listed below, together with the more general permissions that include them by implication.

SERVICE BROKER CONTRACT PERMISSION    IMPLIED BY SERVICE BROKER CONTRACT PERMISSION    IMPLIED BY DATABASE PERMISSION

CONTROL                               CONTROL                                          CONTROL

TAKE OWNERSHIP                        CONTROL                                          CONTROL

ALTER                                 CONTROL                                          ALTER ANY CONTRACT

REFERENCES                            CONTROL                                          REFERENCES

VIEW DEFINITION                       CONTROL                                          VIEW DEFINITION

Service Broker Message Types


A Service Broker message type is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be granted on a Service Broker
message type are listed below, together with the more general permissions that include them by implication.

SERVICE BROKER MESSAGE TYPE PERMISSION    IMPLIED BY SERVICE BROKER MESSAGE TYPE PERMISSION    IMPLIED BY DATABASE PERMISSION

CONTROL                                   CONTROL                                              CONTROL

TAKE OWNERSHIP                            CONTROL                                              CONTROL

ALTER                                     CONTROL                                              ALTER ANY MESSAGE TYPE

REFERENCES                                CONTROL                                              REFERENCES

VIEW DEFINITION                           CONTROL                                              VIEW DEFINITION

Service Broker Remote Service Bindings


A Service Broker remote service binding is a database-level securable contained by the database that is its parent
in the permissions hierarchy. The most specific and limited permissions that can be granted on a Service Broker
remote service binding are listed below, together with the more general permissions that include them by
implication.

SERVICE BROKER REMOTE SERVICE BINDING PERMISSION    IMPLIED BY SERVICE BROKER REMOTE SERVICE BINDING PERMISSION    IMPLIED BY DATABASE PERMISSION

CONTROL            CONTROL    CONTROL

TAKE OWNERSHIP     CONTROL    CONTROL

ALTER              CONTROL    ALTER ANY REMOTE SERVICE BINDING

VIEW DEFINITION    CONTROL    VIEW DEFINITION

Service Broker Routes


A Service Broker route is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be granted on a Service Broker route
are listed below, together with the more general permissions that include them by implication.

SERVICE BROKER ROUTE PERMISSION    IMPLIED BY SERVICE BROKER ROUTE PERMISSION    IMPLIED BY DATABASE PERMISSION

CONTROL                            CONTROL                                       CONTROL

TAKE OWNERSHIP                     CONTROL                                       CONTROL

ALTER                              CONTROL                                       ALTER ANY ROUTE

VIEW DEFINITION                    CONTROL                                       VIEW DEFINITION

Service Broker Services


A Service Broker service is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be granted on a Service Broker service
are listed below, together with the more general permissions that include them by implication.

SERVICE BROKER SERVICE PERMISSION    IMPLIED BY SERVICE BROKER SERVICE PERMISSION    IMPLIED BY DATABASE PERMISSION

CONTROL                              CONTROL                                         CONTROL

TAKE OWNERSHIP                       CONTROL                                         CONTROL

SEND                                 CONTROL                                         CONTROL

ALTER                                CONTROL                                         ALTER ANY SERVICE

VIEW DEFINITION                      CONTROL                                         VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.

AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Application role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.

Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.

See Also
SQL Server Service Broker
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
GRANT Symmetric Key Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a symmetric key.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ]
ON SYMMETRIC KEY :: symmetric_key_name
TO <database_principal> [ ,...n ] [ WITH GRANT OPTION ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be granted on a symmetric key. For a list of the permissions, see the Remarks
section later in this topic.
ON SYMMETRIC KEY ::symmetric_key_name
Specifies the symmetric key on which the permission is being granted. The scope qualifier (::) is required.
TO <database_principal>
Specifies the principal to which the permission is being granted.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Information about symmetric keys is visible in the sys.symmetric_keys catalog view.
A symmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a symmetric key are listed in the
following table, together with the more general permissions that include them by implication.

SYMMETRIC KEY PERMISSION | IMPLIED BY SYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
ALTER | CONTROL | ALTER ANY SYMMETRIC KEY
CONTROL | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.

AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Application role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.

Principals with CONTROL permission on a securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database.

Examples
The following example grants ALTER permission on the symmetric key SamInventory42 to the database user
HamidS .

USE AdventureWorks2012;
GRANT ALTER ON SYMMETRIC KEY::SamInventory42 TO HamidS;
GO
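
A variation on this example, assuming a hypothetical role KeyAdmins that already holds the permission with GRANT OPTION: WITH GRANT OPTION lets HamidS re-grant the permission, and AS derives the right to grant it from KeyAdmins.

GRANT VIEW DEFINITION ON SYMMETRIC KEY::SamInventory42
    TO HamidS WITH GRANT OPTION AS KeyAdmins;
GO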

See Also
sys.symmetric_keys (Transact-SQL)
DENY Symmetric Key Permissions (Transact-SQL)
REVOKE Symmetric Key Permissions (Transact-SQL)
CREATE SYMMETRIC KEY (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
GRANT System Object Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on system objects such as system stored procedures, extended stored procedures, functions,
and views.
Transact-SQL Syntax Conventions

Syntax
GRANT { SELECT | EXECUTE } ON [ sys.]system_object TO principal

Arguments
[ sys. ]
The sys qualifier is required only when you are referring to catalog views and dynamic management views.
system_object
Specifies the object on which permission is being granted.
principal
Specifies the principal to which the permission is being granted.

Remarks
This statement can be used to grant permissions on certain stored procedures, extended stored procedures, table-
valued functions, scalar functions, views, catalog views, compatibility views, INFORMATION_SCHEMA views,
dynamic management views, and system tables that are installed by SQL Server. Each of these system objects
exists as a unique record in the resource database of the server (mssqlsystemresource). The resource database is
read-only. A link to the object is exposed as a record in the sys schema of every database. Permission to execute or
select a system object can be granted, denied, and revoked.
Granting permission to execute or select an object does not necessarily convey all the permissions required to use
the object. Most objects perform operations for which additional permissions are required. For example, a user
that is granted EXECUTE permission on sp_addlinkedserver cannot create a linked server unless the user is also a
member of the sysadmin fixed server role.
Default name resolution resolves unqualified procedure names to the resource database. Therefore, the sys
qualifier is only required when you are specifying catalog views and dynamic management views.
Granting permissions on triggers and on columns of system objects is not supported.
Permissions on system objects will be preserved during upgrades of SQL Server.
System objects are visible in the sys.system_objects catalog view. The permissions on system objects are visible in
the sys.database_permissions catalog view in the master database.
The following query returns information about permissions of system objects:
SELECT * FROM master.sys.database_permissions AS dp
JOIN sys.system_objects AS so
ON dp.major_id = so.object_id
WHERE dp.class = 1 AND so.parent_object_id = 0 ;
GO

Permissions
Requires CONTROL SERVER permission.

Examples
A. Granting SELECT permission on a view
The following example grants the SQL Server login Sylvester1 permission to select a view that lists SQL Server
logins. The example then grants the additional permission that is required to view metadata on SQL Server logins
that are not owned by the user.

USE AdventureWorks2012;
GRANT SELECT ON sys.sql_logins TO Sylvester1;
GRANT VIEW SERVER STATE TO Sylvester1;
GO

B. Granting EXECUTE permission on an extended stored procedure


The following example grants EXECUTE permission on xp_readmail to Sylvester1 .

GRANT EXECUTE ON xp_readmail TO Sylvester1;
GO

See Also
sys.system_objects (Transact-SQL)
sys.database_permissions (Transact-SQL)
REVOKE System Object Permissions (Transact-SQL)
DENY System Object Permissions (Transact-SQL)
GRANT Type Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a type.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ] ON TYPE :: [ schema_name . ] type_name
TO <database_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be granted on a type. For a list of the permissions, see the Remarks section later in
this topic.
ON TYPE :: [ schema_name. ] type_name
Specifies the type on which the permission is being granted. The scope qualifier (::) is required. If schema_name is
not specified, the default schema will be used. If schema_name is specified, the schema scope qualifier (.) is
required.
TO <database_principal> Specifies the principal to which the permission is being granted.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
A type is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.

IMPORTANT
GRANT, DENY, and REVOKE permissions do not apply to system types. User-defined types can be granted permissions. For
more information about user-defined types, see Working with User-Defined Types in SQL Server.

The most specific and limited permissions that can be granted on a type are listed in the following table, together
with the more general permissions that include them by implication.

TYPE PERMISSION | IMPLIED BY TYPE PERMISSION | IMPLIED BY SCHEMA PERMISSION
CONTROL | CONTROL | CONTROL
EXECUTE | CONTROL | EXECUTE
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION
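
For instance, a minimal sketch (the type and user names are hypothetical): EXECUTE is typically the permission a user needs to invoke the methods of a CLR user-defined type.

GRANT EXECUTE ON TYPE::dbo.Utf8String TO DataEntryUser;
GO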

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.
AS | ADDITIONAL PERMISSION REQUIRED
Database user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Application role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.

Examples
The following example grants VIEW DEFINITION permission with GRANT OPTION on the user-defined type
PhoneNumber to user KhalidR . PhoneNumber is located in the schema Telemarketing .

GRANT VIEW DEFINITION ON TYPE::Telemarketing.PhoneNumber
    TO KhalidR WITH GRANT OPTION;
GO

See Also
DENY Type Permissions (Transact-SQL)
REVOKE Type Permissions (Transact-SQL)
CREATE TYPE (Transact-SQL)
Permissions (Database Engine)
Securables
Principals (Database Engine)
GRANT XML Schema Collection Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on an XML schema collection.
Transact-SQL Syntax Conventions

Syntax
GRANT permission [ ,...n ] ON
XML SCHEMA COLLECTION :: [ schema_name . ]
XML_schema_collection_name
TO <database_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be granted on an XML schema collection. For a list of the permissions, see the
Remarks section later in this topic.
ON XML SCHEMA COLLECTION :: [ schema_name. ] XML_schema_collection_name
Specifies the XML schema collection on which the permission is being granted. The scope qualifier (::) is required.
If schema_name is not specified, the default schema will be used. If schema_name is specified, the schema scope
qualifier (.) is required.
<database_principal> Specifies the principal to which the permission is being granted.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Information about XML schema collections is visible in the sys.xml_schema_collections catalog view.
An XML schema collection is a schema-level securable contained by the schema that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be granted on an XML schema
collection are listed in the following table, together with the more general permissions that include them by
implication.

XML SCHEMA COLLECTION PERMISSION | IMPLIED BY XML SCHEMA COLLECTION PERMISSION | IMPLIED BY SCHEMA PERMISSION
ALTER | CONTROL | ALTER
CONTROL | CONTROL | CONTROL
EXECUTE | CONTROL | EXECUTE
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.

AS | ADDITIONAL PERMISSION REQUIRED
Database user | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Application role | ALTER permission on the role, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.

Examples
The following example grants EXECUTE permission on the XML schema collection Invoices4 to the user Wanida .
The XML schema collection Invoices4 is located inside the Sales schema of the AdventureWorks2012 database.

USE AdventureWorks2012;
GRANT EXECUTE ON XML SCHEMA COLLECTION::Sales.Invoices4 TO Wanida;
GO

See Also
DENY XML Schema Collection Permissions (Transact-SQL)
REVOKE XML Schema Collection Permissions (Transact-SQL)
sys.xml_schema_collections (Transact-SQL)
CREATE XML SCHEMA COLLECTION (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
OPEN MASTER KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Opens the Database Master Key of the current database.
Transact-SQL Syntax Conventions

Syntax
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'password'

Arguments
'password'
The password with which the Database Master Key was encrypted.

Remarks
If the database master key was encrypted with the service master key, it will be automatically opened when it is
needed for decryption or encryption. In this case, it is not necessary to use the OPEN MASTER KEY statement.
When a database is first attached or restored to a new instance of SQL Server, a copy of the database master key
(encrypted by the service master key) is not yet stored in the server. You must use the OPEN MASTER KEY
statement to decrypt the database master key (DMK). Once the DMK has been decrypted, you have the option
of enabling automatic decryption in the future by using the ALTER MASTER KEY REGENERATE statement to
provision the server with a copy of the DMK, encrypted with the service master key (SMK). When a database
has been upgraded from an earlier version, the DMK should be regenerated to use the newer AES algorithm.
For more information about regenerating the DMK, see ALTER MASTER KEY (Transact-SQL). The time required
to regenerate the DMK key to upgrade to AES depends upon the number of objects protected by the DMK.
Regenerating the DMK key to upgrade to AES is only necessary once, and has no impact on future
regenerations as part of a key rotation strategy.
You can exclude the Database Master Key of a specific database from automatic key management by using the
ALTER MASTER KEY statement with the DROP ENCRYPTION BY SERVICE MASTER KEY option. Afterward,
you must explicitly open the Database Master Key with a password.
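
A minimal sketch of that attach/restore flow, reusing the password from the example below; ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY is one way to provision the server with a service-master-key-encrypted copy of the DMK without regenerating the key.

OPEN MASTER KEY DECRYPTION BY PASSWORD = '43987hkhj4325tsku7';
-- Re-encrypt the DMK with the SMK so it opens automatically from now on.
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;
GO
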
If a transaction in which the Database Master Key was explicitly opened is rolled back, the key will remain open.

Permissions
Requires CONTROL permission on the database.

Examples
The following example opens the Database Master Key of the AdventureWorks2012 database, which has been
encrypted with a password.
USE AdventureWorks2012;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '43987hkhj4325tsku7';
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


The following example opens the database master key, which has been encrypted with a password.

USE master;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '43987hkhj4325tsku7';
GO
CLOSE MASTER KEY;
GO

See Also
CREATE MASTER KEY (Transact-SQL)
CLOSE MASTER KEY (Transact-SQL)
BACKUP MASTER KEY (Transact-SQL)
RESTORE MASTER KEY (Transact-SQL)
ALTER MASTER KEY (Transact-SQL)
DROP MASTER KEY (Transact-SQL)
Encryption Hierarchy
OPEN SYMMETRIC KEY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Decrypts a symmetric key and makes it available for use.
Transact-SQL Syntax Conventions

Syntax
OPEN SYMMETRIC KEY Key_name DECRYPTION BY <decryption_mechanism>

<decryption_mechanism> ::=
CERTIFICATE certificate_name [ WITH PASSWORD = 'password' ]
|
ASYMMETRIC KEY asym_key_name [ WITH PASSWORD = 'password' ]
|
SYMMETRIC KEY decrypting_Key_name
|
PASSWORD = 'decryption_password'

Arguments
Key_name
Is the name of the symmetric key to be opened.
CERTIFICATE certificate_name
Is the name of a certificate whose private key will be used to decrypt the symmetric key.
ASYMMETRIC KEY asym_key_name
Is the name of an asymmetric key whose private key will be used to decrypt the symmetric key.
WITH PASSWORD ='password'
Is the password that was used to encrypt the private key of the certificate or asymmetric key.
SYMMETRIC KEY decrypting_key_name
Is the name of a symmetric key that will be used to decrypt the symmetric key that is being opened.
PASSWORD ='password'
Is the password that was used to protect the symmetric key.

Remarks
Open symmetric keys are bound to the session, not to the security context. An open key will continue to be
available until it is either explicitly closed or the session is terminated. If you open a symmetric key and then switch
context, the key will remain open and be available in the impersonated context. Information about open symmetric
keys is visible in the sys.openkeys (Transact-SQL) catalog view.
If the symmetric key was encrypted with another key, that key must be opened first.
If the symmetric key is already open, the query is a NO_OP.
If the password, certificate, or key supplied to decrypt the symmetric key is incorrect, the query will fail.
Symmetric keys created from encryption providers cannot be opened. Encryption and decryption operations using
this kind of symmetric key succeed without the OPEN statement because the Encryption Provider is opening and
closing the key.

Permissions
The caller must have some permission on the key and must not have been denied VIEW DEFINITION permission
on the key. Additional requirements vary, depending on the decryption mechanism:
DECRYPTION BY CERTIFICATE: CONTROL permission on the certificate and knowledge of the password
that encrypts its private key.
DECRYPTION BY ASYMMETRIC KEY: CONTROL permission on the asymmetric key and knowledge of
the password that encrypts its private key.
DECRYPTION BY PASSWORD: knowledge of one of the passwords that is used to encrypt the symmetric
key.

Examples
A. Opening a symmetric key by using a certificate
The following example opens the symmetric key SymKeyMarketing3 and decrypts it by using the private key of
certificate MarketingCert9 .

USE AdventureWorks2012;
OPEN SYMMETRIC KEY SymKeyMarketing3
DECRYPTION BY CERTIFICATE MarketingCert9;
GO

B. Opening a symmetric key by using another symmetric key


The following example opens the symmetric key MarketingKey11 and decrypts it by using symmetric key
HarnpadoungsatayaSE3 .

USE AdventureWorks2012;
-- First open the symmetric key that you want for decryption.
OPEN SYMMETRIC KEY HarnpadoungsatayaSE3
DECRYPTION BY CERTIFICATE sariyaCert01;
-- Use the key that is already open to decrypt MarketingKey11.
OPEN SYMMETRIC KEY MarketingKey11
DECRYPTION BY SYMMETRIC KEY HarnpadoungsatayaSE3;
GO

See Also
CREATE SYMMETRIC KEY (Transact-SQL)
ALTER SYMMETRIC KEY (Transact-SQL)
CLOSE SYMMETRIC KEY (Transact-SQL)
DROP SYMMETRIC KEY (Transact-SQL)
Encryption Hierarchy
Extensible Key Management (EKM)
Permissions: GRANT, DENY, REVOKE (Azure SQL Data Warehouse, Parallel Data Warehouse)

THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
In SQL Data Warehouse or Parallel Data Warehouse, use GRANT and DENY statements to grant or deny a
permission (such as UPDATE) on a securable (such as a database, table, view, etc.) to a security principal (a login, a
database user, or a database role). Use REVOKE to remove the grant or deny of a permission.
Server level permissions are applied to logins. Database level permissions are applied to database users and
database roles.
To see what permissions have been granted and denied, query the sys.server_permissions and
sys.database_permissions views. Permissions that are not explicitly granted or denied to a security principal can be
inherited by having membership in a role that has permissions. The permissions of the fixed database roles cannot
be changed and do not appear in the sys.server_permissions and sys.database_permissions views.
GRANT explicitly grants one or more permissions.
DENY explicitly denies the principal from having one or more permissions.
REVOKE removes existing GRANT or DENY permissions.
Transact-SQL Syntax Conventions (Transact-SQL)

Syntax
-- Azure SQL Data Warehouse and Parallel Data Warehouse
GRANT
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
TO principal [ ,...n ]
[ WITH GRANT OPTION ]
[;]

DENY
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
TO principal [ ,...n ]
[ CASCADE ]
[;]

REVOKE
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
[ FROM | TO ] principal [ ,...n ]
[ CASCADE ]
[;]

<permission> ::=
{ see the tables below }

<class_type> ::=
{
LOGIN
| DATABASE
| OBJECT
| ROLE
| SCHEMA
| USER
}

Arguments
<permission>[ ,...n ]
One or more permissions to grant, deny, or revoke.
ON [ <class_type> :: ] securable The ON clause describes the securable parameter on which to grant, deny, or
revoke permissions.
<class_type> The class type of the securable. This can be LOGIN, DATABASE, OBJECT, SCHEMA, ROLE, or
USER. Permissions can also be granted at the SERVER class type, but SERVER is not specified for those
permissions. DATABASE is not specified when the permission includes the word DATABASE (for example, ALTER
ANY DATABASE). When no class_type is specified and the permission type is not restricted to the server or
database class, the class is assumed to be OBJECT.
securable
The name of the login, database, table, view, schema, procedure, role, or user on which to grant, deny, or revoke
permissions. The object name can be specified with the three-part naming rules that are described in Transact-SQL
Syntax Conventions (Transact-SQL).
TO principal [ ,...n ]
One or more principals being granted, denied, or revoked permissions. Principal is the name of a login, database
user, or database role.
FROM principal [ ,...n ]
One or more principals to revoke permissions from. Principal is the name of a login, database user, or database
role. FROM can only be used with a REVOKE statement. TO can be used with GRANT, DENY, or REVOKE.
WITH GRANT OPTION
Indicates that the grantee will also be given the ability to grant the specified permission to other principals.
CASCADE
Indicates that the permission is denied to or revoked from the specified principal and from all other principals to
which the principal granted the permission. Required when the principal has the permission with GRANT OPTION.
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked. This is required when you are using the
CASCADE argument.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

Permissions
To grant a permission, the grantor must have either the permission itself with the WITH GRANT OPTION, or
must have a higher permission that implies the permission being granted. Object owners can grant permissions on
the objects they own. Principals with CONTROL permission on a securable can grant permission on that securable.
Members of the db_owner and db_securityadmin fixed database roles can grant any permission in the database.

General Remarks
Denying or revoking permissions to a principal will not affect requests that have passed authorization and are
currently running. To restrict access immediately, you must cancel active requests or kill current sessions.

NOTE
Most fixed server roles are not available in this release. Use user-defined database roles instead. Logins cannot be added to
the sysadmin fixed server role. Granting the CONTROL SERVER permission approximates membership in the sysadmin
fixed server role.

Some statements require multiple permissions. For example, to create a table requires the CREATE TABLE
permissions in the database, and the ALTER SCHEMA permission for the table that will contain the table.
PDW sometimes executes stored procedures to distribute user actions to the compute nodes. Therefore, the
EXECUTE permission for an entire database cannot be denied. (For example,
DENY EXECUTE ON DATABASE::<name> TO <user>; will fail.) As a workaround, deny the EXECUTE permission on user
schemas or specific objects (procedures), as shown in the sketch below.
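
A sketch of that workaround (the schema, procedure, and user names are hypothetical):

-- DENY EXECUTE ON DATABASE::<name> fails; deny narrower scopes instead.
DENY EXECUTE ON SCHEMA::Sales TO [Yuen];
DENY EXECUTE ON OBJECT::dbo.usp_LoadFacts TO [Yuen];
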
Implicit and Explicit Permissions
An explicit permission is a GRANT or DENY permission given to a principal by a GRANT or DENY statement.
An implicit permission is a GRANT or DENY permission that a principal (login, user, or database role) has
inherited from another database role.
An implicit permission can also be inherited from a covering or parent permission. For example, UPDATE
permission on a table can be inherited by having UPDATE permission on the schema that contains the table, or
CONTROL permission on the table.
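
For example, a minimal sketch of a covering permission (the schema and user names are hypothetical):

-- UPDATE granted at the schema level is inherited as an implicit UPDATE
-- permission on every table contained in that schema.
GRANT UPDATE ON SCHEMA::Sales TO [Mary];
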
Ownership Chaining
When multiple database objects access each other sequentially, the sequence is known as a chain. Although such
chains do not independently exist, when SQL Server traverses the links in a chain, SQL Server evaluates
permissions on the constituent objects differently than it would if it were accessing the objects separately.
Ownership chaining has important implications for managing security. For more information about ownership
chains, see Ownership Chains and Tutorial: Ownership Chains and Context Switching.

Permission List
Server Level Permissions
Server level permissions can be granted, denied, and revoked from logins.
Permissions that apply to servers
CONTROL SERVER
ADMINISTER BULK OPERATIONS
ALTER ANY CONNECTION
ALTER ANY DATABASE
CREATE ANY DATABASE
ALTER ANY EXTERNAL DATA SOURCE
ALTER ANY EXTERNAL FILE FORMAT
ALTER ANY LOGIN
ALTER SERVER STATE
CONNECT SQL
VIEW ANY DEFINITION
VIEW ANY DATABASE
VIEW SERVER STATE
Permissions that apply to logins
CONTROL ON LOGIN
ALTER ON LOGIN
IMPERSONATE ON LOGIN
VIEW DEFINITION
Database Level Permissions
Database level permissions can be granted, denied, and revoked from database users and user-defined database
roles.
Permissions that apply to all database classes
CONTROL
ALTER
VIEW DEFINITION
Permissions that apply to all database classes except users
TAKE OWNERSHIP
Permissions that apply only to databases
ALTER ANY DATABASE
ALTER ON DATABASE
ALTER ANY DATASPACE
ALTER ANY ROLE
ALTER ANY SCHEMA
ALTER ANY USER
BACKUP DATABASE
CONNECT ON DATABASE
CREATE PROCEDURE
CREATE ROLE
CREATE SCHEMA
CREATE TABLE
CREATE VIEW
SHOWPLAN
Permissions that apply only to users
IMPERSONATE
Permissions that apply to databases, schemas, and objects
ALTER
DELETE
EXECUTE
INSERT
SELECT
UPDATE
REFERENCES
For a definition of each type of permission, see Permissions (Database Engine).
Chart of Permissions
All permissions are graphically represented on this poster. This is the easiest way to see the nested hierarchy of
permissions. For example, the ALTER ON LOGIN permission can be granted by itself, but it is also included if a
login is granted the CONTROL permission on that login, or if a login is granted the ALTER ANY LOGIN
permission.
To download a full size version of this poster, see SQL Server PDW Permissions in the files section of the APS
Yammer site (or request it by e-mail from apsdoc@microsoft.com).

Default Permissions
The following list describes the default permissions:
When a login is created by using the CREATE LOGIN statement the new login receives the CONNECT
SQL permission.
All logins are members of the public server role and cannot be removed from public.
When a database user is created by using the CREATE USER statement, the database user receives the
CONNECT permission in the database.
All principals, including the public role, have no explicit or implicit permissions by default.
When a login or user becomes the owner of a database or object, the login or user always has all
permissions on the database or object. The ownership permissions cannot be changed and are not visible as
explicit permissions. The GRANT, DENY, and REVOKE statements have no effect on owners.
The sa login has all permissions on the appliance. Similar to ownership permissions, the sa permissions
cannot be changed and are not visible as explicit permissions. The GRANT, DENY, and REVOKE
statements have no effect on the sa login. The sa login cannot be renamed.
The USE statement does not require permissions. All principals can run the USE statement on any
database.

Examples: SQL Data Warehouse and Parallel Data Warehouse


A. Granting a server level permission to a login
The following two statements grant a server level permission to a login.

GRANT CONTROL SERVER TO [Ted];
GRANT ALTER ANY DATABASE TO Mary;

B. Granting a server level permission to a login


The following example grants a server level permission on a login to a server principal (another login).

GRANT VIEW DEFINITION ON LOGIN::Ted TO Mary;

C. Granting a database level permission to a user


The following example grants a database level permission on a user to a database principal (another user).

GRANT VIEW DEFINITION ON USER::[Ted] TO Mary;

D. Granting, denying, and revoking a schema permission


The following GRANT statement grants Yuen the ability to select data from any table or view in the dbo schema.

GRANT SELECT ON SCHEMA::dbo TO [Yuen];

The following DENY statement prevents Yuen from selecting data from any table or view in the dbo schema. Yuen
cannot read the data even if he has permission in some other way, such as through a role membership.

DENY SELECT ON SCHEMA::dbo TO [Yuen];

The following REVOKE statement removes the DENY permission. Now Yuen's explicit permissions are neutral.
Yuen might be able to select data from any table through some other implicit permission such as a role
membership.

REVOKE SELECT ON SCHEMA::dbo TO [Yuen];

E. Demonstrating the optional OBJECT:: clause


Because OBJECT is the default class for a permission statement, the following two statements are the same. The
OBJECT:: clause is optional.

GRANT UPDATE ON OBJECT::dbo.StatusTable TO [Ted];

GRANT UPDATE ON dbo.StatusTable TO [Ted];


REVERT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Switches the execution context back to the caller of the last EXECUTE AS statement.
Transact-SQL Syntax Conventions

Syntax
REVERT
[ WITH COOKIE = @varbinary_variable ]

Arguments
WITH COOKIE = @varbinary_variable
Specifies the cookie that was created in a corresponding EXECUTE AS stand-alone statement.
@varbinary_variable is varbinary(100).

Remarks
REVERT can be specified within a module such as a stored procedure or user-defined function, or as a stand-alone
statement. When specified inside a module, REVERT is applicable only to EXECUTE AS statements defined in the
module. For example, the following stored procedure issues an EXECUTE AS statement followed by a REVERT
statement.

CREATE PROCEDURE dbo.usp_myproc
WITH EXECUTE AS CALLER
AS
SELECT SUSER_NAME(), USER_NAME();
EXECUTE AS USER = 'guest';
SELECT SUSER_NAME(), USER_NAME();
REVERT;
SELECT SUSER_NAME(), USER_NAME();
GO

Assume that in the session in which the stored procedure is run, the execution context of the session is explicitly
changed to login1 , as shown in the following example.

-- Sets the execution context of the session to 'login1'.
EXECUTE AS LOGIN = 'login1';
GO
EXECUTE dbo.usp_myproc;

The REVERT statement that is defined inside usp_myproc switches the execution context set inside the module, but
does not affect the execution context set outside the module. That is, the execution context for the session remains
set to login1 .
When specified as a standalone statement, REVERT applies to EXECUTE AS statements defined within a batch or
session. REVERT has no effect if the corresponding EXECUTE AS statement contains the WITH NO REVERT
clause. In this case, the execution context remains in effect until the session is dropped.
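
A minimal sketch of that behavior, assuming the login1 login created in example A below:

EXECUTE AS LOGIN = 'login1' WITH NO REVERT;
SELECT SUSER_NAME();  -- login1
REVERT;               -- No effect: the EXECUTE AS specified WITH NO REVERT.
SELECT SUSER_NAME();  -- Still login1.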

Using REVERT WITH COOKIE


The EXECUTE AS statement that is used to set the execution context of a session can include the optional clause
WITH NO REVERT COOKIE = @varbinary_variable. When this statement is run, the Database Engine passes the
cookie to @varbinary_variable. The execution context set by that statement can only be reverted to the previous
context if the calling REVERT WITH COOKIE = @varbinary_variable statement contains the correct
@varbinary_variable value.
This mechanism is useful in an environment in which connection pooling is used. Connection pooling is the
maintenance of a group of database connections for reuse by applications across multiple end users. Because the
value passed to @varbinary_variable is known only to the caller of the EXECUTE AS statement (in this case, the
application), the caller can guarantee that the execution context they establish cannot be changed by the end user
that invokes the application. After the execution context is reverted, the application can switch context to another
principal.

Permissions
No permissions are required.

Examples
A. Using EXECUTE AS and REVERT to switch context
The following example creates a context execution stack by using multiple principals. The REVERT statement is
then used to reset the execution context to the previous caller. The REVERT statement is executed multiple times
moving up the stack until the execution context is set to the original caller.
USE AdventureWorks2012;
GO
-- Create two temporary principals.
CREATE LOGIN login1 WITH PASSWORD = 'J345#$)thb';
CREATE LOGIN login2 WITH PASSWORD = 'Uor80$23b';
GO
CREATE USER user1 FOR LOGIN login1;
CREATE USER user2 FOR LOGIN login2;
GO
-- Give IMPERSONATE permissions on user2 to user1
-- so that user1 can successfully set the execution context to user2.
GRANT IMPERSONATE ON USER:: user2 TO user1;
GO
-- Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
-- Set the execution context to login1.
EXECUTE AS LOGIN = 'login1';
-- Verify that the execution context is now login1.
SELECT SUSER_NAME(), USER_NAME();
-- Login1 sets the execution context to login2.
EXECUTE AS USER = 'user2';
-- Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
-- The execution context stack now has three principals: the originating caller, login1, and login2.
-- The following REVERT statements will reset the execution context to the previous context.
REVERT;
-- Display the current execution context.
SELECT SUSER_NAME(), USER_NAME();
REVERT;
-- Display the current execution context.
SELECT SUSER_NAME(), USER_NAME();

-- Remove the temporary principals.
DROP LOGIN login1;
DROP LOGIN login2;
DROP USER user1;
DROP USER user2;
GO

B. Using the WITH COOKIE clause


The following example sets the execution context of a session to a specified user and specifies the WITH NO
REVERT COOKIE = @varbinary_variable clause. The REVERT statement must specify the value passed to the
@cookie variable in the EXECUTE AS statement to successfully revert the context back to the caller. To run this
example, the login1 login and user1 user created in example A must exist.

DECLARE @cookie varbinary(100);
EXECUTE AS USER = 'user1' WITH COOKIE INTO @cookie;
-- Store the cookie somewhere safe in your application.
-- Verify the context switch.
SELECT SUSER_NAME(), USER_NAME();
--Display the cookie value.
SELECT @cookie;
GO
-- Use the cookie in the REVERT statement.
DECLARE @cookie varbinary(100);
-- Set the cookie value to the one from the SELECT @cookie statement.
SET @cookie = <value from the SELECT @cookie statement>;
REVERT WITH COOKIE = @cookie;
-- Verify the context switch reverted.
SELECT SUSER_NAME(), USER_NAME();
GO
See Also
EXECUTE AS (Transact-SQL)
EXECUTE AS Clause (Transact-SQL)
EXECUTE (Transact-SQL)
SUSER_NAME (Transact-SQL)
USER_NAME (Transact-SQL)
CREATE LOGIN (Transact-SQL)
CREATE USER (Transact-SQL)
REVOKE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a previously granted or denied permission.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

-- Simplified syntax for REVOKE


REVOKE [ GRANT OPTION FOR ]
{
[ ALL [ PRIVILEGES ] ]
|
permission [ ( column [ ,...n ] ) ] [ ,...n ]
}
[ ON [ class :: ] securable ]
{ TO | FROM } principal [ ,...n ]
[ CASCADE] [ AS principal ]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

REVOKE
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
[ FROM | TO ] principal [ ,...n ]
[ CASCADE ]
[;]

<permission> ::=
{ see the tables below }

<class_type> ::=
{
LOGIN
| DATABASE
| OBJECT
| ROLE
| SCHEMA
| USER
}

Arguments
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked. This is required when you are using
the CASCADE argument.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

ALL
Applies to: SQL Server 2008 through SQL Server 2017
This option does not revoke all possible permissions. Revoking ALL is equivalent to revoking the following
permissions.
If the securable is a database, ALL means BACKUP DATABASE, BACKUP LOG, CREATE DATABASE,
CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and
CREATE VIEW.
If the securable is a scalar function, ALL means EXECUTE and REFERENCES.
If the securable is a table-valued function, ALL means DELETE, INSERT, REFERENCES, SELECT, and
UPDATE.
If the securable is a stored procedure, ALL means EXECUTE.
If the securable is a table, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
If the securable is a view, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.

NOTE
The REVOKE ALL syntax is deprecated. This feature will be removed in a future version of Microsoft SQL Server. Avoid using
this feature in new development work, and plan to modify applications that currently use this feature. Revoke specific
permissions instead.

PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the topics listed in
Securable-specific Syntax later in this topic.
column
Specifies the name of a column in a table on which permissions are being revoked. The parentheses are required.
class
Specifies the class of the securable on which the permission is being revoked. The scope qualifier :: is required.
securable
Specifies the securable on which the permission is being revoked.
TO | FROM principal
Is the name of a principal. The principals from which permissions on a securable can be revoked vary, depending
on the securable. For more information about valid combinations, see the topics listed in Securable-specific
Syntax later in this topic.
CASCADE
Indicates that the permission that is being revoked is also revoked from other principals to which it has been
granted by this principal. When you are using the CASCADE argument, you must also include the GRANT
OPTION FOR argument.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS principal
Use the AS principal clause to indicate that you are revoking a permission that was granted by a principal other
than you. For example, presume that user Mary is principal_id 12 and user Raul is principal_id 15. Both Mary and
Raul grant a user named Steven the same permission. The sys.database_permissions table will indicate the
permission twice, but each row will have a different grantor_principal_id value. Mary could revoke the
permission by using the AS Raul clause to remove Raul's grant of the permission, as sketched below.
The use of AS in this statement does not imply the ability to impersonate another user.
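
A sketch of that scenario (the table name is hypothetical):

-- Mary removes the grant that Raul made to Steven.
REVOKE SELECT ON OBJECT::dbo.StatusTable FROM Steven AS Raul;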

Remarks
The full syntax of the REVOKE statement is complex. The syntax diagram above was simplified to draw attention
to its structure. Complete syntax for revoking permissions on specific securables is described in the topics listed
in Securable-specific Syntax later in this topic.
The REVOKE statement can be used to remove granted permissions, and the DENY statement can be used to
prevent a principal from gaining a specific permission through a GRANT.
Granting a permission removes DENY or REVOKE of that permission on the specified securable. If the same
permission is denied at a higher scope that contains the securable, the DENY takes precedence. However,
revoking the granted permission at a higher scope does not take precedence.
Caution
A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the permissions
hierarchy has been preserved for backward compatibility. It will be removed in a future release.
The sp_helprotect system stored procedure reports permissions on a database-level securable.
The REVOKE statement will fail if CASCADE is not specified when you are revoking a permission from a
principal that was granted that permission with GRANT OPTION specified.
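
For example, a hedged sketch (the table and user names are hypothetical): if Ted holds SELECT WITH GRANT OPTION, the revoke must cascade.

-- Fails without CASCADE because Ted was granted SELECT WITH GRANT OPTION.
REVOKE SELECT ON OBJECT::dbo.StatusTable FROM [Ted] CASCADE;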

Permissions
Principals with CONTROL permission on a securable can revoke permission on that securable. Object owners
can revoke permissions on the objects they own.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can revoke any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members
of the db_owner fixed database role, can revoke any permission on any securable in the database. Grantees of
CONTROL permission on a schema can revoke any permission on any object within the schema.

Securable-specific Syntax
The following table lists the securables and the topics that describe the securable-specific syntax.

SECURABLE | TOPIC
Application Role | REVOKE Database Principal Permissions (Transact-SQL)
Assembly | REVOKE Assembly Permissions (Transact-SQL)
Asymmetric Key | REVOKE Asymmetric Key Permissions (Transact-SQL)
Availability Group | REVOKE Availability Group Permissions (Transact-SQL)
Certificate | REVOKE Certificate Permissions (Transact-SQL)
Contract | REVOKE Service Broker Permissions (Transact-SQL)
Database | REVOKE Database Permissions (Transact-SQL)
Endpoint | REVOKE Endpoint Permissions (Transact-SQL)
Database Scoped Credential | REVOKE Database Scoped Credential (Transact-SQL)
Full-text Catalog | REVOKE Full-Text Permissions (Transact-SQL)
Full-Text Stoplist | REVOKE Full-Text Permissions (Transact-SQL)
Function | REVOKE Object Permissions (Transact-SQL)
Login | REVOKE Server Principal Permissions (Transact-SQL)
Message Type | REVOKE Service Broker Permissions (Transact-SQL)
Object | REVOKE Object Permissions (Transact-SQL)
Queue | REVOKE Object Permissions (Transact-SQL)
Remote Service Binding | REVOKE Service Broker Permissions (Transact-SQL)
Role | REVOKE Database Principal Permissions (Transact-SQL)
Route | REVOKE Service Broker Permissions (Transact-SQL)
Schema | REVOKE Schema Permissions (Transact-SQL)
Search Property List | REVOKE Search Property List Permissions (Transact-SQL)
Server | REVOKE Server Permissions (Transact-SQL)
Service | REVOKE Service Broker Permissions (Transact-SQL)
Stored Procedure | REVOKE Object Permissions (Transact-SQL)
Symmetric Key | REVOKE Symmetric Key Permissions (Transact-SQL)
Synonym | REVOKE Object Permissions (Transact-SQL)
System Objects | REVOKE System Object Permissions (Transact-SQL)
Table | REVOKE Object Permissions (Transact-SQL)
Type | REVOKE Type Permissions (Transact-SQL)
User | REVOKE Database Principal Permissions (Transact-SQL)
View | REVOKE Object Permissions (Transact-SQL)
XML Schema Collection | REVOKE XML Schema Collection Permissions (Transact-SQL)

See Also
Permissions Hierarchy (Database Engine)
DENY (Transact-SQL)
GRANT (Transact-SQL)
sp_addlogin (Transact-SQL)
sp_adduser (Transact-SQL)
sp_changedbowner (Transact-SQL)
sp_dropuser (Transact-SQL)
sp_helprotect (Transact-SQL)
sp_helpuser (Transact-SQL)
REVOKE Assembly Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on an assembly.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON ASSEMBLY :: assembly_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]

Arguments
GRANT OPTION FOR
Indicates that the ability to grant or deny the specified permission will be revoked. The permission itself will not be
revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

permission
Specifies a permission that can be revoked on an assembly. Listed below.
ON ASSEMBLY ::assembly_name
Specifies the assembly on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted or denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
An assembly is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be revoked on an assembly are listed below, together with the
more general permissions that include them by implication.

ASSEMBLY PERMISSION | IMPLIED BY ASSEMBLY PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ASSEMBLY
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the assembly.
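
A hedged example (the assembly and user names are hypothetical):

REVOKE VIEW DEFINITION ON ASSEMBLY::HelloWorld FROM RolandX;
GO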

See Also
DENY (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE ASSEMBLY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
Encryption Hierarchy
REVOKE Asymmetric Key Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on an asymmetric key.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] { permission [ ,...n ] }
ON ASYMMETRIC KEY :: asymmetric_key_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]

Arguments
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked. The permission itself will not be
revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

permission
Specifies a permission that can be revoked on an asymmetric key. For a list of the permissions, see the Remarks
section later in this topic.
ON ASYMMETRIC KEY ::asymmetric_key_name
Specifies the asymmetric key on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted or denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal.

Remarks
An asymmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on an asymmetric key are listed below,
together with the more general permissions that include them by implication.

ASYMMETRIC KEY PERMISSION | IMPLIED BY ASYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ASYMMETRIC KEY
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the asymmetric key.
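
Examples
A. Revoking REFERENCES permission on an asymmetric key
A sketch for illustration; the key name PacificSales09 and the user KhalidR are hypothetical. It revokes
REFERENCES permission on the asymmetric key from the user, and CASCADE also removes any regrants:

-- PacificSales09 and KhalidR are hypothetical names.
REVOKE REFERENCES ON ASYMMETRIC KEY::PacificSales09 FROM KhalidR CASCADE;
GO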

See Also
REVOKE (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
Encryption Hierarchy
REVOKE Availability Group Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on an Always On availability group.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON AVAILABILITY GROUP :: availability_group_name
{ FROM | TO } < server_principal > [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]

<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey

Arguments
permission
Specifies a permission that can be revoked on an availability group. For a list of the permissions, see the Remarks
section later in this topic.
ON AVAILABILITY GROUP ::availability_group_name
Specifies the availability group on which the permission is being revoked. The scope qualifier (::) is required.
{ FROM | TO } <server_principal>
Specifies the SQL Server login from which the permission is being revoked.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.

IMPORTANT
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of that permission.

AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to revoke the
permission.

Remarks
Permissions at the server scope can be revoked only when the current database is master.
Information about availability groups is visible in the sys.availability_groups (Transact-SQL ) catalog view.
Information about server permissions is visible in the sys.server_permissions catalog view, and information about
server principals is visible in the sys.server_principals catalog view.
An availability group is a server-level securable. The most specific and limited permissions that can be revoked on
an availability group are listed in the following table, together with the more general permissions that include
them by implication.

AVAILABILITY GROUP PERMISSION | IMPLIED BY AVAILABILITY GROUP PERMISSION | IMPLIED BY SERVER PERMISSION
ALTER | CONTROL | ALTER ANY AVAILABILITY GROUP
CONNECT | CONTROL | CONTROL SERVER
CONTROL | CONTROL | CONTROL SERVER
TAKE OWNERSHIP | CONTROL | CONTROL SERVER
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION

Permissions
Requires CONTROL permission on the availability group or ALTER ANY AVAILABILITY GROUP permission on
the server.

Examples
A. Revoking VIEW DEFINITION permission on an availability group
The following example revokes VIEW DEFINITION permission on availability group MyAg from the SQL Server
login ZArifin.
USE master;
REVOKE VIEW DEFINITION ON AVAILABILITY GROUP::MyAg TO ZArifin;
GO

B. Revoking TAKE OWNERSHIP permission with the CASCADE option


The following example revokes TAKE OWNERSHIP permission on availability group MyAg from the SQL Server
login PKomosinski and from all principals to which PKomosinski granted TAKE OWNERSHIP on MyAg.

USE master;
REVOKE TAKE OWNERSHIP ON AVAILABILITY GROUP::MyAg TO PKomosinski
CASCADE;
GO

C. Revoking a previously granted WITH GRANT OPTION clause


If a permission was granted using the WITH GRANT OPTION, use REVOKE GRANT OPTION FOR … to remove
the WITH GRANT OPTION. The following example grants the permission and then removes the WITH GRANT
portion of the permission.

USE master;
GRANT CONTROL ON AVAILABILITY GROUP::MyAg TO PKomosinski
WITH GRANT OPTION;
GO
REVOKE GRANT OPTION FOR CONTROL ON AVAILABILITY GROUP::MyAg TO PKomosinski
CASCADE;
GO

See Also
GRANT Availability Group Permissions (Transact-SQL)
DENY Availability Group Permissions (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
sys.availability_groups (Transact-SQL)
Always On Availability Groups Catalog Views (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Certificate Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a certificate.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON CERTIFICATE :: certificate_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]

Arguments
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked. The permission itself will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

permission
Specifies a permission that can be revoked on a certificate. For a list of the permissions, see the Remarks section
later in this topic.
ON CERTIFICATE ::certificate_name
Specifies the certificate on which the permission is being revoked. The scope qualifier "::" is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
A certificate is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be revoked on a certificate are listed below, together with the
more general permissions that include them by implication.

CERTIFICATE PERMISSION | IMPLIED BY CERTIFICATE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY CERTIFICATE
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the certificate.
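
Examples
A. Revoking the right to regrant VIEW DEFINITION on a certificate
A minimal sketch; the certificate Shipping04 and the user BonnieL are hypothetical. It revokes only the right to
regrant VIEW DEFINITION on the certificate, leaving the permission itself in place:

-- Shipping04 and BonnieL are hypothetical names.
REVOKE GRANT OPTION FOR VIEW DEFINITION ON CERTIFICATE::Shipping04 FROM BonnieL CASCADE;
GO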

See Also
REVOKE (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
Encryption Hierarchy
REVOKE Database Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted and denied on a database.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] <permission> [ ,...n ]
{ TO | FROM } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<permission> ::=
permission | ALL [ PRIVILEGES ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be denied on a database. For a list of the permissions, see the Remarks section later
in this topic.
ALL
This option does not revoke all possible permissions. Revoking ALL is equivalent to revoking the following
permissions: BACKUP DATABASE, BACKUP LOG, CREATE DATABASE, CREATE DEFAULT, CREATE FUNCTION,
CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and CREATE VIEW.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
The statement will fail if CASCADE is not specified when you are revoking a permission from a principal that was
granted that permission with the GRANT OPTION specified.
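
For example, assuming a hypothetical user PatM who holds ALTER permission on the database WITH GRANT
OPTION, the first statement below fails and the second succeeds:

-- PatM is a hypothetical user who was granted ALTER WITH GRANT OPTION.
REVOKE ALTER FROM PatM;         -- Fails without CASCADE.
REVOKE ALTER FROM PatM CASCADE; -- Succeeds; also revokes regrants made by PatM.
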
A database is a securable contained by the server that is its parent in the permissions hierarchy. The most specific
and limited permissions that can be revoked on a database are listed in the following table, together with the more
general permissions that include them by implication.

DATABASE PERMISSION | IMPLIED BY DATABASE PERMISSION | IMPLIED BY SERVER PERMISSION
ADMINISTER DATABASE BULK OPERATIONS (Applies to: SQL Database.) | CONTROL | CONTROL SERVER
ALTER | CONTROL | ALTER ANY DATABASE
ALTER ANY APPLICATION ROLE | ALTER | CONTROL SERVER
ALTER ANY ASSEMBLY | ALTER | CONTROL SERVER
ALTER ANY ASYMMETRIC KEY | ALTER | CONTROL SERVER
ALTER ANY CERTIFICATE | ALTER | CONTROL SERVER
ALTER ANY COLUMN ENCRYPTION KEY | ALTER | CONTROL SERVER
ALTER ANY COLUMN MASTER KEY DEFINITION | ALTER | CONTROL SERVER
ALTER ANY CONTRACT | ALTER | CONTROL SERVER
ALTER ANY DATABASE AUDIT | ALTER | ALTER ANY SERVER AUDIT
ALTER ANY DATABASE DDL TRIGGER | ALTER | CONTROL SERVER
ALTER ANY DATABASE EVENT NOTIFICATION | ALTER | ALTER ANY EVENT NOTIFICATION
ALTER ANY DATABASE EVENT SESSION (Applies to: Azure SQL Database.) | ALTER | ALTER ANY EVENT SESSION
ALTER ANY DATABASE SCOPED CONFIGURATION (Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.) | CONTROL | CONTROL SERVER
ALTER ANY DATASPACE | ALTER | CONTROL SERVER
ALTER ANY EXTERNAL DATA SOURCE | ALTER | CONTROL SERVER
ALTER ANY EXTERNAL FILE FORMAT | ALTER | CONTROL SERVER
ALTER ANY EXTERNAL LIBRARY (Applies to: SQL Server 2017 (14.x).) | CONTROL | CONTROL SERVER
ALTER ANY FULLTEXT CATALOG | ALTER | CONTROL SERVER
ALTER ANY MASK | CONTROL | CONTROL SERVER
ALTER ANY MESSAGE TYPE | ALTER | CONTROL SERVER
ALTER ANY REMOTE SERVICE BINDING | ALTER | CONTROL SERVER
ALTER ANY ROLE | ALTER | CONTROL SERVER
ALTER ANY ROUTE | ALTER | CONTROL SERVER
ALTER ANY SCHEMA | ALTER | CONTROL SERVER
ALTER ANY SECURITY POLICY (Applies to: Azure SQL Database.) | CONTROL | CONTROL SERVER
ALTER ANY SERVICE | ALTER | CONTROL SERVER
ALTER ANY SYMMETRIC KEY | ALTER | CONTROL SERVER
ALTER ANY USER | ALTER | CONTROL SERVER
AUTHENTICATE | CONTROL | AUTHENTICATE SERVER
BACKUP DATABASE | CONTROL | CONTROL SERVER
BACKUP LOG | CONTROL | CONTROL SERVER
CHECKPOINT | CONTROL | CONTROL SERVER
CONNECT | CONNECT REPLICATION | CONTROL SERVER
CONNECT REPLICATION | CONTROL | CONTROL SERVER
CONTROL | CONTROL | CONTROL SERVER
CREATE AGGREGATE | ALTER | CONTROL SERVER
CREATE ASSEMBLY | ALTER ANY ASSEMBLY | CONTROL SERVER
CREATE ASYMMETRIC KEY | ALTER ANY ASYMMETRIC KEY | CONTROL SERVER
CREATE CERTIFICATE | ALTER ANY CERTIFICATE | CONTROL SERVER
CREATE CONTRACT | ALTER ANY CONTRACT | CONTROL SERVER
CREATE DATABASE | CONTROL | CREATE ANY DATABASE
CREATE DATABASE DDL EVENT NOTIFICATION | ALTER ANY DATABASE EVENT NOTIFICATION | CREATE DDL EVENT NOTIFICATION
CREATE DEFAULT | ALTER | CONTROL SERVER
CREATE FULLTEXT CATALOG | ALTER ANY FULLTEXT CATALOG | CONTROL SERVER
CREATE FUNCTION | ALTER | CONTROL SERVER
CREATE MESSAGE TYPE | ALTER ANY MESSAGE TYPE | CONTROL SERVER
CREATE PROCEDURE | ALTER | CONTROL SERVER
CREATE QUEUE | ALTER | CONTROL SERVER
CREATE REMOTE SERVICE BINDING | ALTER ANY REMOTE SERVICE BINDING | CONTROL SERVER
CREATE ROLE | ALTER ANY ROLE | CONTROL SERVER
CREATE ROUTE | ALTER ANY ROUTE | CONTROL SERVER
CREATE RULE | ALTER | CONTROL SERVER
CREATE SCHEMA | ALTER ANY SCHEMA | CONTROL SERVER
CREATE SERVICE | ALTER ANY SERVICE | CONTROL SERVER
CREATE SYMMETRIC KEY | ALTER ANY SYMMETRIC KEY | CONTROL SERVER
CREATE SYNONYM | ALTER | CONTROL SERVER
CREATE TABLE | ALTER | CONTROL SERVER
CREATE TYPE | ALTER | CONTROL SERVER
CREATE VIEW | ALTER | CONTROL SERVER
CREATE XML SCHEMA COLLECTION | ALTER | CONTROL SERVER
DELETE | CONTROL | CONTROL SERVER
EXECUTE | CONTROL | CONTROL SERVER
EXECUTE ANY EXTERNAL SCRIPT (Applies to: SQL Server 2016 (13.x).) | CONTROL | CONTROL SERVER
INSERT | CONTROL | CONTROL SERVER
KILL DATABASE CONNECTION (Applies to: Azure SQL Database.) | CONTROL | ALTER ANY CONNECTION
REFERENCES | CONTROL | CONTROL SERVER
SELECT | CONTROL | CONTROL SERVER
SHOWPLAN | CONTROL | ALTER TRACE
SUBSCRIBE QUERY NOTIFICATIONS | CONTROL | CONTROL SERVER
TAKE OWNERSHIP | CONTROL | CONTROL SERVER
UNMASK | CONTROL | CONTROL SERVER
UPDATE | CONTROL | CONTROL SERVER
VIEW ANY COLUMN ENCRYPTION KEY DEFINITION | CONTROL | VIEW ANY DEFINITION
VIEW ANY COLUMN MASTER KEY DEFINITION | CONTROL | VIEW ANY DEFINITION
VIEW DATABASE STATE | CONTROL | VIEW SERVER STATE
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION

Permissions
The principal that executes this statement (or the principal specified with the AS option) must have CONTROL
permission on the database or a higher permission that implies CONTROL permission on the database.
If you are using the AS option, the specified principal must own the database.

Examples
A. Revoking permission to create certificates
The following example revokes CREATE CERTIFICATE permission on the AdventureWorks2012 database from user
MelanieK.

Applies to: SQL Server 2008 through SQL Server 2017

USE AdventureWorks2012;
REVOKE CREATE CERTIFICATE FROM MelanieK;
GO

B. Revoking REFERENCES permission from an application role


The following example revokes REFERENCES permission on the AdventureWorks2012 database from application role
AuditMonitor.

Applies to: SQL Server 2008 through SQL Server 2017, SQL Database

USE AdventureWorks2012;
REVOKE REFERENCES FROM AuditMonitor;
GO

C. Revoking VIEW DEFINITION with CASCADE


The following example revokes VIEW DEFINITION permission on the AdventureWorks2012 database from user
CarmineEs and from all principals to which CarmineEs has granted VIEW DEFINITION permission.

USE AdventureWorks2012;
REVOKE VIEW DEFINITION FROM CarmineEs CASCADE;
GO

See Also
sys.database_permissions (Transact-SQL)
sys.database_principals (Transact-SQL)
GRANT Database Permissions (Transact-SQL)
DENY Database Permissions (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Database Principal Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted or denied on a database user, database role, or application role.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON
{ [ USER :: database_user ]
| [ ROLE :: database_role ]
| [ APPLICATION ROLE :: application_role ]
}
{ FROM | TO } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be revoked on the database principal. For a list of the permissions, see the
Remarks section later in this topic.
USER ::database_user
Specifies the class and name of the user on which the permission is being revoked. The scope qualifier (::) is
required.
ROLE ::database_role
Specifies the class and name of the role on which the permission is being revoked. The scope qualifier (::) is
required.
APPLICATION ROLE ::application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies the class and name of the application role on which the permission is being revoked. The scope qualifier
(::) is required.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Database User Permissions
A database user is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on a database user are listed in the
following table, together with the more general permissions that include them by implication.
DATABASE USER PERMISSION | IMPLIED BY DATABASE USER PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
IMPERSONATE | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY USER
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Database Role Permissions


A database role is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on a database role are listed in the
following table, together with the more general permissions that include them by implication.

DATABASE ROLE PERMISSION | IMPLIED BY DATABASE ROLE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ROLE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Application Role Permissions


An application role is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on an application role are listed in the
following table, together with the more general permissions that include them by implication.

APPLICATION ROLE PERMISSION | IMPLIED BY APPLICATION ROLE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY APPLICATION ROLE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the specified principal, or a higher permission that implies CONTROL
permission.
Grantees of CONTROL permission on a database, such as members of the db_owner fixed database role, can
grant any permission on any securable in the database.

Examples
A. Revoking CONTROL permission on a user from another user
The following example revokes CONTROL permission on AdventureWorks2012 user Wanida from user RolandX.

USE AdventureWorks2012;
REVOKE CONTROL ON USER::Wanida FROM RolandX;
GO

B. Revoking VIEW DEFINITION permission on a role from a user to which it was granted WITH GRANT
OPTION
The following example revokes VIEW DEFINITION permission on AdventureWorks2012 role SammamishParking
from database user JinghaoLiu. The CASCADE option is specified because the user JinghaoLiu was granted
VIEW DEFINITION permission WITH GRANT OPTION.

USE AdventureWorks2012;
REVOKE VIEW DEFINITION ON ROLE::SammamishParking
FROM JinghaoLiu CASCADE;
GO

C. Revoking IMPERSONATE permission on a user from an application role


The following example revokes IMPERSONATE permission on the user HamithaL from AdventureWorks2012
application role AccountsPayable17.
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database

USE AdventureWorks2012;
REVOKE IMPERSONATE ON USER::HamithaL FROM AccountsPayable17;
GO

See Also
GRANT Database Principal Permissions (Transact-SQL)
DENY Database Principal Permissions (Transact-SQL)
sys.database_principals (Transact-SQL)
sys.database_permissions (Transact-SQL)
CREATE USER (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ROLE (Transact-SQL)
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Database Scoped Credential (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a database scoped credential.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON DATABASE SCOPED CREDENTIAL :: credential_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]

Arguments
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked. The permission itself will not be
revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

permission
Specifies a permission that can be revoked on a database scoped credential. For a list of the permissions, see the
Remarks section later in this topic.
ON DATABASE SCOPED CREDENTIAL ::credential_name
Specifies the database scoped credential on which the permission is being revoked. The scope qualifier "::" is
required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
A database scoped credential is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be revoked on a database scoped
credential are listed below, together with the more general permissions that include them by implication.

DATABASE SCOPED CREDENTIAL PERMISSION | IMPLIED BY DATABASE SCOPED CREDENTIAL PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the database scoped credential.
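
Examples
A. Revoking VIEW DEFINITION permission on a database scoped credential
A sketch for illustration; the credential name AppCred and the user MapLee are hypothetical. It revokes VIEW
DEFINITION permission on the database scoped credential from the user:

-- AppCred and MapLee are hypothetical names.
REVOKE VIEW DEFINITION ON DATABASE SCOPED CREDENTIAL::AppCred FROM MapLee;
GO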

See Also
REVOKE (Transact-SQL)
GRANT Database Scoped Credential (Transact-SQL)
DENY Database Scoped Credential (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
REVOKE Endpoint Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted or denied on an endpoint.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON ENDPOINT :: endpoint_name
{ FROM | TO } <server_principal> [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]

<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey

Arguments
permission
Specifies a permission that can be revoked on an endpoint. For a list of the permissions, see the Remarks section
later in this topic.
ON ENDPOINT ::endpoint_name
Specifies the endpoint on which the permission is being revoked. The scope qualifier (::) is required.
{ FROM | TO } <server_principal>
Specifies the SQL Server login from which the permission is being revoked.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to revoke the
permission.

Remarks
Permissions at the server scope can be revoked only when the current database is master.
Information about endpoints is visible in the sys.endpoints catalog view. Information about server permissions is
visible in the sys.server_permissions catalog view, and information about server principals is visible in the
sys.server_principals catalog view.
An endpoint is a server-level securable. The most specific and limited permissions that can be revoked on an
endpoint are listed in the following table, together with the more general permissions that include them by
implication.

ENDPOINT PERMISSION | IMPLIED BY ENDPOINT PERMISSION | IMPLIED BY SERVER PERMISSION
ALTER | CONTROL | ALTER ANY ENDPOINT
CONNECT | CONTROL | CONTROL SERVER
CONTROL | CONTROL | CONTROL SERVER
TAKE OWNERSHIP | CONTROL | CONTROL SERVER
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION

Permissions
Requires CONTROL permission on the endpoint or ALTER ANY ENDPOINT permission on the server.

Examples
A. Revoking VIEW DEFINITION permission on an endpoint
The following example revokes VIEW DEFINITION permission on the endpoint Mirror7 from the SQL Server login
ZArifin.

USE master;
REVOKE VIEW DEFINITION ON ENDPOINT::Mirror7 FROM ZArifin;
GO
B. Revoking TAKE OWNERSHIP permission with the CASCADE option
The following example revokes TAKE OWNERSHIP permission on the endpoint Shipping83 from the SQL Server
user PKomosinski and from all principals to which PKomosinski granted TAKE OWNERSHIP on Shipping83.

USE master;
REVOKE TAKE OWNERSHIP ON ENDPOINT::Shipping83 FROM PKomosinski
CASCADE;
GO

See Also
GRANT Endpoint Permissions (Transact-SQL)
DENY Endpoint Permissions (Transact-SQL)
CREATE ENDPOINT (Transact-SQL)
Endpoints Catalog Views (Transact-SQL)
sys.endpoints (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Full-Text Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a full-text catalog or full-text stoplist.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ] ON
FULLTEXT
{
CATALOG :: full-text_catalog_name
|
STOPLIST :: full-text_stoplist_name
}
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]

Arguments
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON FULLTEXT CATALOG ::full-text_catalog_name
Specifies the full-text catalog on which the permission is being revoked. The scope qualifier :: is required.
ON FULLTEXT STOPLIST ::full-text_stoplist_name
Specifies the full-text stoplist on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
FULLTEXT CATALOG Permissions
A full-text catalog is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on a full-text catalog are listed in the
following table, together with the more general permissions that include them by implication.

FULL-TEXT CATALOG PERMISSION | IMPLIED BY FULL-TEXT CATALOG PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY FULLTEXT CATALOG
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

FULLTEXT STOPLIST Permissions


A full-text stoplist is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on a full-text stoplist are listed in the
following table, together with the more general permissions that include them by implication.

FULL-TEXT STOPLIST PERMISSION | IMPLIED BY FULL-TEXT STOPLIST PERMISSION | IMPLIED BY DATABASE PERMISSION
ALTER | CONTROL | ALTER ANY FULLTEXT CATALOG
CONTROL | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the full-text catalog.
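
Examples
A. Revoking ALTER permission on a full-text catalog and a full-text stoplist
Sketches for illustration; the catalog ftCatalog, the stoplist ProductStoplist, and the user Mary5 are
hypothetical. The statements revoke ALTER permission on each securable from the user:

-- ftCatalog, ProductStoplist, and Mary5 are hypothetical names.
REVOKE ALTER ON FULLTEXT CATALOG::ftCatalog FROM Mary5;
REVOKE ALTER ON FULLTEXT STOPLIST::ProductStoplist FROM Mary5;
GO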

See Also
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE FULLTEXT CATALOG (Transact-SQL)
CREATE FULLTEXT STOPLIST (Transact-SQL)
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL)
GRANT Full-Text Permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE (Transact-SQL)
sys.fn_builtin_permissions (Transact-SQL)
sys.fulltext_catalogs (Transact-SQL)
sys.fulltext_stoplists (Transact-SQL)
REVOKE Object Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a table, view, table-valued function, stored procedure, extended stored procedure, scalar
function, aggregate function, service queue, or synonym.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] <permission> [ ,...n ] ON
[ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]
{ FROM | TO } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<permission> ::=
ALL [ PRIVILEGES ] | permission [ ( column [ ,...n ] ) ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be revoked on a schema-contained object. For a list of the permissions, see the
Remarks section later in this topic.
ALL
Revoking ALL does not revoke all possible permissions. Revoking ALL is equivalent to revoking all ANSI-92
permissions applicable to the specified object (see the sketch after this argument list). The meaning of ALL
varies as follows:
Scalar function permissions: EXECUTE, REFERENCES.
Table-valued function permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
Stored Procedure permissions: EXECUTE.
Table permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
View permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
PRIVILEGES
Included for ANSI-92 compliance. Does not change the behavior of ALL.
column
Specifies the name of a column in a table, view, or table-valued function on which the permission is being
revoked. The parentheses ( ) are required. Only SELECT, REFERENCES, and UPDATE permissions can be revoked
on a column. column can be specified in the permissions clause or after the securable name.
ON [ OBJECT :: ] [ schema_name ] . object_name
Specifies the object on which the permission is being revoked. The OBJECT phrase is optional if schema_name
is specified. If the OBJECT phrase is used, the scope qualifier (::) is required. If schema_name is not specified, the
default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
{ FROM | TO } <database_principal>
Specifies the principal from which the permission is being revoked.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
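
The following sketch illustrates ALL on a table; the table Sales.OrderLog and the user PatM are hypothetical.
The statement revokes the DELETE, INSERT, REFERENCES, SELECT, and UPDATE permissions in one step:

-- Sales.OrderLog and PatM are hypothetical names.
REVOKE ALL ON OBJECT::Sales.OrderLog FROM PatM;
GO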

Remarks
Information about objects is visible in various catalog views. For more information, see Object Catalog Views
(Transact-SQL ).
An object is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be revoked on an object are listed in the following table,
together with the more general permissions that include them by implication.

OBJECT PERMISSION | IMPLIED BY OBJECT PERMISSION | IMPLIED BY SCHEMA PERMISSION
ALTER | CONTROL | ALTER
CONTROL | CONTROL | CONTROL
DELETE | CONTROL | DELETE
EXECUTE | CONTROL | EXECUTE
INSERT | CONTROL | INSERT
RECEIVE | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
SELECT | RECEIVE | SELECT
TAKE OWNERSHIP | CONTROL | CONTROL
UPDATE | CONTROL | UPDATE
VIEW CHANGE TRACKING | CONTROL | VIEW CHANGE TRACKING
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the object.
If you use the AS clause, the specified principal must own the object on which permissions are being revoked.

Examples
A. Revoking SELECT permission on a table
The following example revokes SELECT permission from the user RosaQdM on the table Person.Address in the
AdventureWorks2012 database.

USE AdventureWorks2012;
REVOKE SELECT ON OBJECT::Person.Address FROM RosaQdM;
GO

B. Revoking EXECUTE permission on a stored procedure


The following example revokes EXECUTE permission on the stored procedure
HumanResources.uspUpdateEmployeeHireInfo from an application role called Recruiting11.
USE AdventureWorks2012;
REVOKE EXECUTE ON OBJECT::HumanResources.uspUpdateEmployeeHireInfo
FROM Recruiting11;
GO

C. Revoking REFERENCES permission on a view with CASCADE


The following example revokes REFERENCES permission on the column BusinessEntityID in the view
HumanResources.vEmployee from the user Wanida with CASCADE.

USE AdventureWorks2012;
REVOKE REFERENCES (BusinessEntityID) ON OBJECT::HumanResources.vEmployee
FROM Wanida CASCADE;
GO

See Also
GRANT Object Permissions (Transact-SQL)
DENY Object Permissions (Transact-SQL)
Object Catalog Views (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Securables
sys.fn_builtin_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
REVOKE Schema Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a schema.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON SCHEMA :: schema_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]

Arguments
permission
Specifies a permission that can be revoked on a schema. The permissions that can be revoked on a schema are
listed in the "Remarks" section, later in this topic.
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission
itself will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

ON SCHEMA :: schema_name
Specifies the schema on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
A schema is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be revoked on a schema are listed in the following table,
together with the more general permissions that include them by implication.

SCHEMA PERMISSION | IMPLIED BY SCHEMA PERMISSION | IMPLIED BY DATABASE PERMISSION
ALTER | CONTROL | ALTER ANY SCHEMA
CONTROL | CONTROL | CONTROL
CREATE SEQUENCE | ALTER | ALTER ANY SCHEMA
DELETE | CONTROL | DELETE
EXECUTE | CONTROL | EXECUTE
INSERT | CONTROL | INSERT
REFERENCES | CONTROL | REFERENCES
SELECT | CONTROL | SELECT
TAKE OWNERSHIP | CONTROL | CONTROL
UPDATE | CONTROL | UPDATE
VIEW CHANGE TRACKING | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the schema.
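
Examples
A. Revoking SELECT permission on a schema
A minimal sketch; the schema name Sales and the user AdventureWorksUser are hypothetical. It revokes a
schema-scoped SELECT permission from the user:

-- Sales and AdventureWorksUser are hypothetical names.
REVOKE SELECT ON SCHEMA::Sales FROM AdventureWorksUser;
GO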

See Also
CREATE SCHEMA (Transact-SQL)
REVOKE (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
sys.fn_builtin_permissions (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
REVOKE Search Property List Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a search property list.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ] ON
SEARCH PROPERTY LIST :: search_property_list_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]

Arguments
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON SEARCH PROPERTY LIST ::search_property_list_name
Specifies the search property list on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. The principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. The
principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.

Remarks
SEARCH PROPERTY LIST Permissions
A search property list is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on a search property list are listed in the
following, together with the more general permissions that include them by implication.

SEARCH PROPERTY LIST PERMISSION | IMPLIED BY SEARCH PROPERTY LIST PERMISSION | IMPLIED BY DATABASE PERMISSION
ALTER | CONTROL | ALTER ANY FULLTEXT CATALOG
CONTROL | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the search property list.
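
Examples
A. Revoking VIEW DEFINITION permission on a search property list
A sketch for illustration; the search property list DocumentTablePropertyList and the user Mary5 are
hypothetical. It revokes VIEW DEFINITION permission on the search property list from the user:

-- DocumentTablePropertyList and Mary5 are hypothetical names.
REVOKE VIEW DEFINITION ON SEARCH PROPERTY LIST::DocumentTablePropertyList FROM Mary5;
GO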

See Also
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE SEARCH PROPERTY LIST (Transact-SQL)
DENY Search Property List Permissions (Transact-SQL)
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL)
GRANT Search Property List Permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
Principals (Database Engine)
REVOKE (Transact-SQL)
sys.fn_builtin_permissions (Transact-SQL)
sys.registered_search_property_lists (Transact-SQL)
Search Document Properties with Search Property Lists
REVOKE Server Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes server-level GRANT and DENY permissions.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
{ TO | FROM } <grantee_principal> [ ,...n ]
[ CASCADE ]
[ AS <grantor_principal> ]

<grantee_principal> ::=
SQL_Server_login
| SQL_Server_login_mapped_to_Windows_login
| SQL_Server_login_mapped_to_Windows_group
| SQL_Server_login_mapped_to_certificate
| SQL_Server_login_mapped_to_asymmetric_key
| server_role

<grantor_principal> ::=
SQL_Server_login
| SQL_Server_login_mapped_to_Windows_login
| SQL_Server_login_mapped_to_Windows_group
| SQL_Server_login_mapped_to_certificate
| SQL_Server_login_mapped_to_asymmetric_key
| server_role

Arguments
permission
Specifies a permission that can be revoked on a server. For a list of the permissions, see the Remarks section later
in this topic.
{ TO | FROM } <grantee_principal>
Specifies the principal from which the permission is being revoked.
AS <grantor_principal>
Specifies the principal from which the principal executing this query derives its right to revoke the permission.
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
SQL_Server_login
Specifies a SQL Server login.
SQL_Server_login_mapped_to_Windows_login
Specifies a SQL Server login mapped to a Windows login.
SQL_Server_login_mapped_to_Windows_group
Specifies a SQL Server login mapped to a Windows group.
SQL_Server_login_mapped_to_certificate
Specifies a SQL Server login mapped to a certificate.
SQL_Server_login_mapped_to_asymmetric_key
Specifies a SQL Server login mapped to an asymmetric key.
server_role
Specifies a user-defined server role.

Remarks
Permissions at the server scope can be revoked only when the current database is master.
REVOKE removes both GRANT and DENY permissions.
Use REVOKE GRANT OPTION FOR to revoke the right to regrant the specified permission. If the principal has
the permission with the right to grant it, the right to grant the permission will be revoked, and the permission itself
will not be revoked. But if the principal has the specified permission without the GRANT option, the permission
itself will be revoked.
Information about server permissions can be viewed in the sys.server_permissions catalog view, and information
about server principals can be viewed in the sys.server_principals catalog view. Information about membership of
server roles can be viewed in the sys.server_role_members catalog view.
A server is the highest level of the permissions hierarchy. The most specific and limited permissions that can be
revoked on a server are listed in the following table.

SERVER PERMISSION | IMPLIED BY SERVER PERMISSION
ADMINISTER BULK OPERATIONS | CONTROL SERVER
ALTER ANY AVAILABILITY GROUP (Applies to: SQL Server 2012 (11.x) through current version.) | CONTROL SERVER
ALTER ANY CONNECTION | CONTROL SERVER
ALTER ANY CREDENTIAL | CONTROL SERVER
ALTER ANY DATABASE | CONTROL SERVER
ALTER ANY ENDPOINT | CONTROL SERVER
ALTER ANY EVENT NOTIFICATION | CONTROL SERVER
ALTER ANY EVENT SESSION | CONTROL SERVER
ALTER ANY LINKED SERVER | CONTROL SERVER
ALTER ANY LOGIN | CONTROL SERVER
ALTER ANY SERVER AUDIT | CONTROL SERVER
ALTER ANY SERVER ROLE (Applies to: SQL Server 2012 (11.x) through current version.) | CONTROL SERVER
ALTER RESOURCES | CONTROL SERVER
ALTER SERVER STATE | CONTROL SERVER
ALTER SETTINGS | CONTROL SERVER
ALTER TRACE | CONTROL SERVER
AUTHENTICATE SERVER | CONTROL SERVER
CONNECT ANY DATABASE (Applies to: SQL Server 2014 (12.x) through current version.) | CONTROL SERVER
CONNECT SQL | CONTROL SERVER
CONTROL SERVER | CONTROL SERVER
CREATE ANY DATABASE | ALTER ANY DATABASE
CREATE AVAILABILITY GROUP (Applies to: SQL Server 2012 (11.x) through current version.) | ALTER ANY AVAILABILITY GROUP
CREATE DDL EVENT NOTIFICATION | ALTER ANY EVENT NOTIFICATION
CREATE ENDPOINT | ALTER ANY ENDPOINT
CREATE SERVER ROLE (Applies to: SQL Server 2012 (11.x) through current version.) | ALTER ANY SERVER ROLE
CREATE TRACE EVENT NOTIFICATION | ALTER ANY EVENT NOTIFICATION
EXTERNAL ACCESS ASSEMBLY | CONTROL SERVER
IMPERSONATE ANY LOGIN (Applies to: SQL Server 2014 (12.x) through current version.) | CONTROL SERVER
SELECT ALL USER SECURABLES (Applies to: SQL Server 2014 (12.x) through current version.) | CONTROL SERVER
SHUTDOWN | CONTROL SERVER
UNSAFE ASSEMBLY | CONTROL SERVER
VIEW ANY DATABASE | VIEW ANY DEFINITION
VIEW ANY DEFINITION | CONTROL SERVER
VIEW SERVER STATE | ALTER SERVER STATE

Permissions
Requires CONTROL SERVER permission or membership in the sysadmin fixed server role.

Examples
A. Revoking a permission from a login
The following example revokes VIEW SERVER STATE permission from the SQL Server login WanidaBenshoof.

USE master;
REVOKE VIEW SERVER STATE FROM WanidaBenshoof;
GO

B. Revoking the WITH GRANT option


The following example revokes the right to grant CONNECT SQL from the SQL Server login JanethEsteves.

USE master;
REVOKE GRANT OPTION FOR CONNECT SQL FROM JanethEsteves;
GO

The login still has CONNECT SQL permission, but it can no longer grant that permission to other principals.

See Also
GRANT (Transact-SQL)
DENY (Transact-SQL)
DENY Server Permissions (Transact-SQL)
GRANT Server Permissions (Transact-SQL)
Permissions Hierarchy (Database Engine)
sys.fn_builtin_permissions (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
REVOKE Server Principal Permissions (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted or denied on a SQL Server login.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON
{ [ LOGIN :: SQL_Server_login ]
| [ SERVER ROLE :: server_role ] }
{ FROM | TO } <server_principal> [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]

<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
| server_role

Arguments
permission
Specifies a permission that can be revoked on a SQL Server login. For a list of the permissions, see the Remarks
section later in this topic.
LOGIN :: SQL_Server_login
Specifies the SQL Server login on which the permission is being revoked. The scope qualifier (::) is required.
SERVER ROLE :: server_role
Specifies the server role on which the permission is being revoked. The scope qualifier (::) is required.
{ FROM | TO } <server_principal> Specifies the SQL Server login or server role from which the permission is
being revoked.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
server_role
Specifies the name of a user-defined server role.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to revoke the
permission.

Remarks
SQL Server logins and server roles are server-level securables. The most specific and limited permissions that can
be revoked on a SQL Server login or server role are listed in the following table, together with the more general
permissions that include them by implication.

SQL SERVER LOGIN OR SERVER ROLE PERMISSION | IMPLIED BY SQL SERVER LOGIN OR SERVER ROLE PERMISSION | IMPLIED BY SERVER PERMISSION
CONTROL | CONTROL | CONTROL SERVER
IMPERSONATE | CONTROL | CONTROL SERVER
VIEW DEFINITION | CONTROL | VIEW ANY DEFINITION
ALTER | CONTROL | ALTER ANY LOGIN, ALTER ANY SERVER ROLE

Permissions
For logins, requires CONTROL permission on the login or ALTER ANY LOGIN permission on the server.
For server roles, requires CONTROL permission on the server role or ALTER ANY SERVER ROLE permission on
the server.

Examples
A. Revoking IMPERSONATE permission on a login
The following example revokes IMPERSONATE permission on the SQL Server login WanidaBenshoof from a SQL
Server login created from the Windows user AdvWorks\YoonM.
USE master;
REVOKE IMPERSONATE ON LOGIN::WanidaBenshoof FROM [AdvWorks\YoonM];
GO

B. Revoking VIEW DEFINITION permission with CASCADE


The following example revokes VIEW DEFINITION permission on the SQL Server login EricKurjan from the SQL
Server login RMeyyappan. The CASCADE option indicates that VIEW DEFINITION permission on EricKurjan will also
be revoked from the principals to which RMeyyappan granted this permission.

USE master;
REVOKE VIEW DEFINITION ON LOGIN::EricKurjan FROM RMeyyappan
CASCADE;
GO

C. Revoking VIEW DEFINITION permission on a server role


The following example revokes VIEW DEFINITION on the Sales server role from the Auditors server role.

USE master;
REVOKE VIEW DEFINITION ON SERVER ROLE::Sales TO Auditors;
GO
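
D. Revoking the WITH GRANT option on a login permission

The following additional sketch (not one of the original examples) shows how GRANT OPTION FOR combines with a permission on a specific login; it assumes the principals from example A exist and that the Windows-mapped login was granted IMPERSONATE WITH GRANT OPTION.

USE master;
REVOKE GRANT OPTION FOR IMPERSONATE ON LOGIN::WanidaBenshoof FROM [AdvWorks\YoonM];
GO

The login retains IMPERSONATE permission on WanidaBenshoof but can no longer grant it to other principals.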

See Also
sys.server_principals (Transact-SQL)
sys.server_permissions (Transact-SQL)
GRANT Server Principal Permissions (Transact-SQL)
DENY Server Principal Permissions (Transact-SQL)
CREATE LOGIN (Transact-SQL)
Principals (Database Engine)
Permissions (Database Engine)
Security Functions (Transact-SQL)
Security Stored Procedures (Transact-SQL)
REVOKE Service Broker Permissions (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Revokes permissions on a Service Broker contract, message type, remote service binding, route, or service.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ] ON
{
[ CONTRACT :: contract_name ]
| [ MESSAGE TYPE :: message_type_name ]
| [ REMOTE SERVICE BINDING :: remote_binding_name ]
| [ ROUTE :: route_name ]
| [ SERVICE :: service_name ]
}
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]

Arguments
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

permission
Specifies a permission that can be revoked on a Service Broker securable. For a list of these permissions, see the
Remarks section later in this topic.
CONTRACT ::contract_name
Specifies the contract on which the permission is being revoked. The scope qualifier :: is required.
MESSAGE TYPE ::message_type_name
Specifies the message type on which the permission is being revoked. The scope qualifier :: is required.
REMOTE SERVICE BINDING ::remote_binding_name
Specifies the remote service binding on which the permission is being revoked. The scope qualifier :: is required.
ROUTE ::route_name
Specifies the route on which the permission is being revoked. The scope qualifier :: is required.
SERVICE ::service_name
Specifies the service on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. database_principal can be one of the
following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted or denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission.
revoking_principal can be one of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal

Remarks
Service Broker Contracts
A Service Broker contract is a database-level securable that is contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be revoked on a Service Broker
contract are listed in the following table, together with the more general permissions that include them by
implication.
SERVICE BROKER CONTRACT PERMISSION | IMPLIED BY SERVICE BROKER CONTRACT PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY CONTRACT
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Service Broker Message Types


A Service Broker message type is a database-level securable that is contained by the database that is its parent in
the permissions hierarchy. The most specific and limited permissions that can be revoked on a Service Broker
message type are listed in the following table, together with the more general permissions that include them by
implication.

SERVICE BROKER MESSAGE TYPE PERMISSION | IMPLIED BY SERVICE BROKER MESSAGE TYPE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY MESSAGE TYPE
REFERENCES | CONTROL | REFERENCES
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Service Broker Remote Service Bindings


A Service Broker remote service binding is a database-level securable that is contained by the database that is its
parent in the permissions hierarchy. The most specific and limited permissions that can be revoked on a Service
Broker remote service binding are listed in the following table, together with the more general permissions that
include them by implication.

SERVICE BROKER REMOTE SERVICE BINDING PERMISSION | IMPLIED BY SERVICE BROKER REMOTE SERVICE BINDING PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY REMOTE SERVICE BINDING
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Service Broker Routes


A Service Broker route is a database-level securable that is contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be revoked on a Service Broker route
are listed in the following table, together with the more general permissions that include them by implication.

SERVICE BROKER ROUTE PERMISSION | IMPLIED BY SERVICE BROKER ROUTE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY ROUTE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Service Broker Services


A Service Broker service is a database-level securable that is contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be revoked on a Service Broker service
are listed in the following table, together with the more general permissions that include them by implication.

SERVICE BROKER SERVICE PERMISSION | IMPLIED BY SERVICE BROKER SERVICE PERMISSION | IMPLIED BY DATABASE PERMISSION
CONTROL | CONTROL | CONTROL
TAKE OWNERSHIP | CONTROL | CONTROL
SEND | CONTROL | CONTROL
ALTER | CONTROL | ALTER ANY SERVICE
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the Service Broker contract, message type, remote service binding, route, or
service.
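
Examples
The following is a minimal sketch rather than an official example from this topic: it revokes SEND permission on a service from a database user, assuming a service and a user with these names exist.

USE AdventureWorks2012;
REVOKE SEND ON SERVICE::[//Adventure-Works.com/Expenses] FROM WanidaBenshoof;
GO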

See Also
GRANT Service Broker Permissions (Transact-SQL)
DENY Service Broker Permissions (Transact-SQL)
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Symmetric Key Permissions (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Revokes permissions granted and denied on a symmetric key.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON SYMMETRIC KEY :: symmetric_key_name
{ TO | FROM } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be revoked on a symmetric key. For a list of the permissions, see the Remarks
section later in this topic.
ON SYMMETRIC KEY :: symmetric_key_name
Specifies the symmetric key on which the permission is being revoked. The scope qualifier (::) is required.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
{ TO | FROM } <database_principal>
Specifies the principal from which the permission is being revoked.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Information about symmetric keys is visible in the sys.symmetric_keys catalog view.
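For example, the following query is a quick inspection sketch listing the keys in the current database on which permissions can be revoked:

SELECT name, key_length, algorithm_desc
FROM sys.symmetric_keys;
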
The statement will fail if CASCADE is not specified when revoking a permission from a principal that was granted
that permission with GRANT OPTION specified.
A symmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a symmetric key are listed in the
following table, together with the more general permissions that include them by implication.

SYMMETRIC KEY PERMISSION | IMPLIED BY SYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
ALTER | CONTROL | ALTER ANY SYMMETRIC KEY
CONTROL | CONTROL | CONTROL
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the symmetric key or ALTER ANY SYMMETRIC KEY permission on the
database. If you use the AS option, the specified principal must own the symmetric key.
Examples
The following example revokes ALTER permission on the symmetric key SamInventory42 from the user HamidS
and from other principals to which HamidS has granted ALTER permission.

USE AdventureWorks2012;
REVOKE ALTER ON SYMMETRIC KEY::SamInventory42 TO HamidS CASCADE;
GO

See Also
sys.symmetric_keys (Transact-SQL)
GRANT Symmetric Key Permissions (Transact-SQL)
DENY Symmetric Key Permissions (Transact-SQL)
CREATE SYMMETRIC KEY (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
REVOKE System Object Permissions (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Revokes permissions on system objects such as stored procedures, extended stored procedures, functions, and
views from a principal.
Transact-SQL Syntax Conventions

Syntax
REVOKE { SELECT | EXECUTE } ON [sys.]system_object FROM principal

Arguments
[ sys. ]
The sys qualifier is required only when you are referring to catalog views and dynamic management views.
system_object
Specifies the object on which permission is being revoked.
principal
Specifies the principal from which the permission is being revoked.

Remarks
This statement can be used to revoke permissions on certain stored procedures, extended stored procedures,
table-valued functions, scalar functions, views, catalog views, compatibility views, INFORMATION_SCHEMA
views, dynamic management views, and system tables that are installed by SQL Server. Each of these system
objects exists as a unique record in the resource database (mssqlsystemresource). The resource database is read-
only. A link to the object is exposed as a record in the sys schema of every database.
Default name resolution resolves unqualified procedure names to the resource database. Therefore, the sys.
qualifier is required only when you are specifying catalog views and dynamic management views.
Caution

Revoking permissions on system objects will cause applications that depend on them to fail. SQL Server
Management Studio uses catalog views and may not function as expected if you change the default permissions
on catalog views.
Revoking permissions on triggers and on columns of system objects is not supported.
Permissions on system objects will be preserved during upgrades of SQL Server.
System objects are visible in the sys.system_objects catalog view.
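For example, the following query (an inspection sketch) confirms that an object such as sp_addlinkedserver is exposed there:

SELECT name, type_desc
FROM sys.system_objects
WHERE name = N'sp_addlinkedserver';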

Permissions
Requires CONTROL SERVER permission.
Examples
The following example revokes EXECUTE permission on sp_addlinkedserver from public.

REVOKE EXECUTE ON sys.sp_addlinkedserver FROM public;
GO

See Also
sys.system_objects (Transact-SQL)
sys.database_permissions (Transact-SQL)
GRANT System Object Permissions (Transact-SQL)
DENY System Object Permissions (Transact-SQL)
REVOKE Type Permissions (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Revokes permissions on a type.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON TYPE :: [ schema_name ]. type_name
{ FROM | TO } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be revoked on a type. For a list of the permissions, see the Remarks section later in
this topic.
ON TYPE :: [ schema_name ] . type_name
Specifies the type on which the permission is being revoked. The scope qualifier (::) is required. If schema_name is
not specified, the default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
{ FROM | TO } <database_principal> Specifies the principal from which the permission is being revoked.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
A type is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.

IMPORTANT
GRANT, DENY, and REVOKE permissions do not apply to system types. User-defined types can be granted permissions. For
more information about user-defined types, see Working with User-Defined Types in SQL Server.

The most specific and limited permissions that can be revoked on a type are listed in the following table, together
with the more general permissions that include them by implication.

TYPE PERMISSION | IMPLIED BY TYPE PERMISSION | IMPLIED BY SCHEMA PERMISSION
CONTROL | CONTROL | CONTROL
EXECUTE | CONTROL | EXECUTE
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the type. If you use the AS clause, the specified principal must own the type.

Examples
The following example revokes VIEW DEFINITION permission on the user-defined type PhoneNumber from the user
KhalidR. The CASCADE option indicates that VIEW DEFINITION permission will also be revoked from principals to
which KhalidR granted it. PhoneNumber is located in schema Telemarketing.

REVOKE VIEW DEFINITION ON TYPE::Telemarketing.PhoneNumber
FROM KhalidR CASCADE;
GO
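
For context, a minimal setup sketch showing how such a type and a revocable permission might have been created; the Telemarketing schema and the principals are assumed to already exist, and the varchar length is illustrative:

-- Create an alias type and grant a permission that can later be revoked.
CREATE TYPE Telemarketing.PhoneNumber FROM varchar(12) NOT NULL;
GO
GRANT VIEW DEFINITION ON TYPE::Telemarketing.PhoneNumber
TO KhalidR WITH GRANT OPTION;
GO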

See Also
GRANT Type Permissions (Transact-SQL)
DENY Type Permissions (Transact-SQL)
CREATE TYPE (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Securables
REVOKE XML Schema Collection Permissions (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Revokes permissions granted or denied on an XML schema collection.
Transact-SQL Syntax Conventions

Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ] ON
XML SCHEMA COLLECTION :: [ schema_name . ]
XML_schema_collection_name
{ TO | FROM } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]

<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login

Arguments
permission
Specifies a permission that can be revoked on an XML schema collection. For a list of the permissions, see the
Remarks section later in this topic.
ON XML SCHEMA COLLECTION :: [ schema_name. ] XML_schema_collection_name
Specifies the XML schema collection on which the permission is being revoked. The scope qualifier (::) is required.
If schema_name is not specified, the default schema will be used. If schema_name is specified, the schema scope
qualifier (.) is required.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.

IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.

CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution

A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
{ TO | FROM } <database_principal>
Specifies the principal from which the permission is being revoked.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.

Remarks
Information about XML schema collections is visible in the sys.xml_schema_collections catalog view.
The statement will fail if CASCADE is not specified when you are revoking a permission from a principal that was
granted that permission with GRANT OPTION specified.
An XML schema collection is a schema-level securable contained by the schema that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be revoked on an XML schema
collection are listed in the following table, together with the more general permissions that include them by
implication.

XML SCHEMA COLLECTION PERMISSION | IMPLIED BY XML SCHEMA COLLECTION PERMISSION | IMPLIED BY SCHEMA PERMISSION
ALTER | CONTROL | ALTER
CONTROL | CONTROL | CONTROL
EXECUTE | CONTROL | EXECUTE
REFERENCES | CONTROL | REFERENCES
TAKE OWNERSHIP | CONTROL | CONTROL
VIEW DEFINITION | CONTROL | VIEW DEFINITION

Permissions
Requires CONTROL permission on the XML schema collection. If you use the AS option, the specified principal
must own the XML schema collection.

Examples
The following example revokes EXECUTE permission on the XML schema collection Invoices4 from the user
Wanida. The XML schema collection Invoices4 is located inside the Sales schema of the AdventureWorks2012
database.

USE AdventureWorks2012;
REVOKE EXECUTE ON XML SCHEMA COLLECTION::Sales.Invoices4 FROM Wanida;
GO

See Also
GRANT XML Schema Collection Permissions (Transact-SQL)
DENY XML Schema Collection Permissions (Transact-SQL)
sys.xml_schema_collections (Transact-SQL)
CREATE XML SCHEMA COLLECTION (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
SETUSER (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Allows a member of the sysadmin fixed server role or the owner of a database to impersonate another user.

IMPORTANT
SETUSER is included for backward compatibility only. SETUSER may not be supported in a future release of SQL Server. We
recommend that you use EXECUTE AS instead.

Transact-SQL Syntax Conventions

Syntax
SETUSER [ 'username' [ WITH NORESET ] ]

Arguments
' username '
Is the name of a SQL Server or Windows user in the current database that is impersonated. When username is not
specified, the original identity of the system administrator or database owner impersonating the user is reset.
WITH NORESET
Specifies that subsequent SETUSER statements (with no specified username) should not reset the user identity to
system administrator or database owner.

Remarks
SETUSER can be used by a member of the sysadmin fixed server role or the owner of a database to adopt the
identity of another user to test the permissions of the other user. Membership in the db_owner fixed database role
is not sufficient.
Only use SETUSER with SQL Server users. SETUSER is not supported with Windows users. When SETUSER has
been used to assume the identity of another user, any objects that the impersonating user creates are owned by the
user being impersonated. For example, if the database owner assumes the identity of user Margaret and creates a
table called orders, the orders table is owned by Margaret, not the system administrator.
SETUSER remains in effect until another SETUSER statement is issued or until the current database is changed
with the USE statement.

NOTE
If SETUSER WITH NORESET is used, the database owner or system administrator must log off and then log on again to
reestablish his or her own rights.
Permissions
Requires membership in the sysadmin fixed server role or must be the owner of the database. Membership in the
db_owner fixed database role is not sufficient.

Examples
The following example shows how the database owner can adopt the identity of another user. User mary has
created a table called computer_types . By using SETUSER, the database owner impersonates mary to grant user
joe access to the computer_types table, and then resets his or her own identity.

SETUSER 'mary';
GO
GRANT SELECT ON computer_types TO joe;
GO
--To revert to the original user
SETUSER;
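
Because SETUSER is deprecated in favor of EXECUTE AS, the same test can be written with the recommended statement; a minimal sketch assuming the same users:

EXECUTE AS USER = 'mary';
GO
GRANT SELECT ON computer_types TO joe;
GO
--To revert to the original user
REVERT;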

See Also
DENY (Transact-SQL)
GRANT (Transact-SQL)
REVOKE (Transact-SQL)
USE (Transact-SQL)
BEGIN CONVERSATION TIMER (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Starts a timer. When the time-out expires, Service Broker puts a message of type
http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer on the local queue for the conversation.

Transact-SQL Syntax Conventions

Syntax
BEGIN CONVERSATION TIMER ( conversation_handle )
TIMEOUT = timeout
[ ; ]

Arguments
BEGIN CONVERSATION TIMER (conversation_handle)
Specifies the conversation to time. The conversation_handle must be of type uniqueidentifier.
TIMEOUT
Specifies, in seconds, the amount of time to wait before putting the message on the queue.

Remarks
A conversation timer provides a way for an application to receive a message on a conversation after a specific
amount of time. Calling BEGIN CONVERSATION TIMER on a conversation before the timer has expired sets the
timeout to the new value. Unlike the conversation lifetime, each side of the conversation has an independent
conversation timer. The DialogTimer message arrives on the local queue without affecting the remote side of the
conversation. Therefore, an application can use a timer message for any purpose.
For example, you can use the conversation timer to keep an application from waiting too long for an overdue
response. If you expect the application to complete a dialog in 30 seconds, you might set the conversation timer for
that dialog to 60 seconds (30 seconds plus a 30-second grace period). If the dialog is still open after 60 seconds,
the application receives a time-out message on the queue for that dialog.
Alternatively, an application can use a conversation timer to request activation at a particular time. For example,
you might create a service that reports the number of active connections every few minutes, or a service that
reports the number of open purchase orders every evening. The service sets a conversation timer to expire at the
desired time; when the timer expires, Service Broker sends a DialogTimer message. The DialogTimer message
causes Service Broker to start the activation stored procedure for the queue. The stored procedure sends a
message to the remote service and restarts the conversation timer.
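A minimal sketch of that activation pattern, assuming a queue named ExpenseQueue (the queue name used elsewhere in this reference) and a five-minute interval:

DECLARE @handle UNIQUEIDENTIFIER,
    @message_type SYSNAME;

WAITFOR (
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @message_type = message_type_name
    FROM ExpenseQueue
), TIMEOUT 5000;

IF @message_type = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
BEGIN
    -- <do the periodic work, for example send a report message>
    -- Restart the timer so the procedure is activated again in 5 minutes.
    BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 300;
END
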
BEGIN CONVERSATION TIMER is not valid in a user-defined function.

Permissions
Permission for setting a conversation timer defaults to users that have SEND permissions on the service for the
conversation, members of the sysadmin fixed server role, and members of the db_owner fixed database role.
Examples
The following example sets a two-minute time-out on the dialog identified by @dialog_handle.

-- @dialog_handle is of type uniqueidentifier and
-- contains a valid conversation handle.

BEGIN CONVERSATION TIMER (@dialog_handle)
TIMEOUT = 120;

See Also
BEGIN DIALOG CONVERSATION (Transact-SQL)
END CONVERSATION (Transact-SQL)
RECEIVE (Transact-SQL)
BEGIN DIALOG CONVERSATION (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Begins a dialog from one service to another service. A dialog is a conversation that provides exactly-once-in-order
messaging between two services.
Transact-SQL Syntax Conventions

Syntax
BEGIN DIALOG [ CONVERSATION ] @dialog_handle
FROM SERVICE initiator_service_name
TO SERVICE 'target_service_name'
[ , { 'service_broker_guid' | 'CURRENT DATABASE' }]
[ ON CONTRACT contract_name ]
[ WITH
[ { RELATED_CONVERSATION = related_conversation_handle
| RELATED_CONVERSATION_GROUP = related_conversation_group_id } ]
[ [ , ] LIFETIME = dialog_lifetime ]
[ [ , ] ENCRYPTION = { ON | OFF } ] ]
[ ; ]

Arguments
@dialog_handle
Is a variable used to store the system-generated dialog handle for the new dialog that is returned by the BEGIN
DIALOG CONVERSATION statement. The variable must be of type uniqueidentifier.
FROM SERVICE initiator_service_name
Specifies the service that initiates the dialog. The name specified must be the name of a service in the current
database. The queue specified for the initiator service receives messages returned by the target service and
messages created by Service Broker for this conversation.
TO SERVICE 'target_service_name'
Specifies the target service with which to initiate the dialog. The target_service_name is of type nvarchar(256).
Service Broker uses a byte-by-byte comparison to match the target_service_name string. In other words, the
comparison is case-sensitive and does not take into account the current collation.
service_broker_guid
Specifies the database that hosts the target service. When more than one database hosts an instance of the target
service, you can communicate with a specific database by providing a service_broker_guid.
The service_broker_guid is of type nvarchar(128). To find the service_broker_guid for a database, run the
following query in the database:

SELECT service_broker_guid
FROM sys.databases
WHERE database_id = DB_ID() ;
NOTE
This option is not available in a contained database.

'CURRENT DATABASE'
Specifies that the conversation use the service_broker_guid for the current database.
ON CONTRACT contract_name
Specifies the contract that this conversation follows. The contract must exist in the current database. If the target
service does not accept new conversations on the contract specified, Service Broker returns an error message on
the conversation. When this clause is omitted, the conversation follows the contract named DEFAULT.
RELATED_CONVERSATION = related_conversation_handle
Specifies the existing conversation group that the new dialog is added to. When this clause is present, the new
dialog belongs to the same conversation group as the dialog specified by related_conversation_handle. The
related_conversation_handle must be of a type implicitly convertible to type uniqueidentifier. The statement fails
if the related_conversation_handle does not reference an existing dialog.
RELATED_CONVERSATION_GROUP = related_conversation_group_id
Specifies the existing conversation group that the new dialog is added to. When this clause is present, the new
dialog will be added to the conversation group specified by related_conversation_group_id. The
related_conversation_group_id must be of a type implicitly convertible to type uniqueidentifier. If
related_conversation_group_id does not reference an existing conversation group, the service broker creates a
new conversation group with the specified related_conversation_group_id and relates the new dialog to that
conversation group.
LIFETIME = dialog_lifetime
Specifies the maximum amount of time the dialog will remain open. For the dialog to complete successfully, both
endpoints must explicitly end the dialog before the lifetime expires. The dialog_lifetime value must be expressed
in seconds. Lifetime is of type int. When no LIFETIME clause is specified, the dialog lifetime is the maximum value
of the int data type.
ENCRYPTION
Specifies whether or not messages sent and received on this dialog must be encrypted when they are sent outside
of an instance of Microsoft SQL Server. A dialog that must be encrypted is a secured dialog. When ENCRYPTION
= ON and the certificates required to support encryption are not configured, Service Broker returns an error
message on the conversation. If ENCRYPTION = OFF, encryption is used if a remote service binding is
configured for the target_service_name; otherwise messages are sent unencrypted. If this clause is not present,
the default value is ON.

NOTE
Messages exchanged with services in the same instance of SQL Server are never encrypted. However, a database master key
and the certificates for encryption are still required for conversations that use encryption if the services for the conversation
are in different databases. This allows conversations to continue in the event that one of the databases is moved to a
different instance while the conversation is in progress.

Remarks
All messages are part of a conversation. Therefore, an initiating service must begin a conversation with the target
service before sending a message to the target service. The information specified in the BEGIN DIALOG
CONVERSATION statement is similar to the address on a letter; Service Broker uses the information to deliver
messages to the correct service. The service specified in the TO SERVICE clause is the address that messages are
sent to. The service specified in the FROM SERVICE clause is the return address used for reply messages.
The target of a conversation does not need to call BEGIN DIALOG CONVERSATION. Service Broker creates a
conversation in the target database when the first message in the conversation arrives from the initiator.
Beginning a dialog creates a conversation endpoint in the database for the initiating service, but does not create a
network connection to the instance that hosts the target service. Service Broker does not establish communication
with the target of the dialog until the first message is sent.
When the BEGIN DIALOG CONVERSATION statement does not specify a related conversation or a related
conversation group, Service Broker creates a new conversation group for the new conversation.
Service Broker does not allow arbitrary groupings of conversations. All conversations in a conversation group
must have the service specified in the FROM clause as either the initiator or the target of the conversation.
The BEGIN DIALOG CONVERSATION command locks the conversation group that contains the dialog_handle
returned. When the command includes a RELATED_CONVERSATION_GROUP clause, the conversation group
for dialog_handle is the conversation group specified in the related_conversation_group_id parameter. When the
command includes a RELATED_CONVERSATION clause, the conversation group for dialog_handle is the
conversation group associated with the related_conversation_handle specified.
BEGIN DIALOG CONVERSATION is not valid in a user-defined function.

Permissions
To begin a dialog, the current user must have RECEIVE permission on the queue for the service specified in the
FROM clause of the command and REFERENCES permission for the contract specified.

Examples
A. Beginning a dialog
The following example begins a dialog conversation and stores an identifier for the dialog in @dialog_handle. The
//Adventure-Works.com/ExpenseClient service is the initiator for the dialog, and the
//Adventure-Works.com/Expenses service is the target of the dialog. The dialog follows the contract
//Adventure-Works.com/Expenses/ExpenseSubmission .

DECLARE @dialog_handle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @dialog_handle
FROM SERVICE [//Adventure-Works.com/ExpenseClient]
TO SERVICE '//Adventure-Works.com/Expenses'
ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission];
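
A dialog begun this way is normally followed immediately by a SEND on the returned handle. A minimal sketch, assuming a hypothetical message type //Adventure-Works.com/Expenses/SubmitExpense is bound by the contract:

SEND ON CONVERSATION @dialog_handle
MESSAGE TYPE [//Adventure-Works.com/Expenses/SubmitExpense]
(N'<ExpenseReport>...</ExpenseReport>');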

B. Beginning a dialog with an explicit lifetime


The following example begins a dialog conversation and stores an identifier for the dialog in @dialog_handle. The
//Adventure-Works.com/ExpenseClient service is the initiator for the dialog, and the
//Adventure-Works.com/Expenses service is the target of the dialog. The dialog follows the contract
//Adventure-Works.com/Expenses/ExpenseSubmission. If the dialog has not been closed by the END
CONVERSATION command within 60 seconds, the broker ends the dialog with an error.

DECLARE @dialog_handle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @dialog_handle
FROM SERVICE [//Adventure-Works.com/ExpenseClient]
TO SERVICE '//Adventure-Works.com/Expenses'
ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission]
WITH LIFETIME = 60;

C. Beginning a dialog with a specific broker instance


The following example begins a dialog conversation and stores an identifier for the dialog in @dialog_handle. The
//Adventure-Works.com/ExpenseClient service is the initiator for the dialog, and the
//Adventure-Works.com/Expenses service is the target of the dialog. The dialog follows the contract
//Adventure-Works.com/Expenses/ExpenseSubmission. The broker routes messages on this dialog to the broker
identified by the GUID a326e034-d4cf-4e8b-8d98-4d7e1926c904.

DECLARE @dialog_handle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @dialog_handle
FROM SERVICE [//Adventure-Works.com/ExpenseClient]
TO SERVICE '//Adventure-Works.com/Expenses',
'a326e034-d4cf-4e8b-8d98-4d7e1926c904'
ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission];

D. Beginning a dialog, and relating it to an existing conversation group


The following example begins a dialog conversation and stores an identifier for the dialog in @dialog_handle. The
//Adventure-Works.com/ExpenseClient service is the initiator for the dialog, and the
//Adventure-Works.com/Expenses service is the target of the dialog. The dialog follows the contract
//Adventure-Works.com/Expenses/ExpenseSubmission. The broker associates the dialog with the conversation group
identified by @conversation_group_id instead of creating a new conversation group.

DECLARE @dialog_handle UNIQUEIDENTIFIER;
DECLARE @conversation_group_id UNIQUEIDENTIFIER;

SET @conversation_group_id = <retrieve conversation group ID from database>

BEGIN DIALOG CONVERSATION @dialog_handle
FROM SERVICE [//Adventure-Works.com/ExpenseClient]
TO SERVICE '//Adventure-Works.com/Expenses'
ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission]
WITH RELATED_CONVERSATION_GROUP = @conversation_group_id;

E. Beginning a dialog with an explicit lifetime, and relating the dialog to an existing conversation
The following example begins a dialog conversation and stores an identifier for the dialog in @dialog_handle. The
//Adventure-Works.com/ExpenseClient service is the initiator for the dialog, and the
//Adventure-Works.com/Expenses service is the target of the dialog. The dialog follows the contract
//Adventure-Works.com/Expenses/ExpenseSubmission. The new dialog belongs to the same conversation group that
@existing_conversation_handle belongs to. If the dialog has not been closed by the END CONVERSATION
command within 600 seconds, Service Broker ends the dialog with an error.

DECLARE @dialog_handle UNIQUEIDENTIFIER;
DECLARE @existing_conversation_handle UNIQUEIDENTIFIER;

SET @existing_conversation_handle = <retrieve conversation handle from database>

BEGIN DIALOG CONVERSATION @dialog_handle
FROM SERVICE [//Adventure-Works.com/ExpenseClient]
TO SERVICE '//Adventure-Works.com/Expenses'
ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission]
WITH RELATED_CONVERSATION = @existing_conversation_handle
LIFETIME = 600;

F. Beginning a dialog with optional encryption


The following example begins a dialog and stores an identifier for the dialog in @dialog_handle. The
//Adventure-Works.com/ExpenseClient service is the initiator for the dialog, and the
//Adventure-Works.com/Expenses service is the target of the dialog. The dialog follows the contract
//Adventure-Works.com/Expenses/ExpenseSubmission. The conversation in this example allows the message to travel
over the network without encryption if encryption is not available.

DECLARE @dialog_handle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @dialog_handle
FROM SERVICE [//Adventure-Works.com/ExpenseClient]
TO SERVICE '//Adventure-Works.com/Expenses'
ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission]
WITH ENCRYPTION = OFF;

See Also
BEGIN CONVERSATION TIMER (Transact-SQL)
END CONVERSATION (Transact-SQL)
MOVE CONVERSATION (Transact-SQL)
sys.conversation_endpoints (Transact-SQL)
END CONVERSATION (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Ends one side of an existing conversation.
Transact-SQL Syntax Conventions

Syntax
END CONVERSATION conversation_handle
[ [ WITH ERROR = failure_code DESCRIPTION = 'failure_text' ]
| [ WITH CLEANUP ]
]
[ ; ]

Arguments
conversation_handle
Is the conversation handle for the conversation to end.
WITH ERROR = failure_code
Is the error code. The failure_code is of type int. The failure code is a user-defined code that is included in the
error message sent to the other side of the conversation. The failure code must be greater than 0.
DESCRIPTION = failure_text
Is the error message. The failure_text is of type nvarchar(3000). The failure text is user-defined text that is
included in the error message sent to the other side of the conversation.
WITH CLEANUP
Removes all messages and catalog view entries for one side of a conversation that cannot complete normally. The
other side of the conversation is not notified of the cleanup. Microsoft SQL Server drops the conversation
endpoint, all messages for the conversation in the transmission queue, and all messages for the conversation in
the service queue. Administrators can use this option to remove conversations which cannot complete normally.
For example, if the remote service has been permanently removed, an administrator can use WITH CLEANUP to
remove conversations to that service. Do not use WITH CLEANUP in the code of a Service Broker application. If
END CONVERSATION WITH CLEANUP is run before the receiving endpoint acknowledges receiving a message,
the sending endpoint will send the message again. This could potentially re-run the dialog.

Remarks
Ending a conversation locks the conversation group that the provided conversation_handle belongs to. When a
conversation ends, Service Broker removes all messages for the conversation from the service queue.
After a conversation ends, an application can no longer send or receive messages for that conversation. Both
participants in a conversation must call END CONVERSATION for the conversation to complete. If Service
Broker has not received an end dialog message or an Error message from the other participant in the
conversation, Service Broker notifies the other participant in the conversation that the conversation has ended. In
this case, although the conversation handle for the conversation is no longer valid, the endpoint for the
conversation remains active until the instance that hosts the remote service acknowledges the message.
If Service Broker has not already processed an end dialog or error message for the conversation, Service Broker
notifies the remote side of the conversation that the conversation has ended. The messages that Service Broker
sends to the remote service depend on the options specified:
If the conversation ends without errors, and the conversation to the remote service is still active, Service
Broker sends a message of type http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog to the remote
service. Service Broker adds this message to the transmission queue in conversation order. Service Broker
sends all messages for this conversation that are currently in the transmission queue before sending this
message.
If the conversation ends with an error and the conversation to the remote service is still active, Service
Broker sends a message of type http://schemas.microsoft.com/SQL/ServiceBroker/Error to the remote
service. Service Broker drops any other messages for this conversation currently in the transmission queue.
The WITH CLEANUP clause allows a database administrator to remove conversations that cannot
complete normally. This option removes all messages and catalog view entries for the conversation. Notice
that, in this case, the remote side of the conversation receives no indication that the conversation has ended,
and may not receive messages that have been sent by an application but not yet transmitted over the
network. Avoid this option unless the conversation cannot complete normally.
After a conversation ends, a Transact-SQL SEND statement that specifies the conversation handle causes a
Transact-SQL error. If messages for this conversation arrive from the other side of the conversation, Service
Broker discards those messages.
If a conversation ends while the remote service still has unsent messages for the conversation, the remote
service drops the unsent messages. This is not considered an error, and the remote service receives no
notification that messages have been dropped.
Failure codes specified in the WITH ERROR clause must be positive numbers. Negative numbers are
reserved for Service Broker error messages.
END CONVERSATION is not valid in a user-defined function.

Permissions
To end an active conversation, the current user must be the owner of the conversation, a member of the sysadmin
fixed server role or a member of the db_owner fixed database role.
A member of the sysadmin fixed server role or a member of the db_owner fixed database role may use the WITH
CLEANUP to remove the metadata for a conversation that has already completed.

Examples
A. Ending a conversation
The following example ends the dialog specified by @dialog_handle.

END CONVERSATION @dialog_handle;
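
Both participants must end a conversation; a minimal sketch of the common pattern on the receiving side, assuming @message_type_name and @dialog_handle were populated by a preceding RECEIVE statement:

IF @message_type_name = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
BEGIN
    -- The other side has ended the conversation; end this side as well.
    END CONVERSATION @dialog_handle;
END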

B. Ending a conversation with an error


The following example ends the dialog specified by @dialog_handle with an error if the processing statement
reports an error. Notice that this is a simplistic approach to error handling, and may not be appropriate for some
applications.
DECLARE @dialog_handle UNIQUEIDENTIFIER,
    @ErrorSave INT,
    @ErrorDesc NVARCHAR(100);
BEGIN TRANSACTION;

<receive and process message>

SET @ErrorSave = @@ERROR;

IF (@ErrorSave <> 0)
BEGIN
    ROLLBACK TRANSACTION;
    SET @ErrorDesc = N'An error has occurred.';
    END CONVERSATION @dialog_handle
        WITH ERROR = @ErrorSave DESCRIPTION = @ErrorDesc;
END
ELSE
    COMMIT TRANSACTION;

C. Cleaning up a conversation that cannot complete normally


The following example ends the dialog specified by @dialog_handle. SQL Server immediately removes all
messages from the service queue and the transmission queue, without notifying the remote service. Since ending
a dialog with cleanup does not notify the remote service, you should only use this in cases where the remote
service is not available to receive an EndDialog or Error message.

END CONVERSATION @dialog_handle WITH CLEANUP;

See Also
BEGIN CONVERSATION TIMER (Transact-SQL)
BEGIN DIALOG CONVERSATION (Transact-SQL)
sys.conversation_endpoints (Transact-SQL)
GET CONVERSATION GROUP (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Returns the conversation group identifier for the next message to be received, and locks the conversation group
for the conversation that contains the message. The conversation group identifier can be used to retrieve
conversation state information before retrieving the message itself.
Transact-SQL Syntax Conventions

Syntax
[ WAITFOR ( ]
GET CONVERSATION GROUP @conversation_group_id
FROM <queue>
[ ) ] [ , TIMEOUT timeout ]
[ ; ]

<queue> ::=
{
[ database_name . [ schema_name ] . | schema_name . ] queue_name
}

Arguments
WAITFOR
Specifies that the GET CONVERSATION GROUP statement waits for a message to arrive on the queue if no
messages are currently present.
@conversation_group_id
Is a variable used to store the conversation group ID returned by the GET CONVERSATION GROUP statement.
The variable must be of type uniqueidentifier. If there are no conversation groups available, the variable is set to
NULL.
FROM
Specifies the queue to get the conversation group from.
database_name
Is the name of the database that contains the queue to get the conversation group from. When no database_name
is provided, defaults to the current database.
schema_name
Is the name of the schema that owns the queue to get the conversation group from. When no schema_name is
provided, defaults to the default schema for the current user.
queue_name
Is the name of the queue to get the conversation group from.
TIMEOUT timeout
Specifies the length of time, in milliseconds, that Service Broker waits for a message to arrive on the queue. This
clause may only be used with the WAITFOR clause. If a statement that uses WAITFOR does not include this clause
or the timeout is -1, the wait time is unlimited. If the timeout expires, GET CONVERSATION GROUP sets the
@conversation_group_id variable to NULL.

Remarks
IMPORTANT
If the GET CONVERSATION GROUP statement is not the first statement in a batch or stored procedure, the preceding
statement must be terminated with a semicolon (;), the Transact-SQL statement terminator.

If the queue specified in the GET CONVERSATION GROUP statement is unavailable, the statement fails with a
Transact-SQL error.
This statement returns the next conversation group where all of the following is true:
The conversation group can be successfully locked.
The conversation group has messages available in the queue.
The conversation group has the highest priority level of all the conversation groups that meet the
previously-listed criteria. The priority level of a conversation group is the highest priority level assigned to
any conversation that is a member of the group and has messages in the queue.
Successive calls to GET CONVERSATION GROUP within the same transaction may lock more than one
conversation group. If no conversation group is available, the statement returns NULL as the conversation
group identifier.
When the WAITFOR clause is specified, the statement waits for the timeout specified, or until a
conversation group is available. If the queue is dropped while the statement is waiting, the statement
immediately returns an error.
GET CONVERSATION GROUP is not valid in a user-defined function.

Permissions
To get a conversation group identifier from a queue, the current user must have RECEIVE permission on the
queue.

Examples
A. Getting a conversation group, waiting indefinitely
The following example sets @conversation_group_id to the conversation group identifier for the next available
message on ExpenseQueue. The command waits until a message becomes available.

DECLARE @conversation_group_id UNIQUEIDENTIFIER ;

WAITFOR (
GET CONVERSATION GROUP @conversation_group_id
FROM ExpenseQueue
) ;

B. Getting a conversation group, waiting one minute


The following example sets @conversation_group_id to the conversation group identifier for the next available
message on ExpenseQueue. If no message becomes available within one minute, GET CONVERSATION GROUP
returns without changing the value of @conversation_group_id.

DECLARE @conversation_group_id UNIQUEIDENTIFIER;

WAITFOR (
GET CONVERSATION GROUP @conversation_group_id
FROM ExpenseQueue ),
TIMEOUT 60000;

C. Getting a conversation group, returning immediately


The following example sets @conversation_group_id to the conversation group identifier for the next available
message on ExpenseQueue. If no message is available, GET CONVERSATION GROUP returns immediately without
changing @conversation_group_id.

DECLARE @conversation_group_id UNIQUEIDENTIFIER;

GET CONVERSATION GROUP @conversation_group_id
FROM AdventureWorks.dbo.ExpenseQueue;
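
The identifier returned this way is typically used to load application state before the message itself is retrieved. A minimal sketch of that pattern, assuming a hypothetical application table ExpenseState keyed on the conversation group:

BEGIN TRANSACTION;

DECLARE @conversation_group_id UNIQUEIDENTIFIER;

WAITFOR (
    GET CONVERSATION GROUP @conversation_group_id
    FROM ExpenseQueue
), TIMEOUT 60000;

-- ExpenseState is a hypothetical table holding per-group application state.
SELECT StateData
FROM ExpenseState
WHERE conversation_group_id = @conversation_group_id;

RECEIVE TOP (1) *
FROM ExpenseQueue
WHERE conversation_group_id = @conversation_group_id;

COMMIT TRANSACTION;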

See Also
BEGIN DIALOG CONVERSATION (Transact-SQL)
MOVE CONVERSATION (Transact-SQL)
GET_TRANSMISSION_STATUS (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Returns the status for the last transmission for one side of a conversation.
Transact-SQL Syntax Conventions

Syntax
GET_TRANSMISSION_STATUS ( conversation_handle )

Arguments
conversation_handle
Is the conversation handle for the conversation. This parameter is of type uniqueidentifier.

Return Types
nchar

Remarks
Returns a string describing the status of the last transmission attempt for the specified conversation. Returns an
empty string if the last transmission attempt succeeded, if no transmission attempt has yet been made, or if the
conversation_handle does not exist.
The information returned by this function is the same information displayed in the last_transmission_error column
of the management view sys.transmission_queue. However, this function can be used to find the transmission
status for conversations that do not currently have messages in the transmission queue.

NOTE
GET_TRANSMISSION_STATUS does not provide information for messages that do not have a conversation endpoint in the
current instance. That is, no information is available for messages to be forwarded.

Examples
The following example reports the transmission status for the conversation with the conversation handle
58ef1d2d-c405-42eb-a762-23ff320bddf0.

SELECT Status =
GET_TRANSMISSION_STATUS('58ef1d2d-c405-42eb-a762-23ff320bddf0') ;

Here is a sample result set, edited for line length:

Status
-------------------------------
The Service Broker protocol transport is disabled or not configured.

In this case, SQL Server is not configured to allow Service Broker to communicate over the network.
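
Because the function accepts any conversation handle, it can be combined with sys.conversation_endpoints to check every local conversation for transmission problems; a minimal inspection sketch:

SELECT conversation_handle,
    GET_TRANSMISSION_STATUS(conversation_handle) AS last_transmission_status
FROM sys.conversation_endpoints;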

See Also
sys.conversation_endpoints (Transact-SQL)
sys.transmission_queue (Transact-SQL)
MOVE CONVERSATION (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Moves a conversation to a different conversation group.
Transact-SQL Syntax Conventions

Syntax
MOVE CONVERSATION conversation_handle
TO conversation_group_id
[ ; ]

Arguments
conversation_handle
Is a variable or constant containing the conversation handle of the conversation to be moved. conversation_handle
must be of type uniqueidentifier.
TO conversation_group_id
Is a variable or constant containing the identifier of the conversation group where the conversation is to be moved.
conversation_group_id must be of type uniqueidentifier.

Remarks
The MOVE CONVERSATION statement moves the conversation specified by conversation_handle to the
conversation group identified by conversation_group_id. Dialogs can only be redirected between conversation
groups that are associated with the same queue.

IMPORTANT
If the MOVE CONVERSATION statement is not the first statement in a batch or stored procedure, the preceding statement
must be terminated with a semicolon (;), the Transact-SQL statement terminator.

The MOVE CONVERSATION statement locks the conversation group associated with conversation_handle and
the conversation group specified by conversation_group_id until the transaction containing the statement commits
or rolls back.
MOVE CONVERSATION is not valid in a user-defined function.
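
Because both conversation groups stay locked until the transaction ends, keep the transaction that contains MOVE CONVERSATION short. A minimal sketch, assuming @conversation_handle and @conversation_group_id have already been populated:

BEGIN TRANSACTION ;

MOVE CONVERSATION @conversation_handle TO @conversation_group_id ;

COMMIT TRANSACTION ; -- the locks on both conversation groups are released here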

Permissions
To move a conversation, the current user must be the owner of the conversation and the conversation group, or be
a member of the sysadmin fixed server role, or be a member of the db_owner fixed database role.

Examples
The following example moves a conversation to a different conversation group.

DECLARE @conversation_handle UNIQUEIDENTIFIER,
        @conversation_group_id UNIQUEIDENTIFIER ;

SET @conversation_handle =
<retrieve conversation handle from database> ;
SET @conversation_group_id =
<retrieve conversation group ID from database> ;

MOVE CONVERSATION @conversation_handle TO @conversation_group_id ;

See Also
BEGIN DIALOG CONVERSATION (Transact-SQL )
GET CONVERSATION GROUP (Transact-SQL )
END CONVERSATION (Transact-SQL )
sys.conversation_groups (Transact-SQL )
sys.conversation_endpoints (Transact-SQL )
RECEIVE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Retrieves one or more messages from a queue. Depending on the retention setting for the queue, either removes
the message from the queue or updates the status of the message in the queue.
Transact-SQL Syntax Conventions

Syntax
[ WAITFOR ( ]
RECEIVE [ TOP ( n ) ]
<column_specifier> [ ,...n ]
FROM <queue>
[ INTO table_variable ]
[ WHERE { conversation_handle = conversation_handle
| conversation_group_id = conversation_group_id } ]
[ ) ] [ , TIMEOUT timeout ]
[ ; ]

<column_specifier> ::=
{ *
| { column_name | expression } [ [ AS ] column_alias ]
| column_alias = expression
} [ ,...n ]

<queue> ::=
{
[ database_name . [ schema_name ] . | schema_name . ]
queue_name
}

Arguments
WAITFOR
Specifies that the RECEIVE statement waits for a message to arrive on the queue, if no messages are currently
present.
TOP ( n )
Specifies the maximum number of messages to be returned. If this clause is not specified, all messages that meet
the statement criteria are returned.
*
Specifies that the result set contains all columns in the queue.
column_name
The name of a column to include in the result set.
expression
A column name, constant, function, or any combination of column names, constants, and functions connected by
an operator.
column_alias
An alternative name to replace the column name in the result set.
FROM
Specifies the queue that contains the messages to retrieve.
database_name
The name of the database that contains the queue to receive messages from. When no database name is
provided, defaults to the current database.
schema_name
The name of the schema that owns the queue to receive messages from. When no schema name is provided,
defaults to the default schema for the current user.
queue_name
The name of the queue to receive messages from.
INTO table_variable
Specifies the table variable that RECEIVE places the messages into. The table variable must have the same
number of columns as are in the messages. The data type of each column in the table variable must be implicitly
convertible to the data type of the corresponding column in the messages. If INTO is not specified, the messages
are returned as a result set.
WHERE
Specifies the conversation or conversation group for the received messages. If omitted, returns messages from the
next available conversation group.
conversation_handle = conversation_handle
Specifies the conversation for received messages. The conversation handle provided must be a uniqueidentifier,
or a type that is convertible to uniqueidentifier.
conversation_group_id = conversation_group_id
Specifies the conversation group for received messages. The conversation group ID that is provided must be a
uniqueidentifier, or a type convertible to uniqueidentifier.
TIMEOUT timeout
Specifies the amount of time, in milliseconds, for the statement to wait for a message. This clause can only be used
with the WAITFOR clause. If this clause is not specified, or the time-out is -1, the wait time is unlimited. If the
time-out expires, RECEIVE returns an empty result set.

Remarks
IMPORTANT
If the RECEIVE statement is not the first statement in a batch or stored procedure, the preceding statement must be ended
with a semi-colon (;).

The RECEIVE statement reads messages from a queue and returns a result set. The result set consists of zero or
more rows, each of which contains one message. If the INTO clause is not used, and column_specifier does not
assign the values to local variables, the statement returns a result set to the calling program.
The messages that are returned by the RECEIVE statement can be of different message types. Applications can
use the message_type_name column to route each message to code that handles the associated message type.
There are two classes of message types:
Application-defined message types that were created by using the CREATE MESSAGE TYPE statement.
The set of application-defined message types that are allowed in a conversation are defined by the Service
Broker contract that is specified for the conversation.
Service Broker system messages that return status or error information.
The RECEIVE statement removes received messages from the queue unless the queue specifies message
retention. When the RETENTION setting for the queue is ON, the RECEIVE statement updates the status
column to 0 and leaves the messages in the queue. When a transaction that contains a RECEIVE statement
rolls back, all changes to the queue in the transaction are also rolled back, returning messages to the queue.
All messages that are returned by a RECEIVE statement belong to the same conversation group. The
RECEIVE statement locks the conversation group for the messages that are returned until the transaction
that contains the statement finishes. A RECEIVE statement returns messages that have a status of 1. The
result set returned by a RECEIVE statement is implicitly ordered:
If messages from multiple conversations meet the WHERE clause conditions, the RECEIVE statement
returns all messages from one conversation before it returns messages for any other conversation. The
conversations are processed in descending priority level order.
For a given conversation, a RECEIVE statement returns messages in ascending
message_sequence_number order.
The WHERE clause of the RECEIVE statement can only contain one search condition that uses either
conversation_handle or conversation_group_id. The search condition cannot reference any of
the other columns in the queue. The conversation_handle or conversation_group_id cannot be an
expression. The set of messages that is returned depends on the conditions that are specified in the WHERE
clause:
If conversation_handle is specified, RECEIVE returns all messages from the specified conversation that
are available in the queue.
If conversation_group_id is specified, RECEIVE returns all messages that are available in the queue from
any conversation that is a member of the specified conversation group.
If there is no WHERE clause, RECEIVE determines which conversation group:
Has one or more messages in the queue.
Has not been locked by another RECEIVE statement.
Has the highest priority level of all the conversation groups that meet these criteria.
RECEIVE then returns all messages available in the queue from any conversation that is a member
of the selected conversation group.
If the conversation handle or conversation group identifier specified in the WHERE clause does not exist, or
is not associated with the specified queue, the RECEIVE statement returns an error.
If the queue specified in the RECEIVE statement has the queue status set to OFF, the statement fails with a
Transact-SQL error.
When the WAITFOR clause is specified, the statement waits for the specified time out, or until a result set is
available. If the queue is dropped or the status of the queue is set to OFF while the statement is waiting, the
statement immediately returns an error. If the RECEIVE statement specifies a conversation group or
conversation handle and the service for that conversation is dropped or moved to another queue, the
RECEIVE statement reports a Transact-SQL error.
RECEIVE is not valid in a user-defined function.
The RECEIVE statement has no priority starvation prevention. If a single RECEIVE statement locks a
conversation group and retrieves a lot of messages from low priority conversations, no messages can be
received from high priority conversations in the group. To prevent this, when you are retrieving messages
from low priority conversations, use the TOP clause to limit the number of messages retrieved by each
RECEIVE statement.
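
A minimal sketch of that approach, assuming the ExpenseQueue queue used in the examples later in this topic; the TOP value is an arbitrary batch size:

WAITFOR (
    RECEIVE TOP (100) * -- cap the batch so the conversation group lock is released sooner
    FROM ExpenseQueue ),
    TIMEOUT 1000 ;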

Queue Columns
The queue contains the following columns:

status (tinyint)
Status of the message. For messages returned by the RECEIVE command, the status is always 0. Messages in the queue might contain one of the following values: 0 = Ready, 1 = Received message, 2 = Not yet complete, 3 = Retained sent message.

priority (tinyint)
The conversation priority level that is applied to the message.

queuing_order (bigint)
Message order number in the queue.

conversation_group_id (uniqueidentifier)
Identifier for the conversation group that this message belongs to.

conversation_handle (uniqueidentifier)
Handle for the conversation that this message is part of.

message_sequence_number (bigint)
Sequence number of the message in the conversation.

service_name (nvarchar(512))
Name of the service that the conversation is to.

service_id (int)
SQL Server object identifier of the service that the conversation is to.

service_contract_name (nvarchar(256))
Name of the contract that the conversation follows.

service_contract_id (int)
SQL Server object identifier of the contract that the conversation follows.

message_type_name (nvarchar(256))
Name of the message type that describes the format of the message. Messages can be either application message types or Broker system messages.

message_type_id (int)
SQL Server object identifier of the message type that describes the message.

validation (nchar(2))
Validation used for the message. E = Empty, N = None, X = XML.

message_body (varbinary(max))
Content of the message.

Permissions
To receive a message, the current user must have RECEIVE permission on the queue.

Examples
A. Receiving all columns for all messages in a conversation group
The following example receives all available messages for the next available conversation group from the
ExpenseQueue queue. The statement returns the messages as a result set.

RECEIVE * FROM ExpenseQueue ;

B. Receiving specified columns for all messages in a conversation group


The following example receives all available messages for the next available conversation group from the
ExpenseQueue queue. The statement returns the messages as a result set that contains the columns
conversation_handle , message_type_name , and message_body .

RECEIVE conversation_handle, message_type_name, message_body
FROM ExpenseQueue ;

C. Receiving the first available message in the queue


The following example receives the first available message from the ExpenseQueue queue as a result set.

RECEIVE TOP (1) * FROM ExpenseQueue ;

D. Receiving all messages for a specified conversation


The following example receives all available messages for the specified conversation from the ExpenseQueue
queue as a result set.

DECLARE @conversation_handle UNIQUEIDENTIFIER ;

SET @conversation_handle = <retrieve conversation from database> ;

RECEIVE *
FROM ExpenseQueue
WHERE conversation_handle = @conversation_handle ;

E. Receiving messages for a specified conversation group


The following example receives all available messages for the specified conversation group from the
ExpenseQueue queue as a result set.
DECLARE @conversation_group_id UNIQUEIDENTIFIER ;

SET @conversation_group_id =
<retrieve conversation group ID from database> ;

RECEIVE *
FROM ExpenseQueue
WHERE conversation_group_id = @conversation_group_id ;

F. Receiving into a table variable


The following example receives all available messages for a specified conversation group from the ExpenseQueue
queue into a table variable.

DECLARE @conversation_group_id UNIQUEIDENTIFIER ;

DECLARE @procTable TABLE(
    service_instance_id UNIQUEIDENTIFIER,
handle UNIQUEIDENTIFIER,
message_sequence_number BIGINT,
service_name NVARCHAR(512),
service_contract_name NVARCHAR(256),
message_type_name NVARCHAR(256),
validation NCHAR,
message_body VARBINARY(MAX)) ;

SET @conversation_group_id = <retrieve conversation group ID from database> ;

RECEIVE TOP (1)
    conversation_group_id,
conversation_handle,
message_sequence_number,
service_name,
service_contract_name,
message_type_name,
validation,
message_body
FROM ExpenseQueue
INTO @procTable
WHERE conversation_group_id = @conversation_group_id ;

G. Receiving messages and waiting indefinitely


The following example receives all available messages for the next available conversation group in the
ExpenseQueue queue. The statement waits until at least one message becomes available then returns a result set
that contains all message columns.

WAITFOR (
RECEIVE *
FROM ExpenseQueue) ;

H. Receiving messages and waiting for a specified interval


The following example receives all available messages for the next available conversation group in the
ExpenseQueue queue. The statement waits for 60 seconds or until at least one message becomes available,
whichever occurs first. The statement returns a result set that contains all message columns if at least one
message is available. Otherwise, the statement returns an empty result set.
WAITFOR (
RECEIVE *
FROM ExpenseQueue ),
TIMEOUT 60000 ;

I. Receiving messages, modifying the type of a column


The following example receives all available messages for the next available conversation group in the
ExpenseQueue queue. When the message type states that the message contains an XML document, the statement
converts the message body to XML.

WAITFOR (
RECEIVE message_type_name,
CASE
WHEN validation = 'X' THEN CAST(message_body as XML)
ELSE NULL
END AS message_body
FROM ExpenseQueue ),
TIMEOUT 60000 ;

J. Receiving a message, extracting data from the message body, retrieving conversation state
The following example receives the next available message for the next available conversation group in the
ExpenseQueue queue. When the message is of type //Adventure-Works.com/Expenses/SubmitExpense , the statement
extracts the employee ID and a list of items from the message body. The statement also retrieves state for the
conversation from the ConversationState table.

WAITFOR(
RECEIVE
TOP(1)
message_type_name,
COALESCE(
(SELECT TOP(1) ConversationState
FROM CurrentConversations AS cc
WHERE cc.ConversationHandle = conversation_handle),
'NEW')
AS ConversationState,
COALESCE(
(SELECT TOP(1) ErrorCount
FROM CurrentConversations AS cc
WHERE cc.ConversationHandle = conversation_handle),
0)
AS ConversationErrors,
CASE WHEN message_type_name = N'//Adventure-Works.com/Expenses/SubmitExpense'
THEN CAST(message_body AS XML).value(
'declare namespace rpt = "http://Adventure-Works.com/schemas/expenseReport";
(/rpt:ExpenseReport/rpt:EmployeeID)[1]', 'nvarchar(20)')
ELSE NULL
END AS EmployeeID,
CASE WHEN message_type_name = N'//Adventure-Works.com/Expenses/SubmitExpense'
THEN CAST(message_body AS XML).query(
'declare namespace rpt = "http://Adventure-Works.com/schemas/expenseReport";
/rpt:ExpenseReport/rpt:ItemDetail')
ELSE NULL
END AS ItemList
FROM ExpenseQueue
), TIMEOUT 60000 ;

See Also
BEGIN DIALOG CONVERSATION (Transact-SQL )
BEGIN CONVERSATION TIMER (Transact-SQL )
END CONVERSATION (Transact-SQL )
CREATE CONTRACT (Transact-SQL )
CREATE MESSAGE TYPE (Transact-SQL )
SEND (Transact-SQL )
CREATE QUEUE (Transact-SQL )
ALTER QUEUE (Transact-SQL )
DROP QUEUE (Transact-SQL )
SEND (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sends a message, using one or more existing conversations.
Transact-SQL Syntax Conventions

Syntax
SEND
ON CONVERSATION [(]conversation_handle [,.. @conversation_handle_n][)]
[ MESSAGE TYPE message_type_name ]
[ ( message_body_expression ) ]
[ ; ]

Arguments
ON CONVERSATION conversation_handle [.. @conversation_handle_n]
Specifies the conversations that the message belongs to. The conversation_handle must contain a valid
conversation identifier. The same conversation handle cannot be used more than once.
MESSAGE TYPE message_type_name
Specifies the message type of the sent message. This message type must be included in the service contracts used
by these conversations. These contracts must allow the message type to be sent from this side of the conversation.
For example, the target services of the conversations may only send messages specified in the contract as SENT
BY TARGET or SENT BY ANY. If this clause is omitted, the message is of the message type DEFAULT.
message_body_expression
Provides an expression representing the message body. The message_body_expression is optional. However, if the
message_body_expression is present the expression must be of a type that can be converted to varbinary(max).
The expression cannot be NULL. If this clause is omitted, the message body is empty.

Remarks
IMPORTANT
If the SEND statement is not the first statement in a batch or stored procedure, the preceding statement must be terminated
with a semicolon (;).

The SEND statement transmits a message from the services on one end of one or more Service Broker
conversations to the services on the other end of these conversations. The RECEIVE statement is then used to
retrieve the sent message from the queues associated with the target services.
The conversation handles supplied to the ON CONVERSATION clause come from one of three sources:
When sending a message that is not in response to a message received from another service, use the
conversation handle returned from the BEGIN DIALOG statement that created the conversation.
When sending a message that is a response to a message previously received from another service, use the
conversation handle returned by the RECEIVE statement that returned the original message (a sketch of this pattern follows this list).
In many cases the code that contains the SEND statement is separate from the code that contains either the
BEGIN DIALOG or RECEIVE statements supplying the conversation handle. In these cases, the conversation
handle must be one of the data items in the state information passed to the code that contains the SEND
statement.
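
A minimal sketch of the reply pattern described in the second case: the handle captured by RECEIVE is reused to send the response on the same conversation. The queue name ExpenseQueue and the message type [//Adventure-Works.com/Expenses/ExpenseApproved] are assumptions for illustration:

DECLARE @handle UNIQUEIDENTIFIER ;

WAITFOR (
    RECEIVE TOP (1)
        @handle = conversation_handle
    FROM ExpenseQueue ),
    TIMEOUT 5000 ;

IF @handle IS NOT NULL
    SEND ON CONVERSATION @handle
        MESSAGE TYPE [//Adventure-Works.com/Expenses/ExpenseApproved] ; -- empty message body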
Messages that are sent to services in other instances of the SQL Server Database Engine are stored in a
transmission queue in the current database until they can be transmitted to the service queues in the
remote instances. Messages sent to services in the same instance of the Database Engine are put directly
into the queues associated with these services. If a condition prevents a local message from being put
directly in the target service queue, it can be stored in the transmission queue until the condition is resolved.
Examples of when this occurs include some types of errors or the target service queue being inactive. You
can use the sys.transmission_queue system view to see the messages in the transmission queue.
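
For example, this minimal diagnostic sketch lists the messages that are currently waiting in the transmission queue of the current database:

SELECT conversation_handle,
    to_service_name,
    message_type_name,
    enqueue_time,
    transmission_status
FROM sys.transmission_queue ;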
SEND is an atomic statement: if a SEND statement that sends a message on multiple conversations
fails, for example because a conversation is in an errored state, no messages are stored in the transmission queue
or put in any target service queue.
Service Broker optimizes the storage and transmission of messages that are sent on multiple conversations
in the same SEND statement.
Messages in the transmission queues for an instance are transmitted in sequence based on:
The priority level of their associated conversation endpoint.
Within priority level, their send sequence in the conversation.
Priority levels specified in conversation priorities are only applied to messages in the transmission queue if
the HONOR_BROKER_PRIORITY database option is set to ON. If HONOR_BROKER_PRIORITY is set to
OFF, all messages put in the transmission queue for that database are assigned the default priority level of
5. Priority levels are not applied to a SEND where the messages are put directly into a service queue in the
same instance of the Database Engine.
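
A minimal sketch of enabling the option; ExpensesDB is an assumed database name:

ALTER DATABASE ExpensesDB SET HONOR_BROKER_PRIORITY ON ;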
The SEND statement separately locks each conversation on which a message is sent to ensure per-conversation
ordered delivery.
SEND is not valid in a user-defined function.

Permissions
To send a message, the current user must have RECEIVE permission on the queue of every service that sends the
message.

Examples
The following example starts a dialog and sends an XML message on the dialog. To send the message, the
example converts the xml object to varbinary(max).
DECLARE @dialog_handle UNIQUEIDENTIFIER,
@ExpenseReport XML ;

SET @ExpenseReport = < construct message as appropriate for the application > ;

BEGIN DIALOG @dialog_handle
FROM SERVICE [//Adventure-Works.com/Expenses/ExpenseClient]
TO SERVICE '//Adventure-Works.com/Expenses'
ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseProcessing] ;

SEND ON CONVERSATION @dialog_handle
MESSAGE TYPE [//Adventure-Works.com/Expenses/SubmitExpense]
(@ExpenseReport) ;

The following example starts three dialogs and sends an XML message on each of them.

DECLARE @dialog_handle1 UNIQUEIDENTIFIER,
        @dialog_handle2 UNIQUEIDENTIFIER,
        @dialog_handle3 UNIQUEIDENTIFIER,
        @OrderMsg XML ;

SET @OrderMsg = < construct message as appropriate for the application > ;

BEGIN DIALOG @dialog_handle1
FROM SERVICE [//InitiatorDB/InitiatorService]
TO SERVICE '//TargetDB1/TargetService'
ON CONTRACT [//AllDBs/OrderProcessing] ;

BEGIN DIALOG @dialog_handle2
FROM SERVICE [//InitiatorDB/InitiatorService]
TO SERVICE '//TargetDB2/TargetService'
ON CONTRACT [//AllDBs/OrderProcessing] ;

BEGIN DIALOG @dialog_handle3
FROM SERVICE [//InitiatorDB/InitiatorService]
TO SERVICE '//TargetDB3/TargetService'
ON CONTRACT [//AllDBs/OrderProcessing] ;

SEND ON CONVERSATION (@dialog_handle1, @dialog_handle2, @dialog_handle3)
MESSAGE TYPE [//AllDBs/OrderMsg]
(@OrderMsg) ;

See Also
BEGIN DIALOG CONVERSATION (Transact-SQL )
END CONVERSATION (Transact-SQL )
RECEIVE (Transact-SQL )
sys.transmission_queue (Transact-SQL )
SET Statements (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
The Transact-SQL programming language provides several SET statements that change the current session
handling of specific information. The SET statements are grouped into the categories shown in the following
table.
For information about setting local variables with the SET statement, see SET @local_variable (Transact-SQL).

CATEGORY STATEMENTS

Date and time statements SET DATEFIRST

SET DATEFORMAT

Locking statements SET DEADLOCK_PRIORITY

SET LOCK_TIMEOUT

Miscellaneous statements SET CONCAT_NULL_YIELDS_NULL

SET CURSOR_CLOSE_ON_COMMIT

SET FIPS_FLAGGER

SET IDENTITY_INSERT

SET LANGUAGE

SET OFFSETS

SET QUOTED_IDENTIFIER

Query Execution Statements SET ARITHABORT

SET ARITHIGNORE

SET FMTONLY

Note: This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.

SET NOCOUNT

SET NOEXEC

SET NUMERIC_ROUNDABORT

SET PARSEONLY

SET QUERY_GOVERNOR_COST_LIMIT

SET ROWCOUNT

SET TEXTSIZE

ISO Settings statements SET ANSI_DEFAULTS

SET ANSI_NULL_DFLT_OFF

SET ANSI_NULL_DFLT_ON

SET ANSI_NULLS

SET ANSI_PADDING

SET ANSI_WARNINGS

Statistics statements SET FORCEPLAN

SET SHOWPLAN_ALL

SET SHOWPLAN_TEXT

SET SHOWPLAN_XML

SET STATISTICS IO

SET STATISTICS XML

SET STATISTICS PROFILE

SET STATISTICS TIME

Transactions statements SET IMPLICIT_TRANSACTIONS

SET REMOTE_PROC_TRANSACTIONS

SET TRANSACTION ISOLATION LEVEL

SET XACT_ABORT
Considerations When You Use the SET Statements
All SET statements are implemented at execute or run time, except for SET FIPS_FLAGGER, SET
OFFSETS, SET PARSEONLY, and SET QUOTED_IDENTIFIER. These statements are implemented at
parse time.
If a SET statement is run in a stored procedure or trigger, the value of the SET option is restored after
control is returned from the stored procedure or trigger. Also, if a SET statement is specified in a
dynamic SQL string that is run by using either sp_executesql or EXECUTE, the value of the SET
option is restored after control is returned from the batch specified in the dynamic SQL string.
Stored procedures execute with the SET settings specified at execute time except for SET
ANSI_NULLS and SET QUOTED_IDENTIFIER. Stored procedures specifying SET ANSI_NULLS or
SET QUOTED_IDENTIFIER use the setting specified at stored procedure creation time. If SET
ANSI_NULLS or SET QUOTED_IDENTIFIER is issued inside a stored procedure, the setting is ignored.
The user options setting of sp_configure allows for server-wide settings and works across multiple
databases. This setting also behaves like an explicit SET statement, except that it occurs at login time.
Database settings set by using ALTER DATABASE are valid only at the database level and take effect
only if explicitly set. Database settings override instance option settings that are set by using
sp_configure.
For any one of the SET statements with ON and OFF settings, you can specify either an ON or OFF
setting for multiple SET options.

NOTE
This does not apply to the statistics related SET options.

For example, SET QUOTED_IDENTIFIER, ANSI_NULLS ON sets both QUOTED_IDENTIFIER and
ANSI_NULLS to ON.
SET statement settings override equivalent database option settings that are set by using ALTER
DATABASE. For example, the value specified in a SET ANSI_NULLS statement will override the
database setting for ANSI_NULLS. Additionally, some connection settings are automatically set ON
when a user connects to a database based on the values put into effect by the previous use of the
sp_configure user options setting, or the values that apply to all ODBC and OLE DB connections.
ALTER, CREATE and DROP DATABASE statements do not honor the SET LOCK_TIMEOUT setting.
When a global or shortcut SET statement, such as SET ANSI_DEFAULTS, sets several settings,
issuing the shortcut SET statement resets the previous settings for all those options affected by the
shortcut SET statement. If an individual SET option that is affected by a shortcut SET statement is
explicitly set after the shortcut SET statement is issued, the individual SET statement overrides the
corresponding shortcut settings.
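
A minimal sketch of that interaction:

SET ANSI_DEFAULTS ON ;            -- shortcut: sets ANSI_NULLS, ANSI_PADDING, CURSOR_CLOSE_ON_COMMIT, and other options to ON
SET CURSOR_CLOSE_ON_COMMIT OFF ;  -- an individual option issued afterward overrides the corresponding shortcut setting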
When batches are used, the database context is determined by the batch established by using the USE
statement. Ad hoc queries and all other statements that are executed outside the stored procedure
and that are in batches inherit the option settings of the database and connection established by the
USE statement.
Multiple Active Result Set (MARS) requests share a global state that contains the most recent session
SET option settings. When each request executes it can modify the SET options. The changes are
specific to the request context in which they are set, and do not affect other concurrent MARS
requests. However, after the request execution is completed, the new SET options are copied to the
global session state. New requests that execute under the same session after this change will use
these new SET option settings.
When a stored procedure is executed, either from a batch or from another stored procedure, it is
executed under the option values that are currently set in the database that contains the stored
procedure. For example, when stored procedure db1.dbo.sp1 calls stored procedure db2.dbo.sp2,
stored procedure sp1 is executed under the current compatibility level setting of database db1, and
stored procedure sp2 is executed under the current compatibility level setting of database db2.
When a Transact-SQL statement refers to objects that reside in multiple databases, the current
database context and the current connection context apply to that statement. In this case, if the
Transact-SQL statement is in a batch, the current connection context is the database defined by the USE
statement; if the Transact-SQL statement is in a stored procedure, the connection context is the
database that contains the stored procedure.
When you are creating and manipulating indexes on computed columns or indexed views, the SET
options ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER, ANSI_NULLS,
ANSI_PADDING, and ANSI_WARNINGS must be set to ON. The option
NUMERIC_ROUNDABORT must be set to OFF.
If any one of these options is not set to the required values, INSERT, UPDATE, DELETE, DBCC
CHECKDB and DBCC CHECKTABLE actions on indexed views or tables with indexes on computed
columns will fail. SQL Server will raise an error listing all the options that are incorrectly set. Also,
SQL Server will process SELECT statements on these tables or indexed views as if the indexes on
computed columns or on the views do not exist.
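
A minimal sketch of putting a session into the required state before creating such an index:

SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS,
    ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON ;
SET NUMERIC_ROUNDABORT OFF ;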
SET ANSI_DEFAULTS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls a group of SQL Server settings that collectively specify some ISO standard behavior.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

SET ANSI_DEFAULTS { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET ANSI_DEFAULTS ON

Remarks
SET ANSI_DEFAULTS is a server-side setting that the client does not modify. The client manages its own settings.
By default, these settings are the opposite of the server setting. Users should not modify the server setting. To
change the client behavior, users should use SQL_COPT_SS_PRESERVE_CURSORS. For more information,
see SQLSetConnectAttr.
When enabled (ON), this option enables the following ISO settings:

SET ANSI_NULLS
SET ANSI_NULL_DFLT_ON
SET ANSI_PADDING
SET ANSI_WARNINGS
SET CURSOR_CLOSE_ON_COMMIT
SET IMPLICIT_TRANSACTIONS
SET QUOTED_IDENTIFIER

Together, these ISO standard SET options define the query processing environment for the duration of the work
session of the user, a running trigger, or a stored procedure. However, these SET options do not include all the
options required to comply with the ISO standard.
When dealing with indexes on computed columns and indexed views, four of these defaults (ANSI_NULLS,
ANSI_PADDING, ANSI_WARNINGS, and QUOTED_IDENTIFIER) must be set to ON. These defaults are among
seven SET options that must be assigned the required values when you are creating and changing indexes on
computed columns and indexed views. The other SET options are ARITHABORT (ON),
CONCAT_NULL_YIELDS_NULL (ON), and NUMERIC_ROUNDABORT (OFF). For more information about the
required SET option settings with indexed views and indexes on computed columns, see "Considerations When
You Use the SET Statements" in SET Statements (Transact-SQL).
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server
automatically set ANSI_DEFAULTS to ON when connecting. The driver and Provider then set
CURSOR_CLOSE_ON_COMMIT and IMPLICIT_TRANSACTIONS to OFF. The OFF settings for SET
CURSOR_CLOSE_ON_COMMIT and SET IMPLICIT_TRANSACTIONS can be configured in ODBC data
sources, in ODBC connection attributes, or in OLE DB connection properties that are set in the application before
connecting to SQL Server. The default for SET ANSI_DEFAULTS is OFF for connections from DB-Library
applications.
When SET ANSI_DEFAULTS is issued, SET QUOTED_IDENTIFIER is set at parse time, and the following options
are set at execute time:

SET ANSI_NULLS
SET ANSI_NULL_DFLT_ON
SET ANSI_PADDING
SET ANSI_WARNINGS
SET CURSOR_CLOSE_ON_COMMIT
SET IMPLICIT_TRANSACTIONS

Permissions
Requires membership in the public role.

Examples
The following example sets SET ANSI_DEFAULTS ON and uses the DBCC USEROPTIONS statement to display the
settings that are affected.

-- SET ANSI_DEFAULTS ON.
SET ANSI_DEFAULTS ON;
GO
-- Display the current settings.
DBCC USEROPTIONS;
GO
-- SET ANSI_DEFAULTS OFF.
SET ANSI_DEFAULTS OFF;
GO

See Also
DBCC USEROPTIONS (Transact-SQL )
SET Statements (Transact-SQL )
SET ANSI_NULL_DFLT_ON (Transact-SQL )
SET ANSI_NULLS (Transact-SQL )
SET ANSI_PADDING (Transact-SQL )
SET ANSI_WARNINGS (Transact-SQL )
SET CURSOR_CLOSE_ON_COMMIT (Transact-SQL )
SET IMPLICIT_TRANSACTIONS (Transact-SQL )
SET QUOTED_IDENTIFIER (Transact-SQL )
SET ANSI_NULL_DFLT_OFF (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters the behavior of the session to override default nullability of new columns when the ANSI null default option
for the database is true. For more information about setting the value for ANSI null default, see ALTER
DATABASE (Transact-SQL ).
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

SET ANSI_NULL_DFLT_OFF { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET ANSI_NULL_DFLT_OFF OFF

Remarks
This setting only affects the nullability of new columns when the nullability of the column is not specified in the
CREATE TABLE and ALTER TABLE statements. By default, when SET ANSI_NULL_DFLT_OFF is ON, new
columns that are created by using the ALTER TABLE and CREATE TABLE statements are NOT NULL if the
nullability status of the column is not explicitly specified. SET ANSI_NULL_DFLT_OFF does not affect columns
that are created by using an explicit NULL or NOT NULL.
Both SET ANSI_NULL_DFLT_OFF and SET ANSI_NULL_DFLT_ON cannot be set ON at the same time. If one
option is set ON, the other option is set OFF. Therefore, either ANSI_NULL_DFLT_OFF or SET
ANSI_NULL_DFLT_ON can be set ON, or both can be set OFF. If either option is ON, that setting (SET
ANSI_NULL_DFLT_OFF or SET ANSI_NULL_DFLT_ON ) takes effect. If both options are set OFF, SQL Server
uses the value of the is_ansi_null_default_on column in the sys.databases catalog view.
For a more reliable operation of Transact-SQL scripts that are used in databases with different nullability settings,
it is better to always specify NULL or NOT NULL in CREATE TABLE and ALTER TABLE statements.
The setting of SET ANSI_NULL_DFLT_OFF is set at execute or run time and not at parse time.
To view the current value of this setting, run the following query.

DECLARE @ANSI_NULL_DFLT_OFF VARCHAR(3) = 'OFF';
IF ( (2048 & @@OPTIONS) = 2048 ) SET @ANSI_NULL_DFLT_OFF = 'ON';
SELECT @ANSI_NULL_DFLT_OFF AS ANSI_NULL_DFLT_OFF;

Permissions
Requires membership in the public role.
Examples
The following example shows the effects of SET ANSI_NULL_DFLT_OFF with both settings for the ANSI null default
database option.
USE AdventureWorks2012;
GO

-- Set the 'ANSI null default' database option to true by executing
-- ALTER DATABASE.
GO
ALTER DATABASE AdventureWorks2012 SET ANSI_NULL_DEFAULT ON;
GO
-- Create table t1.
CREATE TABLE t1 (a TINYINT);
GO
-- NULL INSERT should succeed.
INSERT INTO t1 (a) VALUES (NULL);
GO

-- SET ANSI_NULL_DFLT_OFF to ON and create table t2.
SET ANSI_NULL_DFLT_OFF ON;
GO
CREATE TABLE t2 (a TINYINT);
GO
-- NULL INSERT should fail.
INSERT INTO t2 (a) VALUES (NULL);
GO

-- SET ANSI_NULL_DFLT_OFF to OFF and create table t3.
SET ANSI_NULL_DFLT_OFF OFF;
GO
CREATE TABLE t3 (a TINYINT) ;
GO
-- NULL INSERT should succeed.
INSERT INTO t3 (a) VALUES (NULL);
GO

-- This illustrates the effect of having both the database
-- option and SET option disabled.
-- Set the 'ANSI null default' database option to false.
ALTER DATABASE AdventureWorks2012 SET ANSI_NULL_DEFAULT OFF;
GO
-- Create table t4.
CREATE TABLE t4 (a tinyint) ;
GO
-- NULL INSERT should fail.
INSERT INTO t4 (a) VALUES (null);
GO

-- SET ANSI_NULL_DFLT_OFF to ON and create table t5.
SET ANSI_NULL_DFLT_OFF ON;
GO
CREATE TABLE t5 (a tinyint);
GO
-- NULL insert should fail.
INSERT INTO t5 (a) VALUES (null);
GO

-- SET ANSI_NULL_DFLT_OFF to OFF and create table t6.
SET ANSI_NULL_DFLT_OFF OFF;
GO
CREATE TABLE t6 (a tinyint);
GO
-- NULL insert should fail.
INSERT INTO t6 (a) VALUES (null);
GO

-- Drop tables t1 through t6.
DROP TABLE t1, t2, t3, t4, t5, t6;
See Also
ALTER TABLE (Transact-SQL )
CREATE TABLE (Transact-SQL )
SET Statements (Transact-SQL )
SET ANSI_NULL_DFLT_ON (Transact-SQL )
SET ANSI_NULL_DFLT_ON (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the behavior of the session to override default nullability of new columns when the ANSI null default
option for the database is false. For more information about setting the value for ANSI null default, see ALTER
DATABASE (Transact-SQL ).
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

SET ANSI_NULL_DFLT_ON {ON | OFF}

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET ANSI_NULL_DFLT_ON ON

Remarks
This setting only affects the nullability of new columns when the nullability of the column is not specified in the
CREATE TABLE and ALTER TABLE statements. When SET ANSI_NULL_DFLT_ON is ON, new columns created
by using the ALTER TABLE and CREATE TABLE statements allow null values if the nullability status of the column
is not explicitly specified. SET ANSI_NULL_DFLT_ON does not affect columns created with an explicit NULL or
NOT NULL.
Both SET ANSI_NULL_DFLT_OFF and SET ANSI_NULL_DFLT_ON cannot be set ON at the same time. If one
option is set ON, the other option is set OFF. Therefore, either ANSI_NULL_DFLT_OFF or
ANSI_NULL_DFLT_ON can be set ON, or both can be set OFF. If either option is ON, that setting (SET
ANSI_NULL_DFLT_OFF or SET ANSI_NULL_DFLT_ON ) takes effect. If both options are set OFF, SQL Server
uses the value of the is_ansi_null_default_on column in the sys.databases catalog view.
For a more reliable operation of Transact-SQL scripts that are used in databases with different nullability settings,
it is better to specify NULL or NOT NULL in CREATE TABLE and ALTER TABLE statements.
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server
automatically set ANSI_NULL_DFLT_ON to ON when connecting. The default for SET ANSI_NULL_DFLT_ON is
OFF for connections from DB-Library applications.
When SET ANSI_DEFAULTS is ON, SET ANSI_NULL_DFLT_ON is enabled.
The setting of SET ANSI_NULL_DFLT_ON is set at execute or run time and not at parse time.
The setting of SET ANSI_NULL_DFLT_ON does not apply when tables are created using the SELECT INTO
statement.
To view the current value of this setting, run the following query.
DECLARE @ANSI_NULL_DFLT_ON VARCHAR(3) = 'OFF';
IF ( (1024 & @@OPTIONS) = 1024 ) SET @ANSI_NULL_DFLT_ON = 'ON';
SELECT @ANSI_NULL_DFLT_ON AS ANSI_NULL_DFLT_ON;

Permissions
Requires membership in the public role.

Examples
The following example shows the effects of SET ANSI_NULL_DFLT_ON with both settings for the ANSI null default
database option.

USE AdventureWorks2012;
GO

-- The code from this point on demonstrates that SET ANSI_NULL_DFLT_ON
-- has an effect when the 'ANSI null default' for the database is false.
-- Set the 'ANSI null default' database option to false by executing
-- ALTER DATABASE.
ALTER DATABASE AdventureWorks2012 SET ANSI_NULL_DEFAULT OFF;
GO
-- Create table t1.
CREATE TABLE t1 (a TINYINT) ;
GO
-- NULL INSERT should fail.
INSERT INTO t1 (a) VALUES (NULL);
GO

-- SET ANSI_NULL_DFLT_ON to ON and create table t2.
SET ANSI_NULL_DFLT_ON ON;
GO
CREATE TABLE t2 (a TINYINT);
GO
-- NULL insert should succeed.
INSERT INTO t2 (a) VALUES (NULL);
GO

-- SET ANSI_NULL_DFLT_ON to OFF and create table t3.
SET ANSI_NULL_DFLT_ON OFF;
GO
CREATE TABLE t3 (a TINYINT);
GO
-- NULL insert should fail.
INSERT INTO t3 (a) VALUES (NULL);
GO

-- The code from this point on demonstrates that SET ANSI_NULL_DFLT_ON
-- has no effect when the 'ANSI null default' for the database is true.
-- Set the 'ANSI null default' database option to true.
ALTER DATABASE AdventureWorks2012 SET ANSI_NULL_DEFAULT ON
GO

-- Create table t4.
CREATE TABLE t4 (a TINYINT);
GO
-- NULL INSERT should succeed.
INSERT INTO t4 (a) VALUES (NULL);
GO

-- SET ANSI_NULL_DFLT_ON to ON and create table t5.
SET ANSI_NULL_DFLT_ON ON;
GO
CREATE TABLE t5 (a TINYINT);
GO
-- NULL INSERT should succeed.
INSERT INTO t5 (a) VALUES (NULL);
GO

-- SET ANSI_NULL_DFLT_ON to OFF and create table t6.
SET ANSI_NULL_DFLT_ON OFF;
GO
CREATE TABLE t6 (a TINYINT);
GO
-- NULL INSERT should succeed.
INSERT INTO t6 (a) VALUES (NULL);
GO

-- Set the 'ANSI null default' database option back to false.
ALTER DATABASE AdventureWorks2012 SET ANSI_NULL_DEFAULT OFF;
GO

-- Drop tables t1 through t6.
DROP TABLE t1,t2,t3,t4,t5,t6;

See Also
ALTER TABLE (Transact-SQL )
CREATE TABLE (Transact-SQL )
SET Statements (Transact-SQL )
SET ANSI_DEFAULTS (Transact-SQL )
SET ANSI_NULL_DFLT_OFF (Transact-SQL )
SET ANSI_NULLS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies ISO compliant behavior of the Equals (=) and Not Equal To (<>) comparison operators when they are
used with null values in SQL Server 2017.

IMPORTANT
In a future version of SQL Server, ANSI_NULLS will be ON and any applications that explicitly set the option to OFF will
generate an error. Avoid using this feature in new development work, and plan to modify applications that currently use this
feature.

Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

SET ANSI_NULLS { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET ANSI_NULLS ON

Remarks
When SET ANSI_NULLS is ON, a SELECT statement that uses WHERE column_name = NULL returns zero rows
even if there are null values in column_name. A SELECT statement that uses WHERE column_name <> NULL
returns zero rows even if there are nonnull values in column_name.
When SET ANSI_NULLS is OFF, the Equals (=) and Not Equal To (<>) comparison operators do not follow the
ISO standard. A SELECT statement that uses WHERE column_name = NULL returns the rows that have null
values in column_name. A SELECT statement that uses WHERE column_name <> NULL returns the rows that
have nonnull values in the column. Also, a SELECT statement that uses WHERE column_name <> XYZ_value
returns all rows that are not XYZ_value and that are not NULL.
When SET ANSI_NULLS is ON, all comparisons against a null value evaluate to UNKNOWN. When SET
ANSI_NULLS is OFF, comparisons of all data against a null value evaluate to TRUE if the data value is NULL. If
SET ANSI_NULLS is not specified, the setting of the ANSI_NULLS option of the current database applies. For
more information about the ANSI_NULLS database option, see ALTER DATABASE (Transact-SQL ).
The following table shows how the setting of ANSI_NULLS affects the results of a number of Boolean expressions
using null and non-null values.
BOOLEAN EXPRESSION SET ANSI_NULLS ON SET ANSI_NULLS OFF

NULL = NULL UNKNOWN TRUE

1 = NULL UNKNOWN FALSE

NULL <> NULL UNKNOWN FALSE

1 <> NULL UNKNOWN TRUE

NULL > NULL UNKNOWN UNKNOWN

1 > NULL UNKNOWN UNKNOWN

NULL IS NULL TRUE TRUE

1 IS NULL FALSE FALSE

NULL IS NOT NULL FALSE FALSE

1 IS NOT NULL TRUE TRUE

SET ANSI_NULLS ON affects a comparison only if one of the operands of the comparison is either a variable that
is NULL or a literal NULL. If both sides of the comparison are columns or compound expressions, the setting does
not affect the comparison.
For a script to work as intended, regardless of the ANSI_NULLS database option or the setting of SET
ANSI_NULLS, use IS NULL and IS NOT NULL in comparisons that might contain null values.
SET ANSI_NULLS should be set to ON for executing distributed queries.
SET ANSI_NULLS must also be ON when you are creating or changing indexes on computed columns or indexed
views. If SET ANSI_NULLS is OFF, any CREATE, UPDATE, INSERT, and DELETE statements on tables with
indexes on computed columns or indexed views will fail. SQL Server returns an error that lists all SET options that
violate the required values. Also, when you execute a SELECT statement, if SET ANSI_NULLS is OFF, SQL Server
ignores the index values on computed columns or views and resolves the select operation as if there were no such
indexes on the tables or views.

NOTE
ANSI_NULLS is one of seven SET options that must be set to required values when dealing with indexes on computed
columns or indexed views. The options ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, QUOTED_IDENTIFIER, and
CONCAT_NULL_YIELDS_NULL must also be set to ON, and NUMERIC_ROUNDABORT must be set to OFF.

The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server
automatically set ANSI_NULLS to ON when connecting. This setting can be configured in ODBC data sources, in
ODBC connection attributes, or in OLE DB connection properties that are set in the application before connecting
to an instance of SQL Server. The default for SET ANSI_NULLS is OFF.
When SET ANSI_DEFAULTS is ON, SET ANSI_NULLS is enabled.
The setting of SET ANSI_NULLS is set at execute or run time and not at parse time.
To view the current value of this setting, run the following query:
DECLARE @ANSI_NULLS VARCHAR(3) = 'OFF';
IF ( (32 & @@OPTIONS) = 32 ) SET @ANSI_NULLS = 'ON';
SELECT @ANSI_NULLS AS ANSI_NULLS;

Permissions
Requires membership in the public role.

Examples
The following example uses the Equals ( = ) and Not Equal To ( <> ) comparison operators to make comparisons
with NULL and nonnull values in a table. The example also shows that IS NULL is not affected by the
SET ANSI_NULLS setting.
-- Create table t1 and insert values.
CREATE TABLE dbo.t1 (a INT NULL);
INSERT INTO dbo.t1 values (NULL),(0),(1);
GO

-- Print message and perform SELECT statements.
PRINT 'Testing default setting';
DECLARE @varname int;
SET @varname = NULL;

SELECT a
FROM t1
WHERE a = @varname;

SELECT a
FROM t1
WHERE a <> @varname;

SELECT a
FROM t1
WHERE a IS NULL;
GO

-- SET ANSI_NULLS to ON and test.
PRINT 'Testing ANSI_NULLS ON';
SET ANSI_NULLS ON;
GO
DECLARE @varname int;
SET @varname = NULL

SELECT a
FROM t1
WHERE a = @varname;

SELECT a
FROM t1
WHERE a <> @varname;

SELECT a
FROM t1
WHERE a IS NULL;
GO

-- SET ANSI_NULLS to OFF and test.
PRINT 'Testing SET ANSI_NULLS OFF';
SET ANSI_NULLS OFF;
GO
DECLARE @varname int;
SET @varname = NULL;
SELECT a
FROM t1
WHERE a = @varname;

SELECT a
FROM t1
WHERE a <> @varname;

SELECT a
FROM t1
WHERE a IS NULL;
GO

-- Drop table t1.
DROP TABLE dbo.t1;
See Also
SET Statements (Transact-SQL )
SESSIONPROPERTY (Transact-SQL )
= (Equals) (Transact-SQL )
IF...ELSE (Transact-SQL )
<> (Not Equal To) (Transact-SQL )
SET ANSI_DEFAULTS (Transact-SQL )
WHERE (Transact-SQL )
WHILE (Transact-SQL )
SET ANSI_PADDING (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls the way the column stores values shorter than the defined size of the column, and the way the column
stores values that have trailing blanks in char, varchar, binary, and varbinary data.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

SET ANSI_PADDING { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET ANSI_PADDING ON

Remarks
Columns defined with char, varchar, binary, and varbinary data types have a defined size.
This setting affects only the definition of new columns. After the column is created, SQL Server stores the values
based on the setting when the column was created. Existing columns are not affected by a later change to this
setting.

NOTE
We recommend that ANSI_PADDING always be set to ON.

The following list shows the effects of the SET ANSI_PADDING setting when values are inserted into columns
with char, varchar, binary, and varbinary data types.

Setting ON
char(n) NOT NULL or binary(n) NOT NULL: Pads the original value (with trailing blanks for char columns and with trailing zeros for binary columns) to the length of the column.
char(n) NULL or binary(n) NULL: Follows the same rules as for char(n) or binary(n) NOT NULL when SET ANSI_PADDING is ON.
varchar(n) or varbinary(n): Trailing blanks in character values inserted into varchar columns are not trimmed. Trailing zeros in binary values inserted into varbinary columns are not trimmed. Values are not padded to the length of the column.

Setting OFF
char(n) NOT NULL or binary(n) NOT NULL: Pads the original value (with trailing blanks for char columns and with trailing zeros for binary columns) to the length of the column.
char(n) NULL or binary(n) NULL: Follows the same rules as for varchar or varbinary when SET ANSI_PADDING is OFF.
varchar(n) or varbinary(n): Trailing blanks in character values inserted into a varchar column are trimmed. Trailing zeros in binary values inserted into a varbinary column are trimmed.

NOTE
When padded, char columns are padded with blanks, and binary columns are padded with zeros. When trimmed, char
columns have the trailing blanks trimmed, and binary columns have the trailing zeros trimmed.

SET ANSI_PADDING must be ON when you are creating or changing indexes on computed columns or indexed
views. For more information about required SET option settings with indexed views and indexes on computed
columns, see "Considerations When You Use the SET Statements" in SET Statements (Transact-SQL ).
The default for SET ANSI_PADDING is ON. The SQL Server Native Client ODBC driver and SQL Server Native
Client OLE DB Provider for SQL Server automatically set ANSI_PADDING to ON when connecting. This can be
configured in ODBC data sources, in ODBC connection attributes, or OLE DB connection properties set in the
application before connecting. The default for SET ANSI_PADDING is OFF for connections from DB-Library
applications.
The SET ANSI_PADDING setting does not affect the nchar, nvarchar, ntext, text, image, varbinary(max),
varchar(max), and nvarchar(max) data types. They always display the SET ANSI_PADDING ON behavior. This
means trailing spaces and zeros are not trimmed.
When SET ANSI_DEFAULTS is ON, SET ANSI_PADDING is enabled.
The setting of SET ANSI_PADDING is set at execute or run time and not at parse time.
To view the current value of this setting, run the following query.

DECLARE @ANSI_PADDING VARCHAR(3) = 'OFF';
IF ( (16 & @@OPTIONS) = 16 ) SET @ANSI_PADDING = 'ON';
SELECT @ANSI_PADDING AS ANSI_PADDING;

Permissions
Requires membership in the public role.

Examples
The following example shows how the setting affects each of these data types.
PRINT 'Testing with ANSI_PADDING ON'
SET ANSI_PADDING ON;
GO

CREATE TABLE t1 (
charcol CHAR(16) NULL,
varcharcol VARCHAR(16) NULL,
varbinarycol VARBINARY(8)
);
GO
INSERT INTO t1 VALUES ('No blanks', 'No blanks', 0x00ee);
INSERT INTO t1 VALUES ('Trailing blank ', 'Trailing blank ', 0x00ee00);

SELECT 'CHAR' = '>' + charcol + '<', 'VARCHAR' = '>' + varcharcol + '<',
    varbinarycol
FROM t1;
GO

PRINT 'Testing with ANSI_PADDING OFF';
SET ANSI_PADDING OFF;
GO

CREATE TABLE t2 (
charcol CHAR(16) NULL,
varcharcol VARCHAR(16) NULL,
varbinarycol VARBINARY(8)
);
GO
INSERT INTO t2 VALUES ('No blanks', 'No blanks', 0x00ee);
INSERT INTO t2 VALUES ('Trailing blank ', 'Trailing blank ', 0x00ee00);

SELECT 'CHAR' = '>' + charcol + '<', 'VARCHAR' = '>' + varcharcol + '<',
    varbinarycol
FROM t2;
GO

DROP TABLE t1;
DROP TABLE t2;

See Also
SET Statements (Transact-SQL )
SESSIONPROPERTY (Transact-SQL )
CREATE TABLE (Transact-SQL )
INSERT (Transact-SQL )
SET ANSI_DEFAULTS (Transact-SQL )
SET ANSI_WARNINGS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies ISO standard behavior for several error conditions.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

SET ANSI_WARNINGS { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET ANSI_WARNINGS ON

Remarks
SET ANSI_WARNINGS affects the following conditions:
When set to ON, if null values appear in aggregate functions, such as SUM, AVG, MAX, MIN, STDEV,
STDEVP, VAR, VARP, or COUNT, a warning message is generated. When set to OFF, no warning is issued.
When set to ON, divide-by-zero and arithmetic overflow errors cause the statement to be rolled back
and an error message is generated. When set to OFF, divide-by-zero and arithmetic overflow errors
cause null values to be returned.
When set to ON, if an INSERT or UPDATE is tried on a character, Unicode, or binary column in which
the length of a new value exceeds the maximum size of the column, the INSERT or UPDATE is canceled
as specified by the ISO standard. Trailing blanks are ignored for character columns and trailing nulls are
ignored for binary columns. When set to OFF, data is truncated to the size of the column and the
statement succeeds.

NOTE
When truncation occurs in any conversion to or from binary or varbinary data, no warning or error is issued,
regardless of SET options.

NOTE
ANSI_WARNINGS is not honored when passing parameters in a stored procedure, user-defined function, or when
declaring and setting variables in a batch statement. For example, if a variable is defined as char(3), and then set to a
value larger than three characters, the data is truncated to the defined size and the INSERT or UPDATE statement
succeeds.
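
A minimal sketch of the variable behavior described in the note:

DECLARE @c CHAR(3) ;
SET @c = 'ABCDE' ;  -- silently truncated to 'ABC'; no error or warning, even with SET ANSI_WARNINGS ON
SELECT @c ;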

You can use the user options option of sp_configure to set the default setting for ANSI_WARNINGS for all
connections to the server. For more information, see sp_configure (Transact-SQL ).
SET ANSI_WARNINGS must be ON when you are creating or manipulating indexes on computed
columns or indexed views. If SET ANSI_WARNINGS is OFF, CREATE, UPDATE, INSERT, and DELETE
statements on tables with indexes on computed columns or indexed views will fail. For more information
about required SET option settings with indexed views and indexes on computed columns, see
"Considerations When You Use the SET Statements" in SET Statements (Transact-SQL ).
SQL Server includes the ANSI_WARNINGS database option. This is equivalent to SET
ANSI_WARNINGS. When SET ANSI_WARNINGS is ON, errors or warnings are raised in divide-by-zero,
string too large for database column, and other similar errors. When SET ANSI_WARNINGS is OFF, these
errors and warnings are not raised. The default value in the model database for SET ANSI_WARNINGS is
OFF. If SET ANSI_WARNINGS is not specified, the setting of the ANSI_WARNINGS database option applies,
and SQL Server uses the value of the is_ansi_warnings_on column in the sys.databases catalog view.
ANSI_WARNINGS should be set to ON for executing distributed queries.
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL
Server automatically set ANSI_WARNINGS to ON when connecting. This can be configured in ODBC data
sources, in ODBC connection attributes, or in OLE DB connection properties that are set in the application
before connecting. The default for SET ANSI_WARNINGS is OFF for connections from DB-Library applications.
When SET ANSI_DEFAULTS is ON, SET ANSI_WARNINGS is enabled.
The setting of SET ANSI_WARNINGS is set at execute or run time and not at parse time.
If either SET ARITHABORT or SET ARITHIGNORE is OFF and SET ANSI_WARNINGS is ON, SQL Server
still returns an error message when encountering divide-by-zero or overflow errors.
To view the current value of this setting, run the following query.

DECLARE @ANSI_WARN VARCHAR(3) = 'OFF';


IF ( (8 & @@OPTIONS) = 8 ) SET @ANSI_WARN = 'ON';
SELECT @ANSI_WARN AS ANSI_WARNINGS;

Permissions
Requires membership in the public role.

Examples
The following example demonstrates the three situations that are previously mentioned, with the SET
ANSI_WARNINGS to ON and OFF.

USE AdventureWorks2012;
GO

CREATE TABLE T1
(
a int,
b int NULL,
c varchar(20)
);
GO

SET NOCOUNT ON;

INSERT INTO T1
VALUES (1, NULL, '')
,(1, 0, '')
,(2, 1, '')
,(2, 2, '');

SET NOCOUNT OFF;


GO

PRINT '**** Setting ANSI_WARNINGS ON';


GO

SET ANSI_WARNINGS ON;


GO

PRINT 'Testing NULL in aggregate';


GO
SELECT a, SUM(b)
FROM T1
GROUP BY a;
GO

PRINT 'Testing String Overflow in INSERT';


GO
INSERT INTO T1
VALUES (3, 3, 'Text string longer than 20 characters');
GO

PRINT 'Testing Divide by zero';


GO
SELECT a / b AS ab
FROM T1;
GO

PRINT '**** Setting ANSI_WARNINGS OFF';


GO
SET ANSI_WARNINGS OFF;
GO

PRINT 'Testing NULL in aggregate';


GO
SELECT a, SUM(b)
FROM T1
GROUP BY a;
GO

PRINT 'Testing String Overflow in INSERT';


GO
INSERT INTO T1
VALUES (4, 4, 'Text string longer than 20 characters');
GO
SELECT a, b, c
FROM T1
WHERE a = 4;
GO

PRINT 'Testing Divide by zero';


GO
SELECT a / b AS ab
FROM T1;
GO

DROP TABLE T1;

See Also
INSERT (Transact-SQL)
SELECT (Transact-SQL)
SET Statements (Transact-SQL)
SET ANSI_DEFAULTS (Transact-SQL)
SESSIONPROPERTY (Transact-SQL)
SET ARITHABORT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Terminates a query when an overflow or divide-by-zero error occurs during query execution.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

SET ARITHABORT { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET ARITHABORT ON

Remarks
You should always set ARITHABORT to ON in your logon sessions. Setting ARITHABORT to OFF can negatively
impact query optimization, leading to performance issues.

WARNING
The default ARITHABORT setting for SQL Server Management Studio is ON. Client applications setting ARITHABORT to OFF
can receive different query plans, making it difficult to troubleshoot poorly performing queries. That is, the same query can
execute fast in Management Studio but slow in the application. When troubleshooting queries with Management Studio,
always match the client ARITHABORT setting.

If SET ARITHABORT is ON and SET ANSI_WARNINGS is ON, these error conditions cause the query to
terminate.
If SET ARITHABORT is ON and SET ANSI_WARNINGS is OFF, these error conditions cause the batch to
terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one
of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic
operation.
If SET ARITHABORT is OFF and SET ANSI_WARNINGS is OFF and one of these errors occurs, a warning
message is displayed, and NULL is assigned to the result of the arithmetic operation.

NOTE
If neither SET ARITHABORT nor SET ARITHIGNORE is set, SQL Server returns NULL and returns a warning message after the
query is executed.

Setting ANSI_WARNINGS to ON implicitly sets ARITHABORT to ON when the database compatibility level is set
to 90 or higher. If the database compatibility level is set to 80 or earlier, the ARITHABORT option must be explicitly
set to ON.
During expression evaluation when SET ARITHABORT is OFF, if an INSERT, DELETE or UPDATE statement
encounters an arithmetic error, overflow, divide-by-zero, or a domain error, SQL Server inserts or updates a NULL
value. If the target column is not nullable, the insert or update action fails and the user receives an error.
If either SET ARITHABORT or SET ARITHIGNORE is OFF and SET ANSI_WARNINGS is ON, SQL Server still
returns an error message when encountering divide-by-zero or overflow errors.
If SET ARITHABORT is set to OFF and an abort error occurs during the evaluation of the Boolean condition of an
IF statement, the FALSE branch is executed.
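
A minimal sketch of that IF behavior, assuming both ARITHABORT and ANSI_WARNINGS are OFF so the divide-by-zero yields NULL and the condition is not true:

SET ARITHABORT OFF;
SET ANSI_WARNINGS OFF;
IF (1 / 0 > 0)
    PRINT 'TRUE branch';
ELSE
    PRINT 'FALSE branch'; -- this branch is executed
GO
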
SET ARITHABORT must be ON when you are creating or changing indexes on computed columns or indexed
views. If SET ARITHABORT is OFF, CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on
computed columns or indexed views will fail.
The setting of SET ARITHABORT is set at execute or run time and not at parse time.
To view the current value of this setting, run the following query:

DECLARE @ARITHABORT VARCHAR(3) = 'OFF';


IF ( (64 & @@OPTIONS) = 64 ) SET @ARITHABORT = 'ON';
SELECT @ARITHABORT AS ARITHABORT;

Permissions
Requires membership in the public role.

Examples
The following example demonstrates the divide-by-zero and overflow errors that have both SET ARITHABORT
settings.

-- SET ARITHABORT
-------------------------------------------------------------------------------
-- Create tables t1 and t2 and insert data values.
CREATE TABLE t1 (
a TINYINT,
b TINYINT
);
CREATE TABLE t2 (
a TINYINT
);
GO
INSERT INTO t1
VALUES (1, 0);
INSERT INTO t1
VALUES (255, 1);
GO

PRINT '*** SET ARITHABORT ON';


GO
-- SET ARITHABORT ON and testing.
SET ARITHABORT ON;
GO

PRINT '*** Testing divide by zero during SELECT';


GO
SELECT a / b AS ab
FROM t1;
GO
PRINT '*** Testing divide by zero during INSERT';
GO
INSERT INTO t2
SELECT a / b AS ab
FROM t1;
GO

PRINT '*** Testing tinyint overflow';


GO
INSERT INTO t2
SELECT a + b AS ab
FROM t1;
GO

PRINT '*** Resulting data - should be no data';


GO
SELECT *
FROM t2;
GO

-- Truncate table t2.


TRUNCATE TABLE t2;
GO

-- SET ARITHABORT OFF and testing.


PRINT '*** SET ARITHABORT OFF';
GO
SET ARITHABORT OFF;
GO

-- This works properly.


PRINT '*** Testing divide by zero during SELECT';
GO
SELECT a / b AS ab
FROM t1;
GO

-- This works as if SET ARITHABORT was ON.


PRINT '*** Testing divide by zero during INSERT';
GO
INSERT INTO t2
SELECT a / b AS ab
FROM t1;
GO
PRINT '*** Testing tinyint overflow';
GO
INSERT INTO t2
SELECT a + b AS ab
FROM t1;
GO

PRINT '*** Resulting data - should be 0 rows';


GO
SELECT *
FROM t2;
GO

-- Drop tables t1 and t2.


DROP TABLE t1;
DROP TABLE t2;
GO

See Also
SET Statements (Transact-SQL)
SET ARITHIGNORE (Transact-SQL)
SESSIONPROPERTY (Transact-SQL)
SET ARITHIGNORE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls whether error messages are returned from overflow or divide-by-zero errors during a query.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

SET ARITHIGNORE { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET ARITHIGNORE OFF

Remarks
The SET ARITHIGNORE setting only controls whether an error message is returned. SQL Server returns a NULL
in a calculation involving an overflow or divide-by-zero error, regardless of this setting. The SET ARITHABORT
setting can be used to determine whether the query is terminated. This setting does not affect errors occurring
during INSERT, UPDATE, and DELETE statements.
If either SET ARITHABORT or SET ARITHIGNORE is OFF and SET ANSI_WARNINGS is ON, SQL Server still
returns an error message when encountering divide-by-zero or overflow errors.
The setting of SET ARITHIGNORE is set at execute or run time and not at parse time.
To view the current value of this setting, run the following query.

DECLARE @ARITHIGNORE VARCHAR(3) = 'OFF';


IF ( (128 & @@OPTIONS) = 128 ) SET @ARITHIGNORE = 'ON';
SELECT @ARITHIGNORE AS ARITHIGNORE;

Permissions
Requires membership in the public role.

Examples
The following example demonstrates using both SET ARITHIGNORE settings with both types of query errors.
SET ARITHABORT OFF;
SET ANSI_WARNINGS OFF
GO

PRINT 'Setting ARITHIGNORE ON';


GO
-- SET ARITHIGNORE ON and testing.
SET ARITHIGNORE ON;
GO
SELECT 1 / 0 AS DivideByZero;
GO
SELECT CAST(256 AS TINYINT) AS Overflow;
GO

PRINT 'Setting ARITHIGNORE OFF';


GO
-- SET ARITHIGNORE OFF and testing.
SET ARITHIGNORE OFF;
GO
SELECT 1 / 0 AS DivideByZero;
GO
SELECT CAST(256 AS TINYINT) AS Overflow;
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


The following example demonstrates the divide by zero and the overflow errors. This example does not return an
error message for these errors because ARITHIGNORE is OFF.

-- SET ARITHIGNORE OFF and testing.


SET ARITHIGNORE OFF;
SELECT 1 / 0 AS DivideByZero;
SELECT CAST(256 AS TINYINT) AS Overflow;

See Also
SET Statements (Transact-SQL)
SET ARITHABORT (Transact-SQL)
SET CONCAT_NULL_YIELDS_NULL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls whether concatenation results are treated as null or empty string values.

IMPORTANT
In a future version of SQL Server, CONCAT_NULL_YIELDS_NULL will always be ON and any applications that explicitly set
the option to OFF will generate an error. Avoid using this feature in new development work, and plan to modify applications
that currently use this feature.

Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

SET CONCAT_NULL_YIELDS_NULL { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET CONCAT_NULL_YIELDS_NULL ON

Remarks
When SET CONCAT_NULL_YIELDS_NULL is ON, concatenating a null value with a string yields a NULL result.
For example, SELECT 'abc' + NULL yields NULL. When SET CONCAT_NULL_YIELDS_NULL is OFF, concatenating
a null value with a string yields the string itself (the null value is treated as an empty string). For example,
SELECT 'abc' + NULL yields abc.
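
Given the deprecation notice above, code that relies on the OFF behavior can instead handle nulls explicitly; a minimal sketch using CONCAT (which treats NULL as an empty string regardless of this setting) and ISNULL:

SELECT CONCAT('abc', NULL) AS Result1;      -- returns abc under either setting
SELECT 'abc' + ISNULL(NULL, '') AS Result2; -- explicit null handling with the + operator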

If SET CONCAT_NULL_YIELDS_NULL is not specified, the setting of the CONCAT_NULL_YIELDS_NULL
database option applies.

NOTE
SET CONCAT_NULL_YIELDS_NULL is the same setting as the CONCAT_NULL_YIELDS_NULL setting of ALTER DATABASE.

The setting of SET CONCAT_NULL_YIELDS_NULL is set at execute or run time and not at parse time.
SET CONCAT_NULL_YIELDS_NULL must be ON when you are creating or changing indexes on computed
columns or indexed views. If SET CONCAT_NULL_YIELDS_NULL is OFF, any CREATE, UPDATE, INSERT, and
DELETE statements on tables with indexes on computed columns or indexed views will fail. For more information
about required SET option settings with indexed views and indexes on computed columns, see "Considerations
When You Use the SET Statements" in SET Statements (Transact-SQL).
When CONCAT_NULL_YIELDS_NULL is set to OFF, string concatenation across server boundaries cannot occur.
To view the current value of this setting, run the following query.

DECLARE @CONCAT_NULL_YIELDS_NULL VARCHAR(3) = 'OFF';


IF ( (4096 & @@OPTIONS) = 4096 ) SET @CONCAT_NULL_YIELDS_NULL = 'ON';
SELECT @CONCAT_NULL_YIELDS_NULL AS CONCAT_NULL_YIELDS_NULL;

Examples
The following example demonstrates both SET CONCAT_NULL_YIELDS_NULL settings.

PRINT 'Setting CONCAT_NULL_YIELDS_NULL ON';


GO
-- SET CONCAT_NULL_YIELDS_NULL ON and testing.
SET CONCAT_NULL_YIELDS_NULL ON;
GO
SELECT 'abc' + NULL ;
GO

-- SET CONCAT_NULL_YIELDS_NULL OFF and testing.


SET CONCAT_NULL_YIELDS_NULL OFF;
GO
SELECT 'abc' + NULL;
GO

See Also
SET Statements (Transact-SQL)
SESSIONPROPERTY (Transact-SQL)
SET CONTEXT_INFO (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Associates up to 128 bytes of binary information with the current session or connection.
Transact-SQL Syntax Conventions

Syntax
SET CONTEXT_INFO { binary_str | @binary_var }

Arguments
binary_str
Is a binary constant, or a constant that is implicitly convertible to binary, to associate with the current session or
connection.
@binary_var
Is a varbinary or binary variable holding a context value to associate with the current session or connection.

Remarks
The preferred way to retrieve the context information for the current session is to use the CONTEXT_INFO
function. Session context information is also stored in the context_info columns in the following system views:
sys.dm_exec_requests
sys.dm_exec_sessions
sys.sysprocesses
SET CONTEXT_INFO cannot be specified in a user-defined function. You cannot supply a null value to SET
CONTEXT_INFO because the views holding the values do not allow for null values.
SET CONTEXT_INFO does not accept expressions other than constants or variable names. To set the
context information to the result of a function call, you must first include the result of the function call in a
binary or varbinary variable.
When you issue SET CONTEXT_INFO in a stored procedure or trigger, unlike in other SET statements, the
new value set for the context information persists after the stored procedure or trigger is completed.
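
A minimal sketch of that persistence (the procedure name is hypothetical):

CREATE PROCEDURE dbo.usp_SetContext
AS
SET CONTEXT_INFO 0x99;
GO
EXEC dbo.usp_SetContext;
-- The value set inside the procedure is still associated with the session.
SELECT CONTEXT_INFO() AS ContextAfterProc;
GO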

Examples
A. Setting context information by using a constant
The following example demonstrates SET CONTEXT_INFO by setting the value and displaying the results. Note that
querying sys.dm_exec_sessions requires SELECT and VIEW SERVER STATE permissions, whereas using the
CONTEXT_INFO function does not.
SET CONTEXT_INFO 0x01010101;
GO
SELECT context_info
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
GO

B. Setting context information by using a function


The following example demonstrates using the output of a function to set the context value, where the value from
the function must be first placed in a binary variable.

DECLARE @BinVar varbinary(128);


SET @BinVar = CAST(REPLICATE( 0x20, 128 ) AS varbinary(128) );
SET CONTEXT_INFO @BinVar;

SELECT CONTEXT_INFO() AS MyContextInfo;


GO

See Also
SET Statements (Transact-SQL)
sys.dm_exec_requests (Transact-SQL)
sys.dm_exec_sessions (Transact-SQL)
CONTEXT_INFO (Transact-SQL)
SET CURSOR_CLOSE_ON_COMMIT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls the behavior of the Transact-SQL COMMIT TRANSACTION statement. The default value for this setting
is OFF. This means that the server will not close cursors when you commit a transaction.
Transact-SQL Syntax Conventions

Syntax
SET CURSOR_CLOSE_ON_COMMIT { ON | OFF }

Remarks
When SET CURSOR_CLOSE_ON_COMMIT is ON, this setting closes any open cursors on commit or rollback in
compliance with ISO. When SET CURSOR_CLOSE_ON_COMMIT is OFF, the cursor is not closed when a
transaction is committed.

NOTE
SET CURSOR_CLOSE_ON_COMMIT to ON will not close open cursors on rollback when the rollback is applied to a
savepoint_name from a SAVE TRANSACTION statement.

When SET CURSOR_CLOSE_ON_COMMIT is OFF, a ROLLBACK statement closes only open asynchronous
cursors that are not fully populated. STATIC or INSENSITIVE cursors that were opened after modifications were
made will no longer reflect the state of the data if the modifications are rolled back.
SET CURSOR_CLOSE_ON_COMMIT controls the same behavior as the CURSOR_CLOSE_ON_COMMIT
database option. If CURSOR_CLOSE_ON_COMMIT is set to ON or OFF, that setting is used on the connection. If
SET CURSOR_CLOSE_ON_COMMIT has not been specified, the value in the is_cursor_close_on_commit_on
column in the sys.databases catalog view applies.
The SQL Server Native Client OLE DB Provider for SQL Server and the SQL Server Native Client ODBC driver
both set CURSOR_CLOSE_ON_COMMIT to OFF when they connect. DB -Library does not automatically set the
CURSOR_CLOSE_ON_COMMIT value.
When SET ANSI_DEFAULTS is ON, SET CURSOR_CLOSE_ON_COMMIT is enabled.
The setting of SET CURSOR_CLOSE_ON_COMMIT is set at execute or run time and not at parse time.
To view the current value of this setting, run the following query.

DECLARE @CURSOR_CLOSE VARCHAR(3) = 'OFF';


IF ( (4 & @@OPTIONS) = 4 ) SET @CURSOR_CLOSE = 'ON';
SELECT @CURSOR_CLOSE AS CURSOR_CLOSE_ON_COMMIT;
Permissions
Requires membership in the public role.

Examples
The following example defines a cursor in a transaction and attempts to use it after the transaction is committed.

-- SET CURSOR_CLOSE_ON_COMMIT
-------------------------------------------------------------------------------
SET NOCOUNT ON;

CREATE TABLE t1 (a INT);


GO

INSERT INTO t1
VALUES (1), (2);
GO

PRINT '-- SET CURSOR_CLOSE_ON_COMMIT ON';


GO
SET CURSOR_CLOSE_ON_COMMIT ON;
GO
PRINT '-- BEGIN TRAN';
BEGIN TRAN;
PRINT '-- Declare and open cursor';
DECLARE testcursor CURSOR FOR
SELECT a FROM t1;
OPEN testcursor;
PRINT '-- Commit tran';
COMMIT TRAN;
PRINT '-- Try to use cursor';
FETCH NEXT FROM testcursor;
CLOSE testcursor;
DEALLOCATE testcursor;
GO
PRINT '-- SET CURSOR_CLOSE_ON_COMMIT OFF';
GO
SET CURSOR_CLOSE_ON_COMMIT OFF;
GO
PRINT '-- BEGIN TRAN';
BEGIN TRAN;
PRINT '-- Declare and open cursor';
DECLARE testcursor CURSOR FOR
SELECT a FROM t1;
OPEN testcursor;
PRINT '-- Commit tran';
COMMIT TRAN;
PRINT '-- Try to use cursor';
FETCH NEXT FROM testcursor;
CLOSE testcursor;
DEALLOCATE testcursor;
GO
DROP TABLE t1;
GO

See Also
ALTER DATABASE (Transact-SQL)
BEGIN TRANSACTION (Transact-SQL)
CLOSE (Transact-SQL)
COMMIT TRANSACTION (Transact-SQL)
ROLLBACK TRANSACTION (Transact-SQL)
SET Statements (Transact-SQL)
SET ANSI_DEFAULTS (Transact-SQL)
SET DATEFIRST (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets the first day of the week to a number from 1 through 7.
For an overview of all Transact-SQL date and time data types and functions, see Date and Time Data Types and
Functions (Transact-SQL).
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

SET DATEFIRST { number | @number_var }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET DATEFIRST 7 ;

Arguments
number | @number_var
Is an integer that indicates the first day of the week. It can be one of the following values.

VALUE                        FIRST DAY OF THE WEEK IS

1                            Monday
2                            Tuesday
3                            Wednesday
4                            Thursday
5                            Friday
6                            Saturday
7 (default, U.S. English)    Sunday

Remarks
To see the current setting of SET DATEFIRST, use the @@DATEFIRST function.
The setting of SET DATEFIRST is set at execute or run time and not at parse time.
Specifying SET DATEFIRST has no effect on DATEDIFF. DATEDIFF always uses Sunday as the first day of the
week to ensure the function is deterministic.
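
For example, a quick check of the current value using the built-in function mentioned above:

SET DATEFIRST 3;
SELECT @@DATEFIRST AS CurrentDateFirst; -- returns 3
GO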

Permissions
Requires membership in the public role.

Examples
The following example displays the day of the week for a date value and shows the effects of changing the
DATEFIRST setting.

-- SET DATEFIRST to U.S. English default value of 7.


SET DATEFIRST 7;

SELECT CAST('1999-1-1' AS datetime2) AS SelectDate


,DATEPART(dw, '1999-1-1') AS DayOfWeek;
-- January 1, 1999 is a Friday. Because the U.S. English default
-- specifies Sunday as the first day of the week, DATEPART of 1999-1-1
-- (Friday) yields a value of 6, because Friday is the sixth day of the
-- week when you start with Sunday as day 1.

SET DATEFIRST 3;
-- Because Wednesday is now considered the first day of the week,
-- DATEPART now shows that 1999-1-1 (a Friday) is the third day of the
-- week. The following DATEPART function should return a value of 3.
SELECT CAST('1999-1-1' AS datetime2) AS SelectDate
,DATEPART(dw, '1999-1-1') AS DayOfWeek;
GO

See Also
SET Statements (Transact-SQL)
SET DATEFORMAT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets the order of the month, day, and year date parts for interpreting date, smalldatetime, datetime, datetime2
and datetimeoffset character strings.
For an overview of all Transact-SQL date and time data types and functions, see Date and Time Data Types and
Functions (Transact-SQL).
Transact-SQL Syntax Conventions

Syntax
SET DATEFORMAT { format | @format_var }

Arguments
format | @format_var
Is the order of the date parts. Valid parameters are mdy, dmy, ymd, ydm, myd, and dym. Can be either Unicode
or double-byte character sets (DBCS) converted to Unicode. The U.S. English default is mdy. For the default
DATEFORMAT of all supported languages, see sp_helplanguage (Transact-SQL).

Remarks
The DATEFORMAT ydm is not supported for date, datetime2 and datetimeoffset data types.
The effect of the DATEFORMAT setting on the interpretation of character strings might be different for datetime
and smalldatetime values than for date, datetime2 and datetimeoffset values, depending on the string format.
This setting affects the interpretation of character strings as they are converted to date values for storage in the
database. It does not affect the display of date data type values that are stored in the database or the storage
format.
Some character string formats, for example ISO 8601, are interpreted independently of the DATEFORMAT
setting.
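
For example, a short sketch: an ISO 8601 literal with the T separator parses the same way under any DATEFORMAT setting.

SET DATEFORMAT dmy;
DECLARE @d datetime2 = '2008-12-31T09:01:01'; -- ISO 8601; unaffected by DATEFORMAT
SELECT @d;
GO
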
The setting of SET DATEFORMAT is set at execute or run time and not at parse time.
SET DATEFORMAT overrides the implicit date format setting of SET LANGUAGE.

Permissions
Requires membership in the public role.

Examples
The following example uses different date strings as inputs in sessions with the same DATEFORMAT setting.
-- Set date format to day/month/year.
SET DATEFORMAT dmy;
GO
DECLARE @datevar datetime2 = '31/12/2008 09:01:01.1234567';
SELECT @datevar;
GO
-- Result: 2008-12-31 09:01:01.1234567
SET DATEFORMAT dmy;
GO
DECLARE @datevar datetime2 = '12/31/2008 09:01:01.1234567';
SELECT @datevar;
GO
-- Result: Msg 241: Conversion failed when converting date and/or time
-- from character string.

GO

See Also
SET Statements (Transact-SQL)
SET DEADLOCK_PRIORITY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the relative importance of the current session continuing to process if it is deadlocked with
another session.
Transact-SQL Syntax Conventions

Syntax
SET DEADLOCK_PRIORITY { LOW | NORMAL | HIGH | <numeric-priority> | @deadlock_var | @deadlock_intvar }

<numeric-priority> ::= { -10 | -9 | -8 | … | 0 | … | 8 | 9 | 10 }

Arguments
LOW
Specifies that the current session will be the deadlock victim if it is involved in a deadlock and other sessions
involved in the deadlock chain have deadlock priority set to either NORMAL or HIGH or to an integer value
greater than -5. The current session will not be the deadlock victim if the other sessions have deadlock priority set
to an integer value less than -5. It also specifies that the current session is eligible to be the deadlock victim if
another session has set deadlock priority set to LOW or to an integer value equal to -5.
NORMAL
Specifies that the current session will be the deadlock victim if other sessions involved in the deadlock chain have
deadlock priority set to HIGH or to an integer value greater than 0, but will not be the deadlock victim if the other
sessions have deadlock priority set to LOW or to an integer value less than 0. It also specifies that the current
session is eligible to be the deadlock victim if another session has set deadlock priority to NORMAL or to an
integer value equal to 0. NORMAL is the default priority.
HIGH
Specifies that the current session will be the deadlock victim if other sessions involved in the deadlock chain have
deadlock priority set to an integer value greater than 5, or is eligible to be the deadlock victim if another session
has also set deadlock priority to HIGH or to an integer value equal to 5.
<numeric-priority>
Is an integer value range (-10 to 10) to provide 21 levels of deadlock priority. It specifies that the current session
will be the deadlock victim if other sessions in the deadlock chain are running at a higher deadlock priority value,
but will not be the deadlock victim if the other sessions are running at a deadlock priority value lower than the
value of the current session. It also specifies that the current session is eligible to be the deadlock victim if another
session is running with a deadlock priority value that is the same as the current session. LOW maps to -5,
NORMAL to 0, and HIGH to 5.
@deadlock_var
Is a character variable specifying the deadlock priority. The variable must be set to a value of 'LOW', 'NORMAL', or
'HIGH'. The variable must be large enough to hold the entire string.
@deadlock_intvar
Is an integer variable specifying the deadlock priority. The variable must be set to an integer value in the range (-10
to 10).

Remarks
Deadlocks arise when two sessions are both waiting for access to resources locked by the other. When an instance
of SQL Server detects that two sessions are deadlocked, it resolves the deadlock by choosing one of the sessions
as a deadlock victim. The current transaction of the victim is rolled back and deadlock error message 1205 is
returned to the client. This releases all of the locks held by that session, allowing the other session to proceed.
Which session is chosen as the deadlock victim depends on each session's deadlock priority:
If both sessions have the same deadlock priority, the instance of SQL Server chooses the session that is less
expensive to roll back as the deadlock victim. For example, if both sessions have set their deadlock priority
to HIGH, the instance will choose as a victim the session it estimates is less costly to roll back. The cost is
determined by comparing the number of log bytes written to that point in each transaction. (You can see this
value as "Log Used" in a deadlock graph).
If the sessions have different deadlock priorities, the session with the lowest deadlock priority is chosen as
the deadlock victim.
SET DEADLOCK_PRIORITY is set at execute or run time and not at parse time.

Permissions
Requires membership in the public role.

Examples
The following example uses a variable to set the deadlock priority to LOW.

DECLARE @deadlock_var NCHAR(3);


SET @deadlock_var = N'LOW';

SET DEADLOCK_PRIORITY @deadlock_var;


GO

The following example sets the deadlock priority to NORMAL.

SET DEADLOCK_PRIORITY NORMAL;


GO
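
A numeric priority in the range -10 to 10 can also be set directly; for example, 10 makes the session the least likely candidate to be chosen as the deadlock victim.

SET DEADLOCK_PRIORITY 10;
GO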

See Also
@@LOCK_TIMEOUT (Transact-SQL)
SET Statements (Transact-SQL)
SET LOCK_TIMEOUT (Transact-SQL)
SET FIPS_FLAGGER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies checking for compliance with the FIPS 127-2 standard. This is based on the ISO standard. For
information about SQL Server FIPS compliance, see How to use SQL Server 2016 in FIPS 140-2-compliant
mode.
Transact-SQL Syntax Conventions

Syntax
SET FIPS_FLAGGER ( 'level' | OFF )

Arguments
' level '
Is the level of compliance against the FIPS 127-2 standard for which all database operations are checked. If a
database operation conflicts with the level of ISO standards chosen, Microsoft SQL Server generates a warning.
level must be one of the following values.

VALUE            DESCRIPTION

ENTRY            Standards checking for ISO entry-level compliance.
FULL             Standards checking for ISO full compliance.
INTERMEDIATE     Standards checking for ISO intermediate-level compliance.
OFF              No standards checking.

Remarks
The setting of SET FIPS_FLAGGER is set at parse time and not at execute or run time. Setting at parse time means
that if the SET statement is present in the batch or stored procedure, it takes effect, regardless of whether code
execution actually reaches that point; and the SET statement takes effect before any statements are executed. For
example, even if the SET statement is in an IF...ELSE statement block that is never reached during execution, the
SET statement still takes effect because the IF...ELSE statement block is parsed.

If SET FIPS_FLAGGER is set in a stored procedure, the value of SET FIPS_FLAGGER is restored after control is returned
from the stored procedure. Therefore, a SET FIPS_FLAGGER statement specified in dynamic SQL does not have any
effect on any statements following the dynamic SQL statement.
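
A minimal usage sketch, with each SET in its own batch because of the parse-time behavior described above:

SET FIPS_FLAGGER 'ENTRY';
GO
-- Statements in the following batches are checked for ISO entry-level compliance.
SET FIPS_FLAGGER OFF;
GO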

Permissions
Requires membership in the public role.
See Also
SET Statements (Transact-SQL)
SET FMTONLY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Returns only metadata to the client. Can be used to test the format of the response without actually running the
query.

NOTE
Do not use this feature. This feature has been replaced by sp_describe_first_result_set (Transact-SQL),
sp_describe_undeclared_parameters (Transact-SQL), sys.dm_exec_describe_first_result_set (Transact-SQL), and
sys.dm_exec_describe_first_result_set_for_object (Transact-SQL).

Transact-SQL Syntax Conventions

Syntax
SET FMTONLY { ON | OFF }

Remarks
No rows are processed or sent to the client because of the request when SET FMTONLY is turned ON.
The setting of SET FMTONLY is set at execute or run time and not at parse time.
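
As an alternative, the replacements mentioned in the note above can return the same metadata without executing the query; a sketch using sp_describe_first_result_set:

EXEC sp_describe_first_result_set
    @tsql = N'SELECT BusinessEntityID, JobTitle FROM HumanResources.Employee;';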

Permissions
Requires membership in the public role.

Examples
A. View the column header information for a query without actually running the query.
The following example changes the SET FMTONLY setting to ON and executes a SELECT statement. The setting
causes the statement to return the column information only; no rows of data are returned.

USE AdventureWorks2012;
GO
SET FMTONLY ON;
GO
SELECT *
FROM HumanResources.Employee;
GO
SET FMTONLY OFF;
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


B. View the column header information for a query without actually running the query.
The following example shows how to return only the column header (metadata) information for a query. The batch
begins with FMTONLY set to OFF and changes FMTONLY to ON before the SELECT statement. This causes the
SELECT statement to return only the column headers; no rows of data are returned.

-- Uses AdventureWorks

BEGIN
SET FMTONLY OFF;
SET DATEFORMAT mdy;
SET FMTONLY ON;
SELECT * FROM dbo.DimCustomer;
SET FMTONLY OFF;
END

See Also
SET Statements (Transact-SQL)
SET FORCEPLAN (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
When FORCEPLAN is set to ON, the SQL Server query optimizer processes a join in the same order as the tables
appear in the FROM clause of a query. In addition, setting FORCEPLAN to ON forces the use of a nested loop join
unless other types of joins are required to construct a plan for the query, or they are requested with join hints or
query hints.
Transact-SQL Syntax Conventions

Syntax
SET FORCEPLAN { ON | OFF }

Remarks
SET FORCEPLAN essentially overrides the logic used by the query optimizer to process a Transact-SQL SELECT
statement. The data returned by the SELECT statement is the same regardless of this setting. The only difference is
the way in which SQL Server processes the tables to satisfy the query.
Query optimizer hints can also be used in queries to affect how SQL Server processes the SELECT statement.
SET FORCEPLAN is applied at execute or run time and not at parse time.

Permissions
SET FORCEPLAN permissions default to all users.

Examples
The following example performs a join of four tables. The SHOWPLAN_TEXT setting is enabled, so SQL Server returns
information about how it is processing the query differently after the SET FORCEPLAN setting is enabled.
USE AdventureWorks2012;
GO
-- Make sure FORCEPLAN is set to OFF.
SET SHOWPLAN_TEXT OFF;
GO
SET FORCEPLAN OFF;
GO
SET SHOWPLAN_TEXT ON;
GO
-- Example where the query plan is not forced.
SELECT p.LastName, p.FirstName, v.Name
FROM Person.Person AS p
INNER JOIN HumanResources.Employee AS e
ON e.BusinessEntityID = p.BusinessEntityID
INNER JOIN Purchasing.PurchaseOrderHeader AS poh
ON e.BusinessEntityID = poh.EmployeeID
INNER JOIN Purchasing.Vendor AS v
ON poh.VendorID = v.BusinessEntityID;
GO
-- SET FORCEPLAN to ON.
SET SHOWPLAN_TEXT OFF;
GO
SET FORCEPLAN ON;
GO
SET SHOWPLAN_TEXT ON;
GO
-- Reexecute inner join to see the effect of SET FORCEPLAN ON.
SELECT p.LastName, p.FirstName, v.Name
FROM Person.Person AS p
INNER JOIN HumanResources.Employee AS e
ON e.BusinessEntityID = p.BusinessEntityID
INNER JOIN Purchasing.PurchaseOrderHeader AS poh
ON e.BusinessEntityID = poh.EmployeeID
INNER JOIN Purchasing.Vendor AS v
ON poh.VendorID = v.BusinessEntityID;
GO
SET SHOWPLAN_TEXT OFF;
GO
SET FORCEPLAN OFF;
GO

See Also
SELECT (Transact-SQL)
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)
SET SHOWPLAN_TEXT (Transact-SQL)
SET IDENTITY_INSERT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Allows explicit values to be inserted into the identity column of a table.
Transact-SQL Syntax Conventions

Syntax
SET IDENTITY_INSERT [ [ database_name . ] schema_name . ] table { ON | OFF }

Arguments
database_name
Is the name of the database in which the specified table resides.
schema_name
Is the name of the schema to which the table belongs.
table
Is the name of a table with an identity column.

Remarks
At any time, only one table in a session can have the IDENTITY_INSERT property set to ON. If a table already has
this property set to ON, and a SET IDENTITY_INSERT ON statement is issued for another table, SQL Server
returns an error message that states SET IDENTITY_INSERT is already ON and reports the table it is set ON for.
If the value inserted is larger than the current identity value for the table, SQL Server automatically uses the new
inserted value as the current identity value.
The setting of SET IDENTITY_INSERT is set at execute or run time and not at parse time.
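
A short sketch of the reseeding behavior described above (the table name is illustrative):

CREATE TABLE dbo.IdentDemo (ID int IDENTITY(1,1) PRIMARY KEY, v int);
SET IDENTITY_INSERT dbo.IdentDemo ON;
INSERT INTO dbo.IdentDemo (ID, v) VALUES (100, 1); -- explicit value above the current identity
SET IDENTITY_INSERT dbo.IdentDemo OFF;
INSERT INTO dbo.IdentDemo (v) VALUES (2);          -- receives identity value 101
SELECT IDENT_CURRENT('dbo.IdentDemo') AS CurrentIdentity;
DROP TABLE dbo.IdentDemo;
GO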

Permissions
User must own the table or have ALTER permission on the table.

Examples
The following example creates a table with an identity column and shows how the SET IDENTITY_INSERT setting
can be used to fill a gap in the identity values caused by a DELETE statement.
USE AdventureWorks2012;
GO
-- Create tool table.
CREATE TABLE dbo.Tool(
ID INT IDENTITY NOT NULL PRIMARY KEY,
Name VARCHAR(40) NOT NULL
);
GO
-- Inserting values into products table.
INSERT INTO dbo.Tool(Name)
VALUES ('Screwdriver')
, ('Hammer')
, ('Saw')
, ('Shovel');
GO

-- Create a gap in the identity values.


DELETE dbo.Tool
WHERE Name = 'Saw';
GO

SELECT *
FROM dbo.Tool;
GO

-- Try to insert an explicit ID value of 3;


-- should return a warning.
INSERT INTO dbo.Tool (ID, Name) VALUES (3, 'Garden shovel');
GO
-- SET IDENTITY_INSERT to ON.
SET IDENTITY_INSERT dbo.Tool ON;
GO

-- Try to insert an explicit ID value of 3.


INSERT INTO dbo.Tool (ID, Name) VALUES (3, 'Garden shovel');
GO

SELECT *
FROM dbo.Tool;
GO
-- Drop products table.
DROP TABLE dbo.Tool;
GO

See Also
CREATE TABLE (Transact-SQL)
IDENTITY (Property) (Transact-SQL)
SCOPE_IDENTITY (Transact-SQL)
INSERT (Transact-SQL)
SET Statements (Transact-SQL)
SET IMPLICIT_TRANSACTIONS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets the BEGIN TRANSACTION mode to implicit, for the connection.
Transact-SQL Syntax Conventions

Syntax
SET IMPLICIT_TRANSACTIONS { ON | OFF }

Remarks
When ON, the system is in implicit transaction mode. This means that if @@TRANCOUNT = 0, any of the
following Transact-SQL statements begins a new transaction. It is equivalent to an unseen BEGIN TRANSACTION
being executed first:

ALTER TABLE
BEGIN TRANSACTION
CREATE
DELETE
DROP
FETCH
GRANT
INSERT
OPEN
REVOKE
SELECT (See exception below.)
TRUNCATE TABLE
UPDATE

When OFF, each of the preceding T-SQL statements is bounded by an unseen BEGIN TRANSACTION and an
unseen COMMIT TRANSACTION statement. When OFF, we say the transaction mode is autocommit. If your
T-SQL code visibly issues a BEGIN TRANSACTION, we say the transaction mode is explicit.
There are several clarifying points to understand:
When the transaction mode is implicit, no unseen BEGIN TRANSACTION is issued if @@trancount > 0
already. However, any explicit BEGIN TRANSACTION statements still increment @@TRANCOUNT.
When your INSERT statements and anything else in your unit of work is finished, you must issue COMMIT
TRANSACTION statements until @@TRANCOUNT is decremented back down to 0. Or you can issue one
ROLLBACK TRANSACTION.
SELECT statements that do not select from a table do not start implicit transactions. For example
SELECT GETDATE(); or SELECT 1, 'ABC'; do not require transactions.

Implicit transactions may unexpectedly be ON due to ANSI defaults. For details see SET ANSI_DEFAULTS
(Transact-SQL).
IMPLICIT_TRANSACTIONS ON is not popular. In most cases where IMPLICIT_TRANSACTIONS is ON, it
is because the choice of SET ANSI_DEFAULTS ON has been made.
The SQL Server Native Client OLE DB Provider for SQL Server, and the SQL Server Native Client ODBC
driver, automatically set IMPLICIT_TRANSACTIONS to OFF when connecting. SET
IMPLICIT_TRANSACTIONS defaults to OFF for connections with the SQLClient managed provider, and
for SOAP requests received through HTTP endpoints.
To view the current setting for IMPLICIT_TRANSACTIONS, run the following query.

DECLARE @IMPLICIT_TRANSACTIONS VARCHAR(3) = 'OFF';


IF ( (2 & @@OPTIONS) = 2 ) SET @IMPLICIT_TRANSACTIONS = 'ON';
SELECT @IMPLICIT_TRANSACTIONS AS IMPLICIT_TRANSACTIONS;

Examples
The following Transact-SQL script runs a few different test cases. The text output is also provided, which shows the
detailed behavior and results from each test case.

-- Transact-SQL.
go
-- Preparations.
SET NOCOUNT ON;
SET IMPLICIT_TRANSACTIONS OFF;
go
WHILE (@@TranCount > 0) COMMIT TRANSACTION;
go
IF (OBJECT_ID(N'dbo.t1',N'U') IS NOT NULL) DROP TABLE dbo.t1;
go
CREATE table dbo.t1 (a int);
go

PRINT N'-------- [Test A] ---- OFF ----';


PRINT N'[A.01] Now, SET IMPLICIT_TRANSACTIONS OFF.';
PRINT N'[A.02] @@TranCount, at start, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
SET IMPLICIT_TRANSACTIONS OFF;
go
INSERT INTO dbo.t1 VALUES (11);
INSERT INTO dbo.t1 VALUES (12);
PRINT N'[A.03] @@TranCount, after INSERTs, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
go

PRINT N' ';


PRINT N'-------- [Test B] ---- ON ----';
PRINT N'[B.01] Now, SET IMPLICIT_TRANSACTIONS ON.';
PRINT N'[B.02] @@TranCount, at start, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
SET IMPLICIT_TRANSACTIONS ON;
go
INSERT INTO dbo.t1 VALUES (21);
INSERT INTO dbo.t1 VALUES (22);
PRINT N'[B.03] @@TranCount, after INSERTs, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
go
COMMIT TRANSACTION;
PRINT N'[B.04] @@TranCount, after COMMIT, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
go

PRINT N' ';


PRINT N'-------- [Test C] ---- ON, then BEGIN TRAN ----';
PRINT N'[C.01] Now, SET IMPLICIT_TRANSACTIONS ON.';
PRINT N'[C.02] @@TranCount, at start, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
SET IMPLICIT_TRANSACTIONS ON;
go
BEGIN TRANSACTION;
INSERT INTO dbo.t1 VALUES (31);
INSERT INTO dbo.t1 VALUES (32);
INSERT INTO dbo.t1 VALUES (32);
PRINT N'[C.03] @@TranCount, after INSERTs, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
go
COMMIT TRANSACTION;
PRINT N'[C.04] @@TranCount, after a COMMIT, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
COMMIT TRANSACTION;
PRINT N'[C.05] @@TranCount, after another COMMIT, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
go

PRINT N' ';


PRINT N'-------- [Test D] ---- ON, INSERT, BEGIN TRAN, INSERT ----';
PRINT N'[D.01] Now, SET IMPLICIT_TRANSACTIONS ON.';
PRINT N'[D.02] @@TranCount, at start, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
SET IMPLICIT_TRANSACTIONS ON;
go
INSERT INTO dbo.t1 VALUES (41);
BEGIN TRANSACTION;
INSERT INTO dbo.t1 VALUES (42);
PRINT N'[D.03] @@TranCount, after INSERTs, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
go
COMMIT TRANSACTION;
PRINT N'[D.04] @@TranCount, after INSERTs, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
COMMIT TRANSACTION;
PRINT N'[D.05] @@TranCount, after INSERTs, == ' + CAST(@@TRANCOUNT AS NVARCHAR(10));
go

-- Clean up.
SET IMPLICIT_TRANSACTIONS OFF;
go
WHILE (@@TranCount > 0) COMMIT TRANSACTION;
go
DROP TABLE dbo.t1;
go

Next is the text output from the preceding Transact-SQL script.

-- Text output from Transact-SQL:

-------- [Test A] ---- OFF ----


[A.01] Now, SET IMPLICIT_TRANSACTIONS OFF.
[A.02] @@TranCount, at start, == 0
[A.03] @@TranCount, after INSERTs, == 0

-------- [Test B] ---- ON ----


[B.01] Now, SET IMPLICIT_TRANSACTIONS ON.
[B.02] @@TranCount, at start, == 0
[B.03] @@TranCount, after INSERTs, == 1
[B.04] @@TranCount, after COMMIT, == 0

-------- [Test C] ---- ON, then BEGIN TRAN ----


[C.01] Now, SET IMPLICIT_TRANSACTIONS ON.
[C.02] @@TranCount, at start, == 0
[C.03] @@TranCount, after INSERTs, == 2
[C.04] @@TranCount, after a COMMIT, == 1
[C.05] @@TranCount, after another COMMIT, == 0

-------- [Test D] ---- ON, INSERT, BEGIN TRAN, INSERT ----


[D.01] Now, SET IMPLICIT_TRANSACTIONS ON.
[D.02] @@TranCount, at start, == 0
[D.03] @@TranCount, after INSERTs, == 2
[D.04] @@TranCount, after INSERTs, == 1
[D.05] @@TranCount, after INSERTs, == 0


See Also
ALTER TABLE (Transact-SQL)
BEGIN TRANSACTION (Transact-SQL)
CREATE TABLE (Transact-SQL)
DELETE (Transact-SQL)
DROP TABLE (Transact-SQL)
FETCH (Transact-SQL)
GRANT (Transact-SQL)
INSERT (Transact-SQL)
OPEN (Transact-SQL)
REVOKE (Transact-SQL)
SELECT (Transact-SQL)
SET Statements (Transact-SQL)
SET ANSI_DEFAULTS (Transact-SQL)
@@TRANCOUNT (Transact-SQL)
TRUNCATE TABLE (Transact-SQL)
UPDATE (Transact-SQL)
SET LANGUAGE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the language environment for the session. The session language determines the datetime formats and
system messages.
Transact-SQL Syntax Conventions

Syntax
SET LANGUAGE { [ N ] 'language' | @language_var }

Arguments
[N]'language' | @language_var
Is the name of the language as stored in sys.syslanguages. This argument can be either Unicode or DBCS
converted to Unicode. To specify a language in Unicode, use N'language'. If specified as a variable, the variable
must be sysname.
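
For example, a sketch passing the language through a sysname variable:

DECLARE @lang sysname = N'Italian';
SET LANGUAGE @lang;
SELECT DATENAME(month, GETDATE()) AS MonthName;
SET LANGUAGE us_english;
GO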

Remarks
The setting of SET LANGUAGE is set at execute or run time and not at parse time.
SET LANGUAGE implicitly sets the setting of SET DATEFORMAT.

Permissions
Requires membership in the public role.

Examples
The following example sets the default language to Italian, displays the month name, and then switches back to
us_english and displays the month name again.

DECLARE @Today DATETIME;


SET @Today = '12/5/2007';

SET LANGUAGE Italian;


SELECT DATENAME(month, @Today) AS 'Month Name';

SET LANGUAGE us_english;


SELECT DATENAME(month, @Today) AS 'Month Name' ;
GO

See Also
Data Types (Transact-SQL)
syslanguages
sp_helplanguage (Transact-SQL)
SET Statements (Transact-SQL)
SET LOCK_TIMEOUT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the number of milliseconds a statement waits for a lock to be released.
Transact-SQL Syntax Conventions

Syntax
SET LOCK_TIMEOUT timeout_period

Arguments
timeout_period
Is the number of milliseconds that will pass before Microsoft SQL Server returns a locking error. A value of -1
(default) indicates no time-out period (that is, wait forever).
When a wait for a lock exceeds the time-out value, an error is returned. A value of 0 means to not wait at all and
return a message as soon as a lock is encountered.

Remarks
At the beginning of a connection, this setting has a value of -1. After it is changed, the new setting stays in effect
for the remainder of the connection.
The setting of SET LOCK_TIMEOUT is set at execute or run time and not at parse time.
The READPAST locking hint provides an alternative to this SET option.
CREATE DATABASE, ALTER DATABASE, and DROP DATABASE statements do not honor the SET
LOCK_TIMEOUT setting.
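
When the time-out expires, error 1222 ("Lock request time out period exceeded") is returned; a sketch of handling it (the table name is hypothetical):

SET LOCK_TIMEOUT 2000; -- wait at most 2 seconds for a lock
BEGIN TRY
    UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE SomeKey = 42;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1222
        PRINT 'Lock request time out period exceeded.';
END CATCH
GO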

Permissions
Requires membership in the public role.

Examples
A. Set the lock timeout to 1800 milliseconds
The following example sets the lock time-out period to 1800 milliseconds.

SET LOCK_TIMEOUT 1800;


GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


B. Set the lock timeout to wait forever for a lock to be released.
The following example sets the lock timeout to wait forever and never expire. This is the default behavior that is
already set at the beginning of each connection.

SET LOCK_TIMEOUT -1;

The following example sets the lock time-out period to 1800 milliseconds. In this release, SQL Data Warehouse
will parse the statement successfully, but will ignore the value 1800 and continue to use the default behavior.

SET LOCK_TIMEOUT 1800;

See Also
@@LOCK_TIMEOUT (Transact-SQL)
SET Statements (Transact-SQL)
SET NOCOUNT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Stops the message that shows the count of the number of rows affected by a Transact-SQL statement or stored
procedure from being returned as part of the result set.
Transact-SQL Syntax Conventions

Syntax
SET NOCOUNT { ON | OFF }

Remarks
When SET NOCOUNT is ON, the count is not returned. When SET NOCOUNT is OFF, the count is returned.
The @@ROWCOUNT function is updated even when SET NOCOUNT is ON.
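
For example, a short sketch confirming that @@ROWCOUNT is still populated:

SET NOCOUNT ON;
SELECT TOP(5) name FROM sys.objects;
SELECT @@ROWCOUNT AS RowsReturned; -- reports 5 even though no count message was sent
GO
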
SET NOCOUNT ON prevents the sending of DONE_IN_PROC messages to the client for each statement in a
stored procedure. For stored procedures that contain several statements that do not return much actual data, or
for procedures that contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant
performance boost, because network traffic is greatly reduced.
The setting specified by SET NOCOUNT is in effect at execute or run time and not at parse time.
To view the current value of this setting, run the following query.

DECLARE @NOCOUNT VARCHAR(3) = 'OFF';


IF ( (512 & @@OPTIONS) = 512 ) SET @NOCOUNT = 'ON';
SELECT @NOCOUNT AS NOCOUNT;

Permissions
Requires membership in the public role.

Examples
The following example prevents the message about the number of rows affected from being displayed.
USE AdventureWorks2012;
GO
SET NOCOUNT OFF;
GO
-- Display the count message.
SELECT TOP(5)LastName
FROM Person.Person
WHERE LastName LIKE 'A%';
GO
-- SET NOCOUNT to ON to no longer display the count message.
SET NOCOUNT ON;
GO
SELECT TOP(5) LastName
FROM Person.Person
WHERE LastName LIKE 'A%';
GO
-- Reset SET NOCOUNT to OFF
SET NOCOUNT OFF;
GO

See Also
@@ROWCOUNT (Transact-SQL)
SET Statements (Transact-SQL)
SET NOEXEC (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Compiles each query but does not execute it.
Transact-SQL Syntax Conventions

Syntax
SET NOEXEC { ON | OFF }

Remarks
When SET NOEXEC is ON, SQL Server compiles each batch of Transact-SQL statements but does not execute
them. When SET NOEXEC is OFF, all batches are executed after compilation.
The execution of statements in SQL Server has two phases: compilation and execution. This setting is useful for
having SQL Server validate the syntax and object names in Transact-SQL code when executing. It is also useful for
debugging statements that would generally be part of a larger batch of statements.
The setting of SET NOEXEC is set at execute or run time and not at parse time.

Permissions
Requires membership in the public role.

Examples
The following example uses NOEXEC with a valid query, a query with an object name that is not valid, and a query
with incorrect syntax.
USE AdventureWorks2012;
GO
PRINT 'Valid query';
GO
-- SET NOEXEC to ON.
SET NOEXEC ON;
GO
-- Inner join.
SELECT e.BusinessEntityID, e.JobTitle, v.Name
FROM HumanResources.Employee AS e
INNER JOIN Purchasing.PurchaseOrderHeader AS poh
ON e.BusinessEntityID = poh.EmployeeID
INNER JOIN Purchasing.Vendor AS v
ON poh.VendorID = v.BusinessEntityID;
GO
-- SET NOEXEC to OFF.
SET NOEXEC OFF;
GO

PRINT 'Invalid object name';


GO
-- SET NOEXEC to ON.
SET NOEXEC ON;
GO
-- Function name uses a reserved keyword.
USE AdventureWorks2012;
GO
CREATE FUNCTION dbo.Values(@BusinessEntityID int)
RETURNS TABLE
AS
RETURN (SELECT PurchaseOrderID, TotalDue
FROM dbo.PurchaseOrderHeader
WHERE VendorID = @BusinessEntityID);

-- SET NOEXEC to OFF.


SET NOEXEC OFF;
GO

PRINT 'Invalid syntax';


GO
-- SET NOEXEC to ON.
SET NOEXEC ON;
GO
-- Built-in function incorrectly invoked.
SELECT *
FROM fn_helpcollations;
-- Reset SET NOEXEC to OFF.
SET NOEXEC OFF;
GO

See Also
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)
SET SHOWPLAN_TEXT (Transact-SQL)
SET NUMERIC_ROUNDABORT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the level of error reporting generated when rounding in an expression causes a loss of precision.
Transact-SQL Syntax Conventions

Syntax
SET NUMERIC_ROUNDABORT { ON | OFF }

Remarks
When SET NUMERIC_ROUNDABORT is ON, an error is generated after a loss of precision occurs in an
expression. When OFF, losses of precision do not generate error messages and the result is rounded to the
precision of the column or variable storing the result.
Loss of precision occurs when an attempt is made to store a value with a fixed precision in a column or variable
with less precision.
If SET NUMERIC_ROUNDABORT is ON, SET ARITHABORT determines the severity of the generated error. This
table shows the effects of these two settings when a loss of precision occurs.

SETTING                 SET NUMERIC_ROUNDABORT ON                       SET NUMERIC_ROUNDABORT OFF

SET ARITHABORT ON       Error is generated; no result set is returned.  No errors or warnings; result is rounded.
SET ARITHABORT OFF      Warning is returned; expression returns NULL.   No errors or warnings; result is rounded.

The setting of SET NUMERIC_ROUNDABORT is set at execute or run time and not at parse time.
SET NUMERIC_ROUNDABORT must be OFF when you are creating or changing indexes on computed columns
or indexed views. If SET NUMERIC_ROUNDABORT is ON, CREATE, UPDATE, INSERT, and DELETE statements
on tables with indexes on computed columns or indexed views fail. For more information about required SET
option settings with indexed views and indexes on computed columns, see "Considerations When You Use the SET
Statements" in SET Statements (Transact-SQL ).
To view the current setting for this setting, run the following query:

DECLARE @NUMERIC_ROUNDABORT VARCHAR(3) = 'OFF';


IF ( (8192 & @@OPTIONS) = 8192 ) SET @NUMERIC_ROUNDABORT = 'ON';
SELECT @NUMERIC_ROUNDABORT AS NUMERIC_ROUNDABORT;

Permissions
Requires membership in the public role.

Examples
The following example shows two values with a precision of four decimal places that are added and stored in a
variable with a precision of two decimal places. The expressions demonstrate the effects of the different
SET NUMERIC_ROUNDABORT and SET ARITHABORT settings.
-- SET NOCOUNT to ON,
-- SET NUMERIC_ROUNDABORT to ON, and SET ARITHABORT to ON.
SET NOCOUNT ON;
PRINT 'SET NUMERIC_ROUNDABORT ON';
PRINT 'SET ARITHABORT ON';
SET NUMERIC_ROUNDABORT ON;
SET ARITHABORT ON;
GO
DECLARE @result DECIMAL(5, 2),
@value_1 DECIMAL(5, 4),
@value_2 DECIMAL(5, 4);
SET @value_1 = 1.1234;
SET @value_2 = 1.1234 ;
SELECT @result = @value_1 + @value_2;
SELECT @result;
GO

-- SET NUMERIC_ROUNDABORT to ON and SET ARITHABORT to OFF.


PRINT 'SET NUMERIC_ROUNDABORT ON';
PRINT 'SET ARITHABORT OFF';
SET NUMERIC_ROUNDABORT ON;
SET ARITHABORT OFF;
GO
DECLARE @result DECIMAL(5, 2),
@value_1 DECIMAL(5, 4),
@value_2 DECIMAL(5, 4);
SET @value_1 = 1.1234;
SET @value_2 = 1.1234 ;
SELECT @result = @value_1 + @value_2;
SELECT @result;
GO

-- SET NUMERIC_ROUNDABORT to OFF and SET ARITHABORT to ON.


PRINT 'SET NUMERIC_ROUNDABORT OFF';
PRINT 'SET ARITHABORT ON';
SET NUMERIC_ROUNDABORT OFF;
SET ARITHABORT ON;
GO
DECLARE @result DECIMAL(5, 2),
@value_1 DECIMAL(5, 4),
@value_2 DECIMAL(5, 4);
SET @value_1 = 1.1234;
SET @value_2 = 1.1234 ;
SELECT @result = @value_1 + @value_2;
SELECT @result;
GO

-- SET NUMERIC_ROUNDABORT to OFF and SET ARITHABORT to OFF.


PRINT 'SET NUMERIC_ROUNDABORT OFF';
PRINT 'SET ARITHABORT OFF';
SET NUMERIC_ROUNDABORT OFF;
SET ARITHABORT OFF;
GO
DECLARE @result DECIMAL(5, 2),
@value_1 DECIMAL(5, 4),
@value_2 DECIMAL(5, 4);
SET @value_1 = 1.1234;
SET @value_2 = 1.1234;
SELECT @result = @value_1 + @value_2;
SELECT @result;
GO

See Also
Data Types (Transact-SQL)
SET Statements (Transact-SQL)
SET ARITHABORT (Transact-SQL)
SET OFFSETS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Returns the offset (position relative to the start of a statement) of specified keywords in Transact-SQL statements
to DB-Library applications.

IMPORTANT
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature.

Transact-SQL Syntax Conventions

Syntax
SET OFFSETS keyword_list { ON | OFF }

Arguments
keyword_list
Is a comma-separated list of Transact-SQL constructs including SELECT, FROM, ORDER, TABLE, PROCEDURE,
STATEMENT, PARAM, and EXECUTE.

Remarks
SET OFFSETS is used only in DB-Library applications.
The setting of SET OFFSETS is set at parse time and not at execute time or run time. Setting at parse time means
that if the SET statement is present in the batch or stored procedure, the setting takes effect, regardless of whether
code execution actually reaches that point; and the SET statement takes effect before any statements are executed.
For example, even if the set statement is in an IF...ELSE statement block that is never reached during execution, the
SET statement still takes effect because the IF...ELSE statement block is parsed.
If SET OFFSETS is set in a stored procedure, the value of SET OFFSETS is restored after control is returned from
the stored procedure. Therefore, a SET OFFSETS statement specified in dynamic SQL does not have any effect on
any statements following the dynamic SQL statement.
SET PARSEONLY returns offsets if the OFFSETS option is ON and no errors occur.

Permissions
Requires membership in the public role.

See Also
SET Statements (Transact-SQL)
SET PARSEONLY (Transact-SQL)
SET PARSEONLY (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Examines the syntax of each Transact-SQL statement and returns any error messages without compiling or
executing the statement.
Transact-SQL Syntax Conventions

Syntax
SET PARSEONLY { ON | OFF }

Remarks
When SET PARSEONLY is ON, SQL Server only parses the statement. When SET PARSEONLY is OFF, SQL
Server compiles and executes the statement.
The setting of SET PARSEONLY is set at parse time and not at execute or run time.
Do not use PARSEONLY in a stored procedure or a trigger. SET PARSEONLY returns offsets if the OFFSETS
option is ON and no errors occur.
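
A short sketch of the parse-only behavior (the table name is illustrative):

SET PARSEONLY ON;
GO
SELECT * FROM dbo.NoSuchTable; -- no error: syntax is checked, but object names are not resolved
GO
SELECT FROM;                   -- returns a syntax error without executing anything
GO
SET PARSEONLY OFF;
GO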

Permissions
Requires membership in the public role.

See Also
SET Statements (Transact-SQL)
SET OFFSETS (Transact-SQL)
SET QUERY_GOVERNOR_COST_LIMIT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Overrides the currently configured query governor cost limit value for the current connection.
Transact-SQL Syntax Conventions

Syntax
SET QUERY_GOVERNOR_COST_LIMIT value

Arguments
value
Is a numeric or integer value specifying the longest time in which a query can run. Values are rounded down to the
nearest integer. Negative values are rounded up to 0. The query governor disallows execution of any query that
has an estimated cost exceeding that value. Specifying 0 (the default) for this option turns off the query governor,
and all queries are allowed to run indefinitely.
"Query cost" refers to the estimated elapsed time, in seconds, required to complete a query on a specific hardware
configuration.

Remarks
Using SET QUERY_GOVERNOR_COST_LIMIT applies to the current connection only and lasts the duration of the
current connection. Use the query governor cost limit option of sp_configure to change the server-wide query
governor cost limit value. For more information about configuring this option, see sp_configure and Server
Configuration Options (SQL Server).
The setting of SET QUERY_GOVERNOR_COST_LIMIT is set at execute or run time and not at parse time.
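For example, a minimal sketch that caps the current connection at queries with an estimated cost of 180, and
then removes the cap:

SET QUERY_GOVERNOR_COST_LIMIT 180;
-- Queries with an estimated cost greater than 180 are now disallowed on this connection.
SET QUERY_GOVERNOR_COST_LIMIT 0;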

Permissions
Requires membership in the public role.

See Also
SET Statements (Transact-SQL)
SET QUOTED_IDENTIFIER (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes SQL Server to follow the ISO rules regarding quotation mark delimiting identifiers and literal strings.
Identifiers delimited by double quotation marks can be either Transact-SQL reserved keywords or can contain
characters not generally allowed by the Transact-SQL syntax rules for identifiers.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

SET QUOTED_IDENTIFIER { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET QUOTED_IDENTIFIER ON

Remarks
When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals
must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be
quoted and must follow all Transact-SQL rules for identifiers. For more information, see Database Identifiers.
Literals can be delimited by either single or double quotation marks.
When SET QUOTED_IDENTIFIER is ON (default), all strings delimited by double quotation marks are
interpreted as object identifiers. Therefore, quoted identifiers do not have to follow the Transact-SQL rules for
identifiers. They can be reserved keywords and can include characters not generally allowed in Transact-SQL
identifiers. Double quotation marks cannot be used to delimit literal string expressions; single quotation marks
must be used to enclose literal strings. If a single quotation mark (') is part of the literal string, it can be
represented by two single quotation marks (''). SET QUOTED_IDENTIFIER must be ON when reserved
keywords are used for object names in the database.
When SET QUOTED_IDENTIFIER is OFF, literal strings in expressions can be delimited by single or double
quotation marks. If a literal string is delimited by double quotation marks, the string can contain embedded
single quotation marks, such as apostrophes.
SET QUOTED_IDENTIFIER must be ON when you are creating or changing indexes on computed columns or
indexed views. If SET QUOTED_IDENTIFIER is OFF, CREATE, UPDATE, INSERT, and DELETE statements on
tables with indexes on computed columns or indexed views will fail. For more information about required SET
option settings with indexed views and indexes on computed columns, see "Considerations When You Use the
SET Statements" in SET Statements (Transact-SQL).
SET QUOTED_IDENTIFIER must be ON when you are creating a filtered index.
SET QUOTED_IDENTIFIER must be ON when you invoke XML data type methods.
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server
automatically set QUOTED_IDENTIFIER to ON when connecting. This can be configured in ODBC data
sources, in ODBC connection attributes, or OLE DB connection properties. The default for SET
QUOTED_IDENTIFIER is OFF for connections from DB -Library applications.
When a table is created, the QUOTED IDENTIFIER option is always stored as ON in the table's metadata even
if the option is set to OFF when the table is created.
When a stored procedure is created, the SET QUOTED_IDENTIFIER and SET ANSI_NULLS settings are
captured and used for subsequent invocations of that stored procedure.
When executed inside a stored procedure, the setting of SET QUOTED_IDENTIFIER is not changed.
When SET ANSI_DEFAULTS is ON, SET QUOTED_IDENTIFIER is enabled.
SET QUOTED_IDENTIFIER also corresponds to the QUOTED_IDENTIFIER setting of ALTER DATABASE. For
more information about database settings, see ALTER DATABASE (Transact-SQL ).
SET QUOTED_IDENTIFIER takes effect at parse time and only affects parsing, not query execution.
For a top-level ad hoc batch, parsing begins using the session’s current setting for QUOTED_IDENTIFIER. As
the batch is parsed, any occurrence of SET QUOTED_IDENTIFIER changes the parsing behavior from that
point on and saves that setting for the session. So after the batch is parsed and executed, the session’s
QUOTED_IDENTIFIER setting is set according to the last occurrence of SET QUOTED_IDENTIFIER in the
batch.
Static SQL in a stored procedure is parsed using the QUOTED_IDENTIFIER setting in effect for the batch that
created or altered the stored procedure. SET QUOTED_IDENTIFIER has no effect when it appears in the body
of a stored procedure as static SQL.
For a nested batch using sp_executesql or exec(), parsing begins using the QUOTED_IDENTIFIER setting of
the session. If the nested batch is inside a stored procedure, parsing starts using the QUOTED_IDENTIFIER
setting of the stored procedure. As the nested batch is parsed, any occurrence of SET
QUOTED_IDENTIFIER changes the parsing behavior from that point on, but the session’s
QUOTED_IDENTIFIER setting is not updated.
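A minimal sketch of the nested-batch behavior: the double-quoted string is parsed after the SET inside the
nested batch, so it is treated as a literal there, while the session’s setting remains ON.

SET QUOTED_IDENTIFIER ON;
GO
EXEC sp_executesql N'SET QUOTED_IDENTIFIER OFF; SELECT "a double-quoted literal" AS Col;';
-- The session setting is unchanged by the nested batch.
SELECT CASE WHEN (256 & @@OPTIONS) = 256 THEN 'ON' ELSE 'OFF' END AS QUOTED_IDENTIFIER;
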
Using brackets, [ and ], to delimit identifiers is not affected by the QUOTED_IDENTIFIER setting.
To view the current setting, run the following query.

DECLARE @QUOTED_IDENTIFIER VARCHAR(3) = 'OFF';
IF ( (256 & @@OPTIONS) = 256 ) SET @QUOTED_IDENTIFIER = 'ON';
SELECT @QUOTED_IDENTIFIER AS QUOTED_IDENTIFIER;

Permissions
Requires membership in the public role.

Examples
A. Using the quoted identifier setting and reserved word object names
The following example shows that the SET QUOTED_IDENTIFIER setting must be ON, and the keywords in table
names must be in double quotation marks to create and use objects that have reserved keyword names.
SET QUOTED_IDENTIFIER OFF
GO
-- An attempt to create a table with a reserved keyword as a name
-- should fail.
CREATE TABLE "select" ("identity" INT IDENTITY NOT NULL, "order" INT NOT NULL);
GO

SET QUOTED_IDENTIFIER ON;
GO

-- Will succeed.
CREATE TABLE "select" ("identity" INT IDENTITY NOT NULL, "order" INT NOT NULL);
GO

SELECT "identity","order"
FROM "select"
ORDER BY "order";
GO

DROP TABLE "SELECT";
GO

SET QUOTED_IDENTIFIER OFF;
GO

B. Using the quoted identifier setting with single and double quotation marks
The following example shows the way single and double quotation marks are used in string expressions with
SET QUOTED_IDENTIFIER set to ON and OFF.
SET QUOTED_IDENTIFIER OFF;
GO
USE AdventureWorks2012;
IF EXISTS(SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
      WHERE TABLE_NAME = 'Test')
   DROP TABLE dbo.Test;
GO
USE AdventureWorks2012;
CREATE TABLE dbo.Test (ID INT, String VARCHAR(30));
GO

-- Literal strings can be in single or double quotation marks.
INSERT INTO dbo.Test VALUES (1, "'Text in single quotes'");
INSERT INTO dbo.Test VALUES (2, '''Text in single quotes''');
INSERT INTO dbo.Test VALUES (3, 'Text with 2 '''' single quotes');
INSERT INTO dbo.Test VALUES (4, '"Text in double quotes"');
INSERT INTO dbo.Test VALUES (5, """Text in double quotes""");
INSERT INTO dbo.Test VALUES (6, "Text with 2 """" double quotes");
GO

SET QUOTED_IDENTIFIER ON;
GO

-- Strings inside double quotation marks are now treated
-- as object names, so they cannot be used for literals.
INSERT INTO dbo."Test" VALUES (7, 'Text with a single '' quote');
GO

-- Object identifiers do not have to be in double quotation marks
-- if they are not reserved keywords.
SELECT ID, String
FROM dbo.Test;
GO

DROP TABLE dbo.Test;
GO

SET QUOTED_IDENTIFIER OFF;
GO

Here is the result set.

ID String
----------- ------------------------------
1 'Text in single quotes'
2 'Text in single quotes'
3 Text with 2 '' single quotes
4 "Text in double quotes"
5 "Text in double quotes"
6 Text with 2 "" double quotes
7 Text with a single ' quote

See Also
CREATE DATABASE (SQL Server Transact-SQL)
CREATE DEFAULT (Transact-SQL)
CREATE PROCEDURE (Transact-SQL)
CREATE RULE (Transact-SQL)
CREATE TABLE (Transact-SQL)
CREATE TRIGGER (Transact-SQL)
CREATE VIEW (Transact-SQL)
Data Types (Transact-SQL)
EXECUTE (Transact-SQL)
SELECT (Transact-SQL)
SET Statements (Transact-SQL)
SET ANSI_DEFAULTS (Transact-SQL)
sp_rename (Transact-SQL)
SET REMOTE_PROC_TRANSACTIONS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies that when a local transaction is active, executing a remote stored procedure starts a Transact-SQL
distributed transaction managed by Microsoft Distributed Transaction Coordinator (MS DTC).

IMPORTANT
This feature will be removed in the next version of Microsoft SQL Server. Do not use this feature in new development work,
and modify applications that currently use this feature as soon as possible. This option is provided for backward compatibility
for applications that use remote stored procedures. Instead of issuing remote stored procedure calls, use distributed queries
that reference linked servers. These are defined by using sp_addlinkedserver.

Transact-SQL Syntax Conventions

Syntax
SET REMOTE_PROC_TRANSACTIONS { ON | OFF }

Arguments
ON | OFF
When ON, a Transact-SQL distributed transaction is started when a remote stored procedure is executed from a
local transaction. When OFF, calling remote stored procedures from a local transaction does not start a Transact-
SQL distributed transaction.

Remarks
When REMOTE_PROC_TRANSACTIONS is ON, calling a remote stored procedure starts a distributed
transaction and enlists the transaction with MS DTC. The instance of SQL Server making the remote stored
procedure call is the transaction originator and controls the completion of the transaction. When a subsequent
COMMIT TRANSACTION or ROLLBACK TRANSACTION statement is issued for the connection, the controlling
instance requests that MS DTC manage the completion of the distributed transaction across the computers
involved.
After a Transact-SQL distributed transaction has been started, remote stored procedure calls can be made to other
instances of SQL Server that have been defined as remote servers. The remote servers are all enlisted in the
Transact-SQL distributed transaction, and MS DTC ensures that the transaction is completed against each remote
server.
REMOTE_PROC_TRANSACTIONS is a connection-level setting that can be used to override the instance-level
sp_configure remote proc trans option.
When REMOTE_PROC_TRANSACTIONS is OFF, remote stored procedure calls are not made part of a local
transaction. The modifications made by the remote stored procedure are committed or rolled back at the time the
stored procedure completes. Subsequent COMMIT TRANSACTION or ROLLBACK TRANSACTION statements
issued by the connection that called the remote stored procedure have no effect on the processing done by the
procedure.
The REMOTE_PROC_TRANSACTIONS option is a compatibility option that affects only remote stored procedure
calls made to instances of SQL Server defined as remote servers using sp_addserver. The option does not apply
to distributed queries that execute a stored procedure on an instance defined as a linked server using
sp_addlinkedserver.
The setting of SET REMOTE_PROC_TRANSACTIONS is set at execute or run time and not at parse time.
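A minimal sketch, assuming a remote server named RemoteSvr has been defined with sp_addserver and exposes
a procedure dbo.uspUpdateInventory (both names are hypothetical):

SET REMOTE_PROC_TRANSACTIONS ON;
BEGIN TRANSACTION;
-- The remote procedure call below starts an MS DTC distributed transaction.
EXEC RemoteSvr.AdventureWorks2012.dbo.uspUpdateInventory;
COMMIT TRANSACTION;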

Permissions
Requires membership in the public role.

See Also
BEGIN DISTRIBUTED TRANSACTION (Transact-SQL)
SET Statements (Transact-SQL)
SET ROWCOUNT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes SQL Server to stop processing the query after the specified number of rows are returned.
Transact-SQL Syntax Conventions

Syntax
SET ROWCOUNT { number | @number_var }

Arguments
number | @number_var
Is the number, an integer, of rows to be processed before stopping the specific query.

Remarks
IMPORTANT
Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using
SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications
that currently use it. For a similar behavior, use the TOP syntax. For more information, see TOP (Transact-SQL).

To set this option off so that all rows are returned, specify SET ROWCOUNT 0.
Setting the SET ROWCOUNT option causes most Transact-SQL statements to stop processing when they have
been affected by the specified number of rows. This includes triggers. The ROWCOUNT option does not affect
dynamic cursors, but it does limit the rowset of keyset and insensitive cursors. This option should be used with
caution.
SET ROWCOUNT overrides the SELECT statement TOP keyword if the rowcount is the smaller value.
The setting of SET ROWCOUNT is set at execute or run time and not at parse time.
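For example, a minimal sketch of the interaction with TOP; ROWCOUNT wins here because it is the smaller
value:

SET ROWCOUNT 2;
SELECT TOP (5) name FROM sys.objects;  -- returns 2 rows
SET ROWCOUNT 0;  -- reset so that all rows are returned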

Permissions
Requires membership in the public role.

Examples
SET ROWCOUNT stops processing after the specified number of rows. In the following example, note that over
500 rows meet the criteria of Quantity less than 300. However, after applying SET ROWCOUNT, you can see
that not all rows were returned.
USE AdventureWorks2012;
GO
SELECT count(*) AS Count
FROM Production.ProductInventory
WHERE Quantity < 300;
GO

Here is the result set.

Count
-----------
537

(1 row(s) affected)

Now, set ROWCOUNT to 4 and return all rows to demonstrate that only 4 rows are returned.

SET ROWCOUNT 4;
SELECT *
FROM Production.ProductInventory
WHERE Quantity < 300;
GO

(4 row(s) affected)

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse
SET ROWCOUNT stops processing after the specified number of rows. In the following example, note that more
than 20 rows meet the criteria of AccountType = 'Assets'. However, after applying SET ROWCOUNT, you can see
that not all rows were returned.

-- Uses AdventureWorks

SET ROWCOUNT 5;
SELECT * FROM [dbo].[DimAccount]
WHERE AccountType = 'Assets';

To return all rows, set ROWCOUNT to 0.

-- Uses AdventureWorks

SET ROWCOUNT 0;
SELECT * FROM [dbo].[DimAccount]
WHERE AccountType = 'Assets';

See Also
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes Microsoft SQL Server not to execute Transact-SQL statements. Instead, SQL Server returns detailed
information about how the statements are executed and provides estimates of the resource requirements for the
statements.
Transact-SQL Syntax Conventions

Syntax
SET SHOWPLAN_ALL { ON | OFF }

Remarks
The setting of SET SHOWPLAN_ALL is set at execute or run time and not at parse time.
When SET SHOWPLAN_ALL is ON, SQL Server returns execution information for each statement without
executing it, and Transact-SQL statements are not executed. After this option is set ON, information about all
subsequent Transact-SQL statements is returned until the option is set OFF. For example, if a CREATE TABLE
statement is executed while SET SHOWPLAN_ALL is ON, SQL Server returns an error message from a
subsequent SELECT statement involving that same table, informing users that the specified table does not exist.
Therefore, subsequent references to this table fail. When SET SHOWPLAN_ALL is OFF, SQL Server executes the
statements without generating a report.
SET SHOWPLAN_ALL is intended to be used by applications written to handle its output. Use SET
SHOWPLAN_TEXT to return readable output for Microsoft Win32 command prompt applications, such as the
osql utility.
SET SHOWPLAN_TEXT and SET SHOWPLAN_ALL cannot be specified inside a stored procedure; they must be
the only statements in a batch.
SET SHOWPLAN_ALL returns information as a set of rows that form a hierarchical tree representing the steps
taken by the SQL Server query processor as it executes each statement. Each statement reflected in the output
contains a single row with the text of the statement, followed by several rows with the details of the execution
steps. The table shows the columns that the output contains.

COLUMN NAME          DESCRIPTION

StmtText             For rows that are not of type PLAN_ROW, this column contains the text of the
                     Transact-SQL statement. For rows of type PLAN_ROW, this column contains a
                     description of the operation. This column contains the physical operator and may
                     optionally also contain the logical operator. This column may also be followed by
                     a description that is determined by the physical operator. For more information,
                     see Showplan Logical and Physical Operators Reference.

StmtId               Number of the statement in the current batch.

NodeId               ID of the node in the current query.

Parent               Node ID of the parent step.

PhysicalOp           Physical implementation algorithm for the node. For rows of type PLAN_ROWS only.

LogicalOp            Relational algebraic operator this node represents. For rows of type PLAN_ROWS
                     only.

Argument             Provides supplemental information about the operation being performed. The
                     contents of this column depend on the physical operator.

DefinedValues        Contains a comma-separated list of values introduced by this operator. These
                     values may be computed expressions which were present in the current query (for
                     example, in the SELECT list or WHERE clause), or internal values introduced by the
                     query processor in order to process this query. These defined values may then be
                     referenced elsewhere within this query. For rows of type PLAN_ROWS only.

EstimateRows         Estimated number of rows of output produced by this operator. For rows of type
                     PLAN_ROWS only.

EstimateIO           Estimated I/O cost* for this operator. For rows of type PLAN_ROWS only.

EstimateCPU          Estimated CPU cost* for this operator. For rows of type PLAN_ROWS only.

AvgRowSize           Estimated average row size (in bytes) of the row being passed through this
                     operator.

TotalSubtreeCost     Estimated (cumulative) cost* of this operation and all child operations.

OutputList           Contains a comma-separated list of columns being projected by the current
                     operation.

Warnings             Contains a comma-separated list of warning messages relating to the current
                     operation. Warning messages may include the string "NO STATS:()" with a list of
                     columns. This warning message means that the query optimizer attempted to make a
                     decision based on the statistics for this column, but none were available.
                     Consequently, the query optimizer had to make a guess, which may have resulted in
                     the selection of an inefficient query plan. For more information about creating or
                     updating column statistics (which help the query optimizer choose a more efficient
                     query plan), see UPDATE STATISTICS. This column may optionally include the string
                     "MISSING JOIN PREDICATE", which means that a join (involving tables) is taking
                     place without a join predicate. Accidentally dropping a join predicate may result
                     in a query which takes much longer to run than expected, and returns a huge result
                     set. If this warning is present, verify that the absence of a join predicate is
                     intentional.

Type                 Node type. For the parent node of each query, this is the Transact-SQL statement
                     type (for example, SELECT, INSERT, EXECUTE, and so on). For subnodes representing
                     execution plans, the type is PLAN_ROW.

Parallel             0 = Operator is not running in parallel.
                     1 = Operator is running in parallel.

EstimateExecutions   Estimated number of times this operator will be executed while running the
                     current query.

*Cost units are based on an internal measurement of time, not wall-clock time. They are used for determining
the relative cost of a plan in comparison to other plans.

Permissions
In order to use SET SHOWPLAN_ALL, you must have sufficient permissions to execute the statements on which
SET SHOWPLAN_ALL is executed, and you must have SHOWPLAN permission for all databases containing
referenced objects.
For SELECT, INSERT, UPDATE, DELETE, EXEC stored_procedure, and EXEC user_defined_function statements, to
produce a Showplan the user must:
Have the appropriate permissions to execute the Transact-SQL statements.
Have SHOWPLAN permission on all databases containing objects referenced by the Transact-SQL
statements, such as tables, views, and so on.
For all other statements, such as DDL, USE database_name, SET, DECLARE, dynamic SQL, and so on, only
the appropriate permissions to execute the Transact-SQL statements are needed.

Examples
The two statements that follow use the SET SHOWPLAN_ALL settings to show the way SQL Server analyzes
and optimizes the use of indexes in queries.
The first query uses the Equals comparison operator (=) in the WHERE clause on an indexed column. This results
in the Clustered Index Seek value in the LogicalOp column and the name of the index in the Argument column.
The second query uses the LIKE operator in the WHERE clause. This forces SQL Server to use a clustered index
scan and find the data that satisfies the WHERE clause condition. This results in the Clustered Index Scan value in
the LogicalOp column with the name of the index in the Argument column, and the Filter value in the
LogicalOp column with the WHERE clause condition in the Argument column.
The values in the EstimateRows and the TotalSubtreeCost columns are smaller for the first indexed query,
indicating that it is processed much faster and uses fewer resources than the nonindexed query.

USE AdventureWorks2012;
GO
SET SHOWPLAN_ALL ON;
GO
-- First query.
SELECT BusinessEntityID
FROM HumanResources.Employee
WHERE NationalIDNumber = '509647174';
GO
-- Second query.
SELECT BusinessEntityID, JobTitle
FROM HumanResources.Employee
WHERE JobTitle LIKE 'Production%';
GO
SET SHOWPLAN_ALL OFF;
GO

See Also
SET Statements (Transact-SQL)
SET SHOWPLAN_TEXT (Transact-SQL)
SET SHOWPLAN_XML (Transact-SQL)
SET SHOWPLAN_TEXT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes Microsoft SQL Server not to execute Transact-SQL statements. Instead, SQL Server returns detailed
information about how the statements are executed.
Transact-SQL Syntax Conventions

Syntax
SET SHOWPLAN_TEXT { ON | OFF }

Remarks
The setting of SET SHOWPLAN_TEXT is set at execute or run time and not at parse time.
When SET SHOWPLAN_TEXT is ON, SQL Server returns execution information for each Transact-SQL
statement without executing it. After this option is set ON, execution plan information about all subsequent SQL
Server statements is returned until the option is set OFF. For example, if a CREATE TABLE statement is executed
while SET SHOWPLAN_TEXT is ON, SQL Server returns an error message from a subsequent SELECT
statement involving that same table informing the user that the specified table does not exist. Therefore,
subsequent references to this table fail. When SET SHOWPLAN_TEXT is OFF, SQL Server executes statements
without generating a report with execution plan information.
SET SHOWPLAN_TEXT is intended to return readable output for Microsoft Win32 command prompt
applications such as the osql utility. SET SHOWPLAN_ALL returns more detailed output intended to be used
with programs designed to handle its output.
SET SHOWPLAN_TEXT and SET SHOWPLAN_ALL cannot be specified in a stored procedure. They must be the
only statements in a batch.
SET SHOWPLAN_TEXT returns information as a set of rows that form a hierarchical tree representing the steps
taken by the SQL Server query processor as it executes each statement. Each statement reflected in the output
contains a single row with the text of the statement, followed by several rows with the details of the execution
steps. The table shows the column that the output contains.

COLUMN NAME DESCRIPTION

StmtText For rows which are not of type PLAN_ROW, this column
contains the text of the Transact-SQL statement. For rows of
type PLAN_ROW, this column contains a description of the
operation. This column contains the physical operator and
may optionally also contain the logical operator. This column
may also be followed by a description which is determined by
the physical operator. For more information about physical
operators, see the Argument column in SET SHOWPLAN_ALL
(Transact-SQL).

For more information about the physical and logical operators that can be seen in Showplan output, see
Showplan Logical and Physical Operators Reference.

Permissions
In order to use SET SHOWPLAN_TEXT, you must have sufficient permissions to execute the statements on which
SET SHOWPLAN_TEXT is executed, and you must have SHOWPLAN permission for all databases containing
referenced objects.
For SELECT, INSERT, UPDATE, DELETE, EXEC stored_procedure, and EXEC user_defined_function statements, to
produce a Showplan the user must:
Have the appropriate permissions to execute the Transact-SQL statements.
Have SHOWPLAN permission on all databases containing objects referenced by the Transact-SQL
statements, such as tables, views, and so on.
For all other statements, such as DDL, USE database_name, SET, DECLARE, dynamic SQL, and so on, only
the appropriate permissions to execute the Transact-SQL statements are needed.

Examples
This example shows how indexes are used by SQL Server as it processes the statements.
This is the query using an index:

USE AdventureWorks2012;
GO
SET SHOWPLAN_TEXT ON;
GO
SELECT *
FROM Production.Product
WHERE ProductID = 905;
GO
SET SHOWPLAN_TEXT OFF;
GO

Here is the result set:

StmtText
---------------------------------------------------
SELECT *
FROM Production.Product
WHERE ProductID = 905;

StmtText
--------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
|--Clustered Index Seek(OBJECT:([AdventureWorks2012].[Production].[Product].[PK_Product_ProductID]), SEEK:
([AdventureWorks2012].[Production].[Product].[ProductID]=CONVERT_IMPLICIT(int,[@1],0)) ORDERED FORWARD)

Here is the query not using an index:


USE AdventureWorks2012;
GO
SET SHOWPLAN_TEXT ON;
GO
SELECT *
FROM Production.ProductCostHistory
WHERE StandardCost < 500.00;
GO
SET SHOWPLAN_TEXT OFF;
GO

Here is the result set:

StmtText
------------------------------------------------------------------------
SELECT *
FROM Production.ProductCostHistory
WHERE StandardCost < 500.00;

StmtText
--------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------
|--Clustered Index Scan(OBJECT:([AdventureWorks2012].[Production].[ProductCostHistory].
[PK_ProductCostHistory_ProductCostID]), WHERE:([AdventureWorks2012].[Production].[ProductCostHistory].
[StandardCost]<[@1]))

See Also
Operators (Transact-SQL)
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)
SET SHOWPLAN_XML (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes SQL Server not to execute Transact-SQL statements. Instead, SQL Server returns detailed information
about how the statements are going to be executed in the form of a well-defined XML document.
Transact-SQL Syntax Conventions

Syntax
SET SHOWPLAN_XML { ON | OFF }

Remarks
The setting of SET SHOWPLAN_XML is set at execute or run time and not at parse time.
When SET SHOWPLAN_XML is ON, SQL Server returns execution plan information for each statement without
executing it, and Transact-SQL statements are not executed. After this option is set ON, execution plan information
about all subsequent Transact-SQL statements is returned until the option is set OFF. For example, if a CREATE
TABLE statement is executed while SET SHOWPLAN_XML is ON, SQL Server returns an error message from a
subsequent SELECT statement involving that same table; the specified table does not exist. Therefore, subsequent
references to this table fail. When SET SHOWPLAN_XML is OFF, SQL Server executes the statements without
generating a report.
SET SHOWPLAN_XML is intended to return output as nvarchar(max) for applications such as the sqlcmd
utility, where the XML output is subsequently used by other tools to display and process the query plan
information.

NOTE
The dynamic management view, sys.dm_exec_query_plan, returns the same information as SET SHOWPLAN_XML in the
xml data type. This information is returned from the query_plan column of sys.dm_exec_query_plan. For more
information, see sys.dm_exec_query_plan (Transact-SQL).

SET SHOWPLAN_XML cannot be specified inside a stored procedure. It must be the only statement in a batch.
SET SHOWPLAN_XML returns information as a set of XML documents. Each batch after the SET
SHOWPLAN_XML ON statement is reflected in the output by a single document. Each document contains the
text of the statements in the batch, followed by the details of the execution steps. The document shows the
estimated costs, numbers of rows, accessed indexes, types of operators performed, join order, and more
information about the execution plans.
The document containing the XML schema for the XML output by SET SHOWPLAN_XML is copied during setup
to a local directory on the computer on which Microsoft SQL Server is installed. It can be found on the drive
containing the SQL Server installation files, at:
\Microsoft SQL Server\130\Tools\Binn\schemas\sqlserver\2004\07\showplan\showplanxml.xsd
The Showplan Schema can also be found at this Web site.

NOTE
If Include Actual Execution Plan is selected in SQL Server Management Studio, this SET option does not produce XML
Showplan output. Clear the Include Actual Execution Plan button before using this SET option.

Permissions
In order to use SET SHOWPLAN_XML, you must have sufficient permissions to execute the statements on which
SET SHOWPLAN_XML is executed, and you must have SHOWPLAN permission for all databases containing
referenced objects.
For SELECT, INSERT, UPDATE, DELETE, EXEC stored_procedure, and EXEC user_defined_function statements, to
produce a Showplan the user must:
Have the appropriate permissions to execute the Transact-SQL statements.
Have SHOWPLAN permission on all databases containing objects referenced by the Transact-SQL
statements, such as tables, views, and so on.
For all other statements, such as DDL, USE database_name, SET, DECLARE, dynamic SQL, and so on, only
the appropriate permissions to execute the Transact-SQL statements are needed.

Examples
The two statements that follow use the SET SHOWPLAN_XML settings to show the way SQL Server analyzes
and optimizes the use of indexes in queries.
The first query uses the Equals comparison operator (=) in the WHERE clause on an indexed column. The second
query uses the LIKE operator in the WHERE clause. This forces SQL Server to use a clustered index scan and find
the data meeting the WHERE clause condition. The values in the EstimateRows and the
EstimatedTotalSubtreeCost attributes are smaller for the first indexed query, indicating that it is processed
much faster and uses fewer resources than the nonindexed query.

USE AdventureWorks2012;
GO
SET SHOWPLAN_XML ON;
GO
-- First query.
SELECT BusinessEntityID
FROM HumanResources.Employee
WHERE NationalIDNumber = '509647174';
GO
-- Second query.
SELECT BusinessEntityID, JobTitle
FROM HumanResources.Employee
WHERE JobTitle LIKE 'Production%';
GO
SET SHOWPLAN_XML OFF;

See Also
SET Statements (Transact-SQL)
SET STATISTICS IO (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes SQL Server to display information regarding the amount of disk activity generated by Transact-SQL
statements.
Transact-SQL Syntax Conventions

Syntax
SET STATISTICS IO { ON | OFF }

Remarks
When STATISTICS IO is ON, statistical information is displayed. When OFF, the information is not displayed.
After this option is set ON, all subsequent Transact-SQL statements return the statistical information until the
option is set to OFF.
The following table lists and describes the output items.

OUTPUT ITEM            MEANING

Table                  Name of the table.

Scan count             Number of seeks/scans started after reaching the leaf level in any direction to
                       retrieve all the values to construct the final dataset for the output.

                       Scan count is 0 if the index used is a unique index or clustered index on a
                       primary key and you are seeking for only one value. For example,
                       WHERE Primary_Key_Column = <value>.

                       Scan count is 1 when you are searching for one value using a non-unique
                       clustered index which is defined on a non-primary key column. This is done to
                       check for duplicate values for the key value that you are searching for. For
                       example, WHERE Clustered_Index_Key_Column = <value>.

                       Scan count is N when N is the number of different seeks/scans started towards
                       the left or right side at the leaf level after locating a key value using the
                       index key.

logical reads          Number of pages read from the data cache.

physical reads         Number of pages read from disk.

read-ahead reads       Number of pages placed into the cache for the query.

lob logical reads      Number of text, ntext, image, or large value type (varchar(max),
                       nvarchar(max), varbinary(max)) pages read from the data cache.

lob physical reads     Number of text, ntext, image, or large value type pages read from disk.

lob read-ahead reads   Number of text, ntext, image, or large value type pages placed into the cache
                       for the query.
The setting of SET STATISTICS IO is set at execute or run time and not at parse time.

NOTE
When Transact-SQL statements retrieve LOB columns, some LOB retrieval operations might require traversing the LOB tree
multiple times. This may cause SET STATISTICS IO to report higher than expected logical reads.

Permissions
To use SET STATISTICS IO, users must have the appropriate permissions to execute the Transact-SQL statement.
The SHOWPLAN permission is not required.

Examples
This example shows how many logical and physical reads are used by SQL Server as it processes the statements.

USE AdventureWorks2012;
GO
SET STATISTICS IO ON;
GO
SELECT *
FROM Production.ProductCostHistory
WHERE StandardCost < 500.00;
GO
SET STATISTICS IO OFF;
GO

Here is the result set:

Table 'ProductCostHistory'. Scan count 1, logical reads 5, physical
reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0,
lob read-ahead reads 0.

See Also
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)
SET STATISTICS TIME (Transact-SQL)
SET STATISTICS PROFILE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Displays the profile information for a statement. STATISTICS PROFILE works for ad hoc queries, views, and stored
procedures.
Transact-SQL Syntax Conventions

Syntax
SET STATISTICS PROFILE { ON | OFF }

Remarks
When STATISTICS PROFILE is ON, each executed query returns its regular result set, followed by an additional
result set that shows a profile of the query execution.
The additional result set contains the SHOWPLAN_ALL columns for the query and these additional columns.

COLUMN NAME DESCRIPTION

Rows Actual number of rows produced by each operator

Executes Number of times the operator has been executed
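
For example, a minimal sketch (the query itself is illustrative; any statement works): each executed query
returns its regular result set, followed by the execution profile with the Rows and Executes columns.

USE AdventureWorks2012;
GO
SET STATISTICS PROFILE ON;
GO
SELECT ProductID, Name
FROM Production.Product
WHERE ProductID < 710;
GO
SET STATISTICS PROFILE OFF;
GO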

Permissions
To use SET STATISTICS PROFILE and view the output, users must have the following permissions:
Appropriate permissions to execute the Transact-SQL statements.
SHOWPLAN permission on all databases containing objects that are referenced by the Transact-SQL
statements.
For Transact-SQL statements that do not produce STATISTICS PROFILE result sets, only the appropriate
permissions to execute the Transact-SQL statements are required. For Transact-SQL statements that do
produce STATISTICS PROFILE result sets, checks for both the Transact-SQL statement execution
permission and the SHOWPLAN permission must succeed, or the Transact-SQL statement execution is
aborted and no Showplan information is generated.

See Also
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)
SET STATISTICS TIME (Transact-SQL)
SET STATISTICS IO (Transact-SQL)
SET STATISTICS TIME (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Displays the number of milliseconds required to parse, compile, and execute each statement.
Transact-SQL Syntax Conventions

Syntax
SET STATISTICS TIME { ON | OFF }

Remarks
When SET STATISTICS TIME is ON, the time statistics for a statement are displayed. When OFF, the time
statistics are not displayed.
The setting of SET STATISTICS TIME is set at execute or run time and not at parse time.
Microsoft SQL Server is unable to provide accurate statistics in fiber mode, which is activated when you enable
the lightweight pooling configuration option.
The cpu column in the sysprocesses table is only updated when a query executes with SET STATISTICS TIME
ON. When SET STATISTICS TIME is OFF, 0 is returned.
ON and OFF settings also affect the CPU column in the Process Info View for Current Activity in SQL Server
Management Studio.

Permissions
To use SET STATISTICS TIME, users must have the appropriate permissions to execute the Transact-SQL
statement. The SHOWPLAN permission is not required.

Examples
This example shows the server execution, parse, and compile times.

USE AdventureWorks2012;
GO
SET STATISTICS TIME ON;
GO
SELECT ProductID, StartDate, EndDate, StandardCost
FROM Production.ProductCostHistory
WHERE StandardCost < 500.00;
GO
SET STATISTICS TIME OFF;
GO

Here is the result set:

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.

(269 row(s) affected)

SQL Server Execution Times:
   CPU time = 0 ms, elapsed time = 2 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.

See Also
SET Statements (Transact-SQL)
SET STATISTICS IO (Transact-SQL)
SET STATISTICS XML (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes Microsoft SQL Server to execute Transact-SQL statements and generate detailed information about how
the statements were executed in the form of a well-defined XML document.
Transact-SQL Syntax Conventions

Syntax
SET STATISTICS XML { ON | OFF }

Remarks
The setting of SET STATISTICS XML is set at execute or run time and not at parse time.
When SET STATISTICS XML is ON, SQL Server returns execution information for each statement after executing
it. After this option is set ON, information about all subsequent Transact-SQL statements is returned until the
option is set to OFF. Note that SET STATISTICS XML need not be the only statement in a batch.
SET STATISTICS XML returns output as nvarchar(max) for applications, such as the sqlcmd utility, where the
XML output is subsequently used by other tools to display and process the query plan information.
SET STATISTICS XML returns information as a set of XML documents. Each statement after the SET STATISTICS
XML ON statement is reflected in the output by a single document. Each document contains the text of the
statement, followed by the details of the execution steps. The output shows run-time information such as the costs,
accessed indexes, and types of operations performed, join order, the number of times a physical operation is
performed, the number of rows each physical operator produced, and more.
The document containing the XML schema for the XML output by SET STATISTICS XML is copied during setup to
a local directory on the computer on which Microsoft SQL Server is installed. It can be found on the drive
containing the SQL Server installation files, at:
\Microsoft SQL Server\100\Tools\Binn\schemas\sqlserver\2004\07\showplan\showplanxml.xsd
The Showplan Schema can also be found at this Web site.
SET STATISTICS PROFILE and SET STATISTICS XML are counterparts of each other. The former produces textual
output; the latter produces XML output. In future versions of SQL Server, new query execution plan information
will only be displayed through the SET STATISTICS XML statement, not the SET STATISTICS PROFILE statement.

NOTE
If Include Actual Execution Plan is selected in SQL Server Management Studio, this SET option does not produce XML
Showplan output. Clear the Include Actual Execution Plan button before using this SET option.

Permissions
To use SET STATISTICS XML and view the output, users must have the following permissions:
Appropriate permissions to execute the Transact-SQL statements.
SHOWPLAN permission on all databases containing objects that are referenced by the Transact-SQL
statements.
For Transact-SQL statements that do not produce STATISTICS XML result sets, only the appropriate
permissions to execute the Transact-SQL statements are required. For Transact-SQL statements that do
produce STATISTICS XML result sets, checks for both the Transact-SQL statement execution permission
and the SHOWPLAN permission must succeed, or the Transact-SQL statement execution is aborted and no
Showplan information is generated.

Examples
The two statements that follow use the SET STATISTICS XML settings to show the way SQL Server analyzes and
optimizes the use of indexes in queries. The first query uses the Equals (=) comparison operator in the WHERE
clause on an indexed column. The second query uses the LIKE operator in the WHERE clause. This forces SQL
Server to use a clustered index scan to find the data that satisfies the WHERE clause condition. The values in the
EstimateRows and the EstimatedTotalSubtreeCost attributes are smaller for the first indexed query indicating
that it was processed much faster and used fewer resources than the nonindexed query.

USE AdventureWorks2012;
GO
SET STATISTICS XML ON;
GO
-- First query.
SELECT BusinessEntityID
FROM HumanResources.Employee
WHERE NationalIDNumber = '509647174';
GO
-- Second query.
SELECT BusinessEntityID, JobTitle
FROM HumanResources.Employee
WHERE JobTitle LIKE 'Production%';
GO
SET STATISTICS XML OFF;
GO

See Also
SET SHOWPLAN_XML (Transact-SQL)
sqlcmd Utility
SET TEXTSIZE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the size of varchar(max), nvarchar(max), varbinary(max), text, ntext, and image data returned by a
SELECT statement.

IMPORTANT
ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data types
in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and
varbinary(max) instead.

Transact-SQL Syntax Conventions

Syntax
SET TEXTSIZE { number }

Arguments
number
Is the length of varchar(max), nvarchar(max), varbinary(max), text, ntext, or image data, in bytes. number is
an integer with a maximum value of 2147483647 (2 GB ). A value of -1 indicates unlimited size. A value of 0 resets
the size to the default value of 4 KB.
The SQL Server Native Client (10.0 and higher) and ODBC Driver for SQL Server automatically specify -1
(unlimited) when connecting.
Drivers older than SQL Server 2008: The SQL Server Native Client ODBC driver and SQL Server Native Client
OLE DB Provider (version 9) for SQL Server automatically set TEXTSIZE to 2147483647 when connecting.

Remarks
Setting SET TEXTSIZE affects the @@TEXTSIZE function.
The setting of SET TEXTSIZE is set at execute or run time and not at parse time.
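For example, a minimal sketch that caps returned large-value data at 64 KB for the session and then
restores the default:

SET TEXTSIZE 65536;
SELECT @@TEXTSIZE AS CurrentTextSize;  -- returns 65536
SET TEXTSIZE 0;  -- reset to the default size of 4 KB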

Permissions
Requires membership in the public role.

See Also
@@TEXTSIZE (Transact-SQL)
Data Types (Transact-SQL)
SET Statements (Transact-SQL)
SET TRANSACTION ISOLATION LEVEL (Transact-
SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls the locking and row versioning behavior of Transact-SQL statements issued by a connection to SQL
Server.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

SET TRANSACTION ISOLATION LEVEL


{ READ UNCOMMITTED
| READ COMMITTED
| REPEATABLE READ
| SNAPSHOT
| SERIALIZABLE
}

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

Arguments
READ UNCOMMITTED
Specifies that statements can read rows that have been modified by other transactions but not yet committed.
Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other transactions
from modifying data read by the current transaction. READ UNCOMMITTED transactions are also not blocked
by exclusive locks that would prevent the current transaction from reading rows that have been modified but not
committed by other transactions. When this option is set, it is possible to read uncommitted modifications, which
are called dirty reads. Values in the data can be changed and rows can appear or disappear in the data set before
the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT
statements in a transaction. This is the least restrictive of the isolation levels.
In SQL Server, you can also minimize locking contention while protecting transactions from dirty reads of
uncommitted data modifications using either:
The READ COMMITTED isolation level with the READ_COMMITTED_SNAPSHOT database option set to
ON.
The SNAPSHOT isolation level.
READ COMMITTED
Specifies that statements cannot read data that has been modified but not committed by other
transactions. This prevents dirty reads. Data can be changed by other transactions between individual
statements within the current transaction, resulting in nonrepeatable reads or phantom data. This option is
the SQL Server default.
The behavior of READ COMMITTED depends on the setting of the READ_COMMITTED_SNAPSHOT
database option:
If READ_COMMITTED_SNAPSHOT is set to OFF (the default), the Database Engine uses shared locks to
prevent other transactions from modifying rows while the current transaction is running a read operation.
The shared locks also block the statement from reading rows modified by other transactions until the other
transaction is completed. The shared lock type determines when it will be released. Row locks are released
before the next row is processed. Page locks are released when the next page is read, and table locks are
released when the statement finishes.

If READ_COMMITTED_SNAPSHOT is set to ON, the Database Engine uses row versioning to present each
statement with a transactionally consistent snapshot of the data as it existed at the start of the statement.
Locks are not used to protect the data from updates by other transactions.

NOTE
Snapshot isolation supports FILESTREAM data. Under snapshot isolation mode, FILESTREAM data read by any
statement in a transaction will be the transactionally consistent version of the data that existed at the start of the
transaction.

When the READ_COMMITTED_SNAPSHOT database option is ON, you can use the
READCOMMITTEDLOCK table hint to request shared locking instead of row versioning for individual
statements in transactions running at the READ COMMITTED isolation level.
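For example, a minimal sketch that requests shared locking for a single statement (assumes the current
database has READ_COMMITTED_SNAPSHOT set to ON):

USE AdventureWorks2012;
GO
SELECT DepartmentID, Name
FROM HumanResources.Department WITH (READCOMMITTEDLOCK);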

NOTE
When you set the READ_COMMITTED_SNAPSHOT option, only the connection executing the ALTER DATABASE command is
allowed in the database. There must be no other open connection in the database until ALTER DATABASE is complete. The
database does not have to be in single-user mode.

REPEATABLE READ
Specifies that statements cannot read data that has been modified but not yet committed by other transactions
and that no other transactions can modify data that has been read by the current transaction until the current
transaction completes.
Shared locks are placed on all data read by each statement in the transaction and are held until the transaction
completes. This prevents other transactions from modifying any rows that have been read by the current
transaction. Other transactions can insert new rows that match the search conditions of statements issued by the
current transaction. If the current transaction then retries the statement it will retrieve the new rows, which results
in phantom reads. Because shared locks are held to the end of a transaction instead of being released at the end
of each statement, concurrency is lower than the default READ COMMITTED isolation level. Use this option only
when necessary.
SNAPSHOT
Specifies that data read by any statement in a transaction will be the transactionally consistent version of the data
that existed at the start of the transaction. The transaction can only recognize data modifications that were
committed before the start of the transaction. Data modifications made by other transactions after the start of the
current transaction are not visible to statements executing in the current transaction. The effect is as if the
statements in a transaction get a snapshot of the committed data as it existed at the start of the transaction.
Except when a database is being recovered, SNAPSHOT transactions do not request locks when reading data.
SNAPSHOT transactions reading data do not block other transactions from writing data. Transactions writing
data do not block SNAPSHOT transactions from reading data.
During the roll-back phase of a database recovery, SNAPSHOT transactions will request a lock if an attempt is
made to read data that is locked by another transaction that is being rolled back. The SNAPSHOT transaction is
blocked until that transaction has been rolled back. The lock is released immediately after it has been granted.
The ALLOW_SNAPSHOT_ISOLATION database option must be set to ON before you can start a transaction
that uses the SNAPSHOT isolation level. If a transaction using the SNAPSHOT isolation level accesses data in
multiple databases, ALLOW_SNAPSHOT_ISOLATION must be set to ON in each database.
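For example, a minimal sketch of enabling and using snapshot isolation (assumes you can change options on
the AdventureWorks2012 sample database):

USE AdventureWorks2012;
GO
ALTER DATABASE AdventureWorks2012 SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
GO
BEGIN TRANSACTION;
SELECT Name FROM Production.Product WHERE ProductID = 1;
COMMIT TRANSACTION;
GO
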
A transaction that started under another isolation level cannot be switched to the SNAPSHOT isolation level;
doing so causes the transaction to abort. If a transaction starts in the SNAPSHOT isolation level, you can change
it to another isolation level and then back to SNAPSHOT. A transaction starts the first time it accesses data.
A transaction running under SNAPSHOT isolation level can view changes made by that transaction. For example,
if the transaction performs an UPDATE on a table and then issues a SELECT statement against the same table, the
modified data will be included in the result set.

NOTE
Under snapshot isolation mode, FILESTREAM data read by any statement in a transaction will be the transactionally
consistent version of the data that existed at the start of the transaction, not at the start of the statement.

SERIALIZABLE
Specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current
transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any
statements in the current transaction until the current transaction completes.
Range locks are placed in the range of key values that match the search conditions of each statement
executed in a transaction. This blocks other transactions from updating or inserting any rows that would
qualify for any of the statements executed by the current transaction. This means that if any of the
statements in a transaction are executed a second time, they will read the same set of rows. The range locks
are held until the transaction completes. This is the most restrictive of the isolation levels because it locks
entire ranges of keys and holds the locks until the transaction completes. Because concurrency is lower, use
this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all
SELECT statements in a transaction.

Remarks
Only one of the isolation level options can be set at a time, and it remains set for that connection until it is
explicitly changed. All read operations performed within the transaction operate under the rules for the specified
isolation level unless a table hint in the FROM clause of a statement specifies different locking or versioning
behavior for a table.
The transaction isolation levels define the type of locks acquired on read operations. Shared locks acquired for
READ COMMITTED or REPEATABLE READ are generally row locks, although the row locks can be escalated to
page or table locks if a significant number of the rows in a page or table are referenced by the read. If a row is
modified by the transaction after it has been read, the transaction acquires an exclusive lock to protect that row,
and the exclusive lock is retained until the transaction completes. For example, if a REPEATABLE READ
transaction has a shared lock on a row, and the transaction then modifies the row, the shared row lock is
converted to an exclusive row lock.
With one exception, you can switch from one isolation level to another at any time during a transaction. The
exception occurs when changing from any isolation level to SNAPSHOT isolation. Doing this causes the
transaction to fail and roll back. However, you can change a transaction started in SNAPSHOT isolation to any
other isolation level.
When you change a transaction from one isolation level to another, resources that are read after the change are
protected according to the rules of the new level. Resources that are read before the change continue to be
protected according to the rules of the previous level. For example, if a transaction changed from READ
COMMITTED to SERIALIZABLE, the shared locks acquired after the change are now held until the end of the
transaction.
If you issue SET TRANSACTION ISOLATION LEVEL in a stored procedure or trigger, when the object returns
control the isolation level is reset to the level in effect when the object was invoked. For example, if you set
REPEATABLE READ in a batch, and the batch then calls a stored procedure that sets the isolation level to
SERIALIZABLE, the isolation level setting reverts to REPEATABLE READ when the stored procedure returns
control to the batch.

NOTE
User-defined functions and common language runtime (CLR) user-defined types cannot execute SET TRANSACTION
ISOLATION LEVEL. However, you can override the isolation level by using a table hint. For more information, see Table Hints
(Transact-SQL).

When you use sp_bindsession to bind two sessions, each session retains its isolation level setting. Using SET
TRANSACTION ISOLATION LEVEL to change the isolation level setting of one session does not affect the
setting of any other sessions bound to it.
SET TRANSACTION ISOLATION LEVEL takes effect at execute or run time, and not at parse time.
Optimized bulk load operations on heaps block queries that are running under the following isolation levels:
SNAPSHOT
READ UNCOMMITTED
READ COMMITTED using row versioning
Conversely, queries that run under these isolation levels block optimized bulk load operations on heaps.
For more information about bulk load operations, see Bulk Import and Export of Data (SQL Server).
FILESTREAM-enabled databases support the following transaction isolation levels.

ISOLATION LEVEL            TRANSACT-SQL ACCESS      FILE SYSTEM ACCESS

Read uncommitted           SQL Server 2017          Unsupported
Read committed             SQL Server 2017          SQL Server 2017
Repeatable read            SQL Server 2017          Unsupported
Serializable               SQL Server 2017          Unsupported
Read committed snapshot    SQL Server 2017          SQL Server 2017
Snapshot                   SQL Server 2017          SQL Server 2017

Examples
The following example sets the TRANSACTION ISOLATION LEVEL for the session. For each Transact-SQL statement
that follows, SQL Server holds all of the shared locks until the end of the transaction.

USE AdventureWorks2012;
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
GO
BEGIN TRANSACTION;
GO
SELECT *
FROM HumanResources.EmployeePayHistory;
GO
SELECT *
FROM HumanResources.Department;
GO
COMMIT TRANSACTION;
GO

See Also
ALTER DATABASE (Transact-SQL )
DBCC USEROPTIONS (Transact-SQL )
SELECT (Transact-SQL )
SET Statements (Transact-SQL )
Table Hints (Transact-SQL )
SET XACT_ABORT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse

NOTE
The THROW statement honors SET XACT_ABORT. RAISERROR does not. New applications should use THROW instead of
RAISERROR.

Specifies whether SQL Server automatically rolls back the current transaction when a Transact-SQL statement
raises a run-time error.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

SET XACT_ABORT { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

SET XACT_ABORT ON

Remarks
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is
terminated and rolled back.
When SET XACT_ABORT is OFF, in some cases only the Transact-SQL statement that raised the error is rolled
back and the transaction continues processing. Depending upon the severity of the error, the entire transaction
may be rolled back even when SET XACT_ABORT is OFF. OFF is the default setting.
Compile errors, such as syntax errors, are not affected by SET XACT_ABORT.
XACT_ABORT must be set ON for data modification statements in an implicit or explicit transaction against most
OLE DB providers, including SQL Server. The only case where this option is not required is if the provider
supports nested transactions.
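For example, a data modification against a linked server inside an explicit transaction typically requires the setting shown in this sketch (the linked server, database, and table names are hypothetical, and MS DTC must be available for the distributed transaction):

SET XACT_ABORT ON;
GO
BEGIN DISTRIBUTED TRANSACTION;
INSERT INTO LinkedSrv.RemoteDb.dbo.RemoteTable (col1) VALUES (1);
COMMIT TRANSACTION;
GO
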
When ANSI_WARNINGS=OFF, permissions violations cause transactions to abort.
The setting of SET XACT_ABORT is set at execute or run time and not at parse time.
To view the current setting of SET XACT_ABORT, run the following query.

DECLARE @XACT_ABORT VARCHAR(3) = 'OFF';

-- Bit 16384 (0x4000) of @@OPTIONS reflects the XACT_ABORT setting.
IF ( (16384 & @@OPTIONS) = 16384 ) SET @XACT_ABORT = 'ON';
SELECT @XACT_ABORT AS XACT_ABORT;
Examples
The following code example causes a foreign key violation error in a transaction that has other Transact-SQL
statements. In the first set of statements, the error is generated, but the other statements execute successfully and
the transaction is successfully committed. In the second set of statements, SET XACT_ABORT is set to ON. This
causes the statement error to terminate the batch, and the transaction is rolled back.

USE AdventureWorks2012;
GO
IF OBJECT_ID(N't2', N'U') IS NOT NULL
DROP TABLE t2;
GO
IF OBJECT_ID(N't1', N'U') IS NOT NULL
DROP TABLE t1;
GO
CREATE TABLE t1
(a INT NOT NULL PRIMARY KEY);
CREATE TABLE t2
(a INT NOT NULL REFERENCES t1(a));
GO
INSERT INTO t1 VALUES (1);
INSERT INTO t1 VALUES (3);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (6);
GO
SET XACT_ABORT OFF;
GO
BEGIN TRANSACTION;
INSERT INTO t2 VALUES (1);
INSERT INTO t2 VALUES (2); -- Foreign key error.
INSERT INTO t2 VALUES (3);
COMMIT TRANSACTION;
GO
SET XACT_ABORT ON;
GO
BEGIN TRANSACTION;
INSERT INTO t2 VALUES (4);
INSERT INTO t2 VALUES (5); -- Foreign key error.
INSERT INTO t2 VALUES (6);
COMMIT TRANSACTION;
GO
-- SELECT shows only keys 1 and 3 added.
-- Key 2 insert failed and was rolled back, but
-- XACT_ABORT was OFF and rest of transaction
-- succeeded.
-- Key 5 insert error with XACT_ABORT ON caused
-- all of the second transaction to roll back.
SELECT *
FROM t2;
GO

See Also
THROW (Transact-SQL )
BEGIN TRANSACTION (Transact-SQL )
COMMIT TRANSACTION (Transact-SQL )
ROLLBACK TRANSACTION (Transact-SQL )
SET Statements (Transact-SQL )
@@TRANCOUNT (Transact-SQL )
TRUNCATE TABLE (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes all rows from a table or specified partitions of a table, without logging the individual row deletions.
TRUNCATE TABLE is similar to the DELETE statement with no WHERE clause; however, TRUNCATE TABLE is
faster and uses fewer system and transaction log resources.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

TRUNCATE TABLE
[ { database_name .[ schema_name ] . | schema_name . } ]
table_name
[ WITH ( PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ) ]
[ ; ]

<range> ::=
<partition_number_expression> TO <partition_number_expression>

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

TRUNCATE TABLE [ { database_name . [ schema_name ] . | schema_name . } ] table_name
[;]

Arguments
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to truncate or from which all rows are removed. table_name must be a literal. table_name
cannot be the OBJECT_ID() function or a variable.
WITH ( PARTITIONS ( { <partition_number_expression> | <range> } [ , ...n ] ) )
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version)
Specifies the partitions to truncate or from which all rows are removed. If the table is not partitioned, the WITH
PARTITIONS argument will generate an error. If the WITH PARTITIONS clause is not provided, the entire table
will be truncated.
<partition_number_expression> can be specified in the following ways:
Provide the number of a partition, for example: WITH (PARTITIONS (2))
Provide the partition numbers for several individual partitions separated by commas, for example:
WITH (PARTITIONS (1, 5))

Provide both ranges and individual partitions, for example: WITH (PARTITIONS (2, 4, 6 TO 8))

<range> can be specified as partition numbers separated by the word TO, for example:
WITH (PARTITIONS (6 TO 8))

To truncate a partitioned table, the table and indexes must be aligned (partitioned on the same partition
function).

Remarks
Compared to the DELETE statement, TRUNCATE TABLE has the following advantages:
Less transaction log space is used.
The DELETE statement removes rows one at a time and records an entry in the transaction log for each
deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table
data and records only the page deallocations in the transaction log.
Fewer locks are typically used.
When the DELETE statement is executed using a row lock, each row in the table is locked for deletion.
TRUNCATE TABLE always locks the table (including a schema (SCH-M) lock) and page but not each row.
Without exception, zero pages are left in the table.
After a DELETE statement is executed, the table can still contain empty pages. For example, empty pages in
a heap cannot be deallocated without at least an exclusive (LCK_M_X) table lock. If the delete operation
does not use a table lock, the table (heap) will contain many empty pages. For indexes, the delete operation
can leave empty pages behind, although these pages will be deallocated quickly by a background cleanup
process.
TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints,
indexes, and so on remain. To remove the table definition in addition to its data, use the DROP TABLE
statement.
If the table contains an identity column, the counter for that column is reset to the seed value defined for
the column. If no seed was defined, the default value 1 is used. To retain the identity counter, use DELETE
instead.
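The following sketch (hypothetical table) contrasts the two behaviors: TRUNCATE TABLE restarts the identity counter at the seed, while DELETE retains it.

CREATE TABLE dbo.IdentityDemo (id INT IDENTITY(100, 1), val INT);  -- hypothetical table
INSERT INTO dbo.IdentityDemo (val) VALUES (1), (2);
SELECT IDENT_CURRENT('dbo.IdentityDemo');   -- returns 101
TRUNCATE TABLE dbo.IdentityDemo;
INSERT INTO dbo.IdentityDemo (val) VALUES (3);
SELECT IDENT_CURRENT('dbo.IdentityDemo');   -- returns 100: counter restarted at the seed
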

Restrictions
You cannot use TRUNCATE TABLE on tables that:
Are referenced by a FOREIGN KEY constraint. (You can truncate a table that has a foreign key that
references itself.)
Participate in an indexed view.
Are published by using transactional replication or merge replication.
For tables with one or more of these characteristics, use the DELETE statement instead.
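The following sketch (hypothetical tables) illustrates the foreign key restriction, including the self-reference exception:

CREATE TABLE dbo.Parent (id INT PRIMARY KEY);
CREATE TABLE dbo.Child (id INT REFERENCES dbo.Parent (id));
TRUNCATE TABLE dbo.Parent;   -- fails: the table is referenced by a FOREIGN KEY constraint
DELETE FROM dbo.Parent;      -- works: use DELETE for such tables

CREATE TABLE dbo.Employee2 (id INT PRIMARY KEY, manager_id INT REFERENCES dbo.Employee2 (id));
TRUNCATE TABLE dbo.Employee2;   -- succeeds: the foreign key references the table itself
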
TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions.
For more information, see CREATE TRIGGER (Transact-SQL ).
In Azure SQL Data Warehouse and Parallel Data Warehouse:
TRUNCATE TABLE is not allowed within the EXPLAIN statement.
TRUNCATE TABLE cannot be run inside a transaction.

Truncating Large Tables


Microsoft SQL Server has the ability to drop or truncate tables that have more than 128 extents without holding
simultaneous locks on all the extents required for the drop.

Permissions
The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table
owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and
are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a
stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.
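For example, the following sketch (hypothetical procedure and user names) wraps TRUNCATE TABLE in a stored procedure that runs as its owner, and then grants only EXECUTE on the module:

CREATE PROCEDURE dbo.usp_TruncateJobCandidate   -- hypothetical name
WITH EXECUTE AS OWNER
AS
TRUNCATE TABLE HumanResources.JobCandidate;
GO
GRANT EXECUTE ON dbo.usp_TruncateJobCandidate TO JobCandidateCleanupUser;  -- hypothetical user
GO
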

Examples
A. Truncate a Table
The following example removes all data from the JobCandidate table. SELECT statements are included before and
after the TRUNCATE TABLE statement to compare results.

USE AdventureWorks2012;
GO
SELECT COUNT(*) AS BeforeTruncateCount
FROM HumanResources.JobCandidate;
GO
TRUNCATE TABLE HumanResources.JobCandidate;
GO
SELECT COUNT(*) AS AfterTruncateCount
FROM HumanResources.JobCandidate;
GO

B. Truncate Table Partitions


Applies to: SQL Server ( SQL Server 2016 (13.x) through current version)
The following example truncates specified partitions of a partitioned table. The WITH (PARTITIONS (2, 4, 6 TO 8))
syntax causes partition numbers 2, 4, 6, 7, and 8 to be truncated.

TRUNCATE TABLE PartitionTable1
WITH (PARTITIONS (2, 4, 6 TO 8));
GO

See Also
DELETE (Transact-SQL )
DROP TABLE (Transact-SQL )
IDENTITY (Property) (Transact-SQL )
UPDATE STATISTICS (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Updates query optimization statistics on a table or indexed view. By default, the query optimizer already updates
statistics as necessary to improve the query plan; in some cases you can improve query performance by using
UPDATE STATISTICS or the stored procedure sp_updatestats to update statistics more frequently than the default
updates.
Updating statistics ensures that queries compile with up-to-date statistics. However, updating statistics causes
queries to recompile. We recommend not updating statistics too frequently because there is a performance
tradeoff between improving query plans and the time it takes to recompile queries. The specific tradeoffs depend
on your application. UPDATE STATISTICS can use tempdb to sort the sample of rows for building statistics.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

UPDATE STATISTICS table_or_indexed_view_name
[
    {
        { index_or_statistics_name }
        | ( { index_or_statistics_name } [ ,...n ] )
    }
]
[ WITH
    [
        FULLSCAN
        [ [ , ] PERSIST_SAMPLE_PERCENT = { ON | OFF } ]
        | SAMPLE number { PERCENT | ROWS }
        [ [ , ] PERSIST_SAMPLE_PERCENT = { ON | OFF } ]
        | RESAMPLE
        [ ON PARTITIONS ( { <partition_number> | <range> } [, ...n] ) ]
        | <update_stats_stream_option> [ ,...n ]
    ]
    [ [ , ] [ ALL | COLUMNS | INDEX ]
    [ [ , ] NORECOMPUTE ]
    [ [ , ] INCREMENTAL = { ON | OFF } ]
    [ [ , ] MAXDOP = max_degree_of_parallelism ]
] ;

<update_stats_stream_option> ::=
    [ STATS_STREAM = stats_stream ]
    [ ROWCOUNT = numeric_constant ]
    [ PAGECOUNT = numeric_constant ]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

UPDATE STATISTICS [ schema_name . ] table_name
[ ( { statistics_name | index_name } ) ]
[ WITH
{
FULLSCAN
| SAMPLE number PERCENT
| RESAMPLE
}
]
[;]

Arguments
table_or_indexed_view_name
Is the name of the table or indexed view that contains the statistics object.
index_or_statistics_name
Is the name of the index to update statistics on or name of the statistics to update. If index_or_statistics_name is
not specified, the query optimizer updates all statistics for the table or indexed view. This includes statistics created
using the CREATE STATISTICS statement, single-column statistics created when AUTO_CREATE_STATISTICS is
on, and statistics created for indexes.
For more information about AUTO_CREATE_STATISTICS, see ALTER DATABASE SET Options (Transact-SQL ). To
view all indexes for a table or view, you can use sp_helpindex.
FULLSCAN
Compute statistics by scanning all rows in the table or indexed view. FULLSCAN and SAMPLE 100 PERCENT
have the same results. FULLSCAN cannot be used with the SAMPLE option.
SAMPLE number { PERCENT | ROWS }
Specifies the approximate percentage or number of rows in the table or indexed view for the query optimizer to
use when it updates statistics. For PERCENT, number can be from 0 through 100 and for ROWS, number can be
from 0 to the total number of rows. The actual percentage or number of rows the query optimizer samples might
not match the percentage or number specified. For example, the query optimizer scans all rows on a data page.
SAMPLE is useful for special cases in which the query plan, based on default sampling, is not optimal. In most
situations, it is not necessary to specify SAMPLE because the query optimizer uses sampling and determines the
statistically significant sample size by default, as required to create high-quality query plans.
Starting with SQL Server 2016 (13.x), sampling of data to build statistics is done in parallel, when using
compatibility level 130, to improve the performance of statistics collection. The query optimizer uses parallel
sample statistics whenever a table size exceeds a certain threshold.
SAMPLE cannot be used with the FULLSCAN option. When neither SAMPLE nor FULLSCAN is specified, the
query optimizer uses sampled data and computes the sample size by default.
We recommend against specifying 0 PERCENT or 0 ROWS. When 0 PERCENT or ROWS is specified, the
statistics object is updated but does not contain statistics data.
For most workloads, a full scan is not required, and default sampling is adequate.
However, certain workloads that are sensitive to widely varying data distributions may require an increased
sample size, or even a full scan.
For more information, see the CSS SQL Escalation Services blog.
RESAMPLE
Update each statistic using its most recent sample rate.
Using RESAMPLE can result in a full-table scan. For example, statistics for indexes use a full-table scan for their
sample rate. When none of the sample options (SAMPLE, FULLSCAN, RESAMPLE ) are specified, the query
optimizer samples the data and computes the sample size by default.
PERSIST_SAMPLE_PERCENT = { ON | OFF }
When ON, the statistics will retain the set sampling percentage for subsequent updates that do not explicitly
specify a sampling percentage. When OFF, statistics sampling percentage will get reset to default sampling in
subsequent updates that do not explicitly specify a sampling percentage. The default is OFF.

NOTE
If AUTO_UPDATE_STATISTICS is executed, it uses the persisted sampling percentage if available, or uses the default
sampling percentage if not. RESAMPLE behavior is not affected by this option.

TIP
DBCC SHOW_STATISTICS and sys.dm_db_stats_properties expose the persisted sample percent value for the selected
statistic.

Applies to: SQL Server 2016 (13.x) (starting with SQL Server 2016 (13.x) SP1 CU4) through SQL Server 2017
(starting with SQL Server 2017 (14.x) CU1).
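A minimal sketch, reusing the Products statistics from example C later in this topic: persist a 50 percent sample so that later updates without an explicit SAMPLE clause keep using it.

UPDATE STATISTICS Production.Product (Products)
WITH SAMPLE 50 PERCENT, PERSIST_SAMPLE_PERCENT = ON;
-- Inspect the statistics header, which reports the persisted sample.
DBCC SHOW_STATISTICS ('Production.Product', Products) WITH STAT_HEADER;
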
ON PARTITIONS ( { <partition_number> | <range> } [, ...n] )
Forces the leaf-level statistics covering the partitions specified in the ON PARTITIONS clause to be recomputed,
and then merged to build the global statistics. WITH RESAMPLE is required because partition statistics built with
different sample rates cannot be merged together.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017
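A minimal sketch, assuming a partitioned table dbo.PartitionedSales with incremental statistics on index IX_PartitionedSales (both names hypothetical): recompute only partitions 3 and 5, then merge them into the global statistics.

UPDATE STATISTICS dbo.PartitionedSales (IX_PartitionedSales)
WITH RESAMPLE ON PARTITIONS (3, 5);
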
ALL | COLUMNS | INDEX
Update all existing statistics, statistics created on one or more columns, or statistics created for indexes. If none of
the options are specified, the UPDATE STATISTICS statement updates all statistics on the table or indexed view.
NORECOMPUTE
Disable the automatic statistics update option, AUTO_UPDATE_STATISTICS, for the specified statistics. If this
option is specified, the query optimizer completes this statistics update and disables future updates.
To re-enable the AUTO_UPDATE_STATISTICS option behavior, run UPDATE STATISTICS again without the
NORECOMPUTE option or run sp_autostats.

WARNING
Using this option can produce suboptimal query plans. We recommend using this option sparingly, and then only by a
qualified system administrator.

For more information about the AUTO_UPDATE_STATISTICS option, see ALTER DATABASE SET Options
(Transact-SQL ).
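As a sketch, using the Products statistics from example D later in this topic, either form re-enables automatic updates:

-- Re-enable by updating without NORECOMPUTE...
UPDATE STATISTICS Production.Product (Products);
-- ...or by turning the auto-update flag back on directly.
EXEC sp_autostats 'Production.Product', 'ON', 'Products';
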
INCREMENTAL = { ON | OFF }
When ON, the statistics are recreated as per partition statistics. When OFF, the statistics tree is dropped and SQL
Server re-computes the statistics. The default is OFF.
If per partition statistics are not supported an error is generated. Incremental stats are not supported for following
statistics types:
Statistics created with indexes that are not partition-aligned with the base table.
Statistics created on Always On readable secondary databases.
Statistics created on read-only databases.
Statistics created on filtered indexes.
Statistics created on views.
Statistics created on internal tables.
Statistics created with spatial indexes or XML indexes.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017
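A minimal sketch, assuming the same hypothetical partitioned table as above: recreate the statistics as per partition statistics.

UPDATE STATISTICS dbo.PartitionedSales (IX_PartitionedSales)
WITH INCREMENTAL = ON;
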
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server (Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x) CU3).
Overrides the max degree of parallelism configuration option for the duration of the statistic operation. For
more information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to
limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism can be:
1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel statistic operation to the specified number or
fewer based on the current system workload.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
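A minimal sketch: cap a full-scan statistics update at four processors.

UPDATE STATISTICS Sales.SalesOrderDetail
WITH FULLSCAN, MAXDOP = 4;
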
<update_stats_stream_option>
Identified for informational purposes only. Not supported. Future compatibility is not guaranteed.

Remarks
When to Use UPDATE STATISTICS
For more information about when to use UPDATE STATISTICS, see Statistics.

Limitations and Restrictions


Updating statistics is not supported on external tables. To update statistics on an external table, drop and re-
create the statistics.
The MAXDOP option is not compatible with the STATS_STREAM, ROWCOUNT, and PAGECOUNT options.

Updating All Statistics with sp_updatestats


For information about how to update statistics for all user-defined and internal tables in the database, see the
stored procedure sp_updatestats (Transact-SQL ). For example, the following command calls sp_updatestats to
update all statistics for the database.

EXEC sp_updatestats;

Determining the Last Statistics Update


To determine when statistics were last updated, use the STATS_DATE function.
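A minimal sketch that lists the last update time for every statistics object on a table:

SELECT s.name AS statistics_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('Sales.SalesOrderDetail');
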

PDW / SQL Data Warehouse


The following syntax is not supported by PDW / SQL Data Warehouse

update statistics t1 (a,b);

update statistics t1 (a) with sample 10 rows;

update statistics t1 (a) with NORECOMPUTE;

update statistics t1 (a) with INCREMENTAL=ON;

update statistics t1 (a) with stats_stream = 0x01;

Permissions
Requires ALTER permission on the table or view.

Examples
A. Update all statistics on a table
The following example updates the statistics for all indexes on the SalesOrderDetail table.

USE AdventureWorks2012;
GO
UPDATE STATISTICS Sales.SalesOrderDetail;
GO

B. Update the statistics for an index


The following example updates the statistics for the AK_SalesOrderDetail_rowguid index of the SalesOrderDetail
table.

USE AdventureWorks2012;
GO
UPDATE STATISTICS Sales.SalesOrderDetail AK_SalesOrderDetail_rowguid;
GO

C. Update statistics by using 50 percent sampling


The following example creates and then updates the statistics for the Name and ProductNumber columns in the
Product table.
USE AdventureWorks2012;
GO
CREATE STATISTICS Products
ON Production.Product ([Name], ProductNumber)
WITH SAMPLE 50 PERCENT;
-- Time passes. The UPDATE STATISTICS statement is then executed.
UPDATE STATISTICS Production.Product(Products)
WITH SAMPLE 50 PERCENT;

D. Update statistics by using FULLSCAN and NORECOMPUTE


The following example updates the Products statistics in the Product table, forces a full scan of all rows in the
Product table, and turns off automatic statistics for the Products statistics.

USE AdventureWorks2012;
GO
UPDATE STATISTICS Production.Product(Products)
WITH FULLSCAN, NORECOMPUTE;
GO

Examples: Azure SQL Data Warehouse and Parallel Data Warehouse


E. Update statistics on a table
The following example updates the CustomerStats1 statistics on the Customer table.

UPDATE STATISTICS Customer ( CustomerStats1 );

F. Update statistics by using a full scan


The following example updates the CustomerStats1 statistics, based on scanning all of the rows in the Customer
table.

UPDATE STATISTICS Customer (CustomerStats1) WITH FULLSCAN;

G. Update all statistics on a table


The following example updates all statistics on the Customer table.

UPDATE STATISTICS Customer;

See Also
Statistics
ALTER DATABASE (Transact-SQL )
CREATE STATISTICS (Transact-SQL )
DBCC SHOW_STATISTICS (Transact-SQL )
DROP STATISTICS (Transact-SQL )
sp_autostats (Transact-SQL )
sp_updatestats (Transact-SQL )
STATS_DATE (Transact-SQL )
sys.dm_db_stats_properties (Transact-SQL )
sys.dm_db_stats_histogram (Transact-SQL )
