Overview
ALTER
APPLICATION ROLE
ASSEMBLY
ASYMMETRIC KEY
AUTHORIZATION
AVAILABILITY GROUP
BROKER PRIORITY
CERTIFICATE
COLUMN ENCRYPTION KEY
CREDENTIAL
CRYPTOGRAPHIC PROVIDER
DATABASE
DATABASE (Azure SQL Database)
DATABASE (Azure SQL Data Warehouse)
DATABASE (Parallel Data Warehouse)
DATABASE AUDIT SPECIFICATION
DATABASE compatibility level
DATABASE database mirroring
DATABASE ENCRYPTION KEY
DATABASE file and filegroup options
DATABASE HADR
DATABASE SCOPED CREDENTIAL
DATABASE SCOPED CONFIGURATION
DATABASE SET Options
ENDPOINT
EVENT SESSION
EXTERNAL DATA SOURCE
EXTERNAL LIBRARY
EXTERNAL RESOURCE POOL
FULLTEXT CATALOG
FULLTEXT INDEX
FULLTEXT STOPLIST
FUNCTION
INDEX
INDEX (Selective XML Indexes)
LOGIN
MASTER KEY
MESSAGE TYPE
PARTITION FUNCTION
PARTITION SCHEME
PROCEDURE
QUEUE
REMOTE SERVICE BINDING
RESOURCE GOVERNOR
RESOURCE POOL
ROLE
ROUTE
SCHEMA
SEARCH PROPERTY LIST
SECURITY POLICY
SEQUENCE
SERVER AUDIT
SERVER AUDIT SPECIFICATION
SERVER CONFIGURATION
SERVER ROLE
SERVICE
SERVICE MASTER KEY
SYMMETRIC KEY
TABLE
TABLE column_constraint
TABLE column_definition
TABLE computed_column_definition
TABLE index_option
TABLE table_constraint
TRIGGER
USER
VIEW
WORKLOAD GROUP
XML SCHEMA COLLECTION
Backup and restore
BACKUP
BACKUP CERTIFICATE
BACKUP DATABASE (Parallel Data Warehouse)
BACKUP MASTER KEY
BACKUP SERVICE MASTER KEY
RESTORE
RESTORE statements
RESTORE DATABASE (Parallel Data Warehouse)
RESTORE arguments
RESTORE FILELISTONLY
RESTORE HEADERONLY
RESTORE LABELONLY
RESTORE MASTER KEY
RESTORE REWINDONLY
RESTORE VERIFYONLY
BULK INSERT
CREATE
AGGREGATE
APPLICATION ROLE
ASSEMBLY
ASYMMETRIC KEY
AVAILABILITY GROUP
BROKER PRIORITY
CERTIFICATE
COLUMNSTORE INDEX
COLUMN ENCRYPTION KEY
COLUMN MASTER KEY
CONTRACT
CREDENTIAL
CRYPTOGRAPHIC PROVIDER
DATABASE
DATABASE (Azure SQL Database)
DATABASE (Azure SQL Data Warehouse)
DATABASE (Parallel Data Warehouse)
DATABASE AUDIT SPECIFICATION
DATABASE ENCRYPTION KEY
DATABASE SCOPED CREDENTIAL
DEFAULT
ENDPOINT
EVENT NOTIFICATION
EVENT SESSION
EXTERNAL DATA SOURCE
EXTERNAL LIBRARY
EXTERNAL FILE FORMAT
EXTERNAL RESOURCE POOL
EXTERNAL TABLE
EXTERNAL TABLE AS SELECT
FULLTEXT CATALOG
FULLTEXT INDEX
FULLTEXT STOPLIST
FUNCTION
FUNCTION (SQL Data Warehouse)
INDEX
LOGIN
MASTER KEY
MESSAGE TYPE
PARTITION FUNCTION
PARTITION SCHEME
PROCEDURE
QUEUE
REMOTE SERVICE BINDING
REMOTE TABLE AS SELECT (Parallel Data Warehouse)
RESOURCE POOL
ROLE
ROUTE
RULE
SCHEMA
SEARCH PROPERTY LIST
SECURITY POLICY
SELECTIVE XML INDEX
SEQUENCE
SERVER AUDIT
SERVER AUDIT SPECIFICATION
SERVER ROLE
SERVICE
SPATIAL INDEX
STATISTICS
SYMMETRIC KEY
SYNONYM
TABLE
TABLE (Azure SQL Data Warehouse)
TABLE (SQL Graph)
TABLE AS SELECT (Azure SQL Data Warehouse)
TABLE IDENTITY (Property)
TRIGGER
TYPE
USER
VIEW
WORKLOAD GROUP
XML INDEX
XML INDEX (Selective XML Indexes)
XML SCHEMA COLLECTION
Collations
COLLATE clause
SQL Server Collation Name
Windows Collation Name
Collation Precedence
DELETE
DISABLE TRIGGER
DROP
AGGREGATE
APPLICATION ROLE
ASSEMBLY
ASYMMETRIC KEY
AVAILABILITY GROUP
BROKER PRIORITY
CERTIFICATE
COLUMN ENCRYPTION KEY
COLUMN MASTER KEY
CONTRACT
CREDENTIAL
CRYPTOGRAPHIC PROVIDER
DATABASE
DATABASE AUDIT SPECIFICATION
DATABASE ENCRYPTION KEY
DATABASE SCOPED CREDENTIAL
DEFAULT
ENDPOINT
EXTERNAL DATA SOURCE
EXTERNAL FILE FORMAT
EXTERNAL LIBRARY
EXTERNAL RESOURCE POOL
EXTERNAL TABLE
EVENT NOTIFICATION
EVENT SESSION
FULLTEXT CATALOG
FULLTEXT INDEX
FULLTEXT STOPLIST
FUNCTION
INDEX
INDEX (Selective XML Indexes)
LOGIN
MASTER KEY
MESSAGE TYPE
PARTITION FUNCTION
PARTITION SCHEME
PROCEDURE
QUEUE
REMOTE SERVICE BINDING
RESOURCE POOL
ROLE
ROUTE
RULE
SCHEMA
SEARCH PROPERTY LIST
SECURITY POLICY
SEQUENCE
SERVER AUDIT
SERVER AUDIT SPECIFICATION
SERVER ROLE
SERVICE
SIGNATURE
STATISTICS
SYMMETRIC KEY
SYNONYM
TABLE
TRIGGER
TYPE
USER
VIEW
WORKLOAD GROUP
XML SCHEMA COLLECTION
ENABLE TRIGGER
INSERT
INSERT (SQL Graph)
MERGE
RENAME
Permissions
ADD SIGNATURE
CLOSE MASTER KEY
CLOSE SYMMETRIC KEY
DENY
DENY Assembly Permissions
DENY Asymmetric Key Permissions
DENY Availability Group Permissions
DENY Certificate Permissions
DENY Database Permissions
DENY Database Principal Permissions
DENY Database Scoped Credential
DENY Endpoint Permissions
DENY Full-Text Permissions
DENY Object Permissions
DENY Schema Permissions
DENY Search Property List Permissions
DENY Server Permissions
DENY Server Principal Permissions
DENY Service Broker Permissions
DENY Symmetric Key Permissions
DENY System Object Permissions
DENY Type Permissions
DENY XML Schema Collection Permissions
EXECUTE AS
EXECUTE AS Clause
GRANT
GRANT Assembly Permissions
GRANT Asymmetric Key Permissions
GRANT Availability Group Permissions
GRANT Certificate Permissions
GRANT Database Permissions
GRANT Database Principal Permissions
GRANT Database Scoped Credential
GRANT Endpoint Permissions
GRANT Full-Text Permissions
GRANT Object Permissions
GRANT Schema Permissions
GRANT Search Property List Permissions
GRANT Server Permissions
GRANT Server Principal Permissions
GRANT Service Broker Permissions
GRANT Symmetric Key Permissions
GRANT System Object Permissions
GRANT Type Permissions
GRANT XML Schema Collection Permissions
OPEN MASTER KEY
OPEN SYMMETRIC KEY
Permissions: GRANT, DENY, REVOKE (Azure SQL Data Warehouse, Parallel Data Warehouse)
REVERT
REVOKE
REVOKE Assembly Permissions
REVOKE Asymmetric Key Permissions
REVOKE Availability Group Permissions
REVOKE Certificate Permissions
REVOKE Database Permissions
REVOKE Database Principal Permissions
REVOKE Database Scoped Credential
REVOKE Endpoint Permissions
REVOKE Full-Text Permissions
REVOKE Object Permissions
REVOKE Schema Permissions
REVOKE Search Property List Permissions
REVOKE Server Permissions
REVOKE Server Principal Permissions
REVOKE Service Broker Permissions
REVOKE Symmetric Key Permissions
REVOKE System Object Permissions
REVOKE Type Permissions
REVOKE XML Schema Collection Permissions
SETUSER
Service Broker
BEGIN CONVERSATION TIMER
BEGIN DIALOG CONVERSATION
END CONVERSATION
GET CONVERSATION GROUP
GET_TRANSMISSION_STATUS
MOVE CONVERSATION
RECEIVE
SEND
SET
Overview
ANSI_DEFAULTS
ANSI_NULL_DFLT_OFF
ANSI_NULL_DFLT_ON
ANSI_NULLS
ANSI_PADDING
ANSI_WARNINGS
ARITHABORT
ARITHIGNORE
CONCAT_NULL_YIELDS_NULL
CONTEXT_INFO
CURSOR_CLOSE_ON_COMMIT
DATEFIRST
DATEFORMAT
DEADLOCK_PRIORITY
FIPS_FLAGGER
FMTONLY
FORCEPLAN
IDENTITY_INSERT
IMPLICIT_TRANSACTIONS
LANGUAGE
LOCK_TIMEOUT
NOCOUNT
NOEXEC
NUMERIC_ROUNDABORT
OFFSETS
PARSEONLY
QUERY_GOVERNOR_COST_LIMIT
QUOTED_IDENTIFIER
REMOTE_PROC_TRANSACTIONS
ROWCOUNT
SHOWPLAN_ALL
SHOWPLAN_TEXT
SHOWPLAN_XML
STATISTICS IO
STATISTICS PROFILE
STATISTICS TIME
STATISTICS XML
TEXTSIZE
TRANSACTION ISOLATION LEVEL
XACT_ABORT
TRUNCATE TABLE
UPDATE STATISTICS
Transact-SQL statements
5/30/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This reference topic summarizes the categories of statements for use with Transact-SQL (T-SQL). You can find all
of the statements listed in the left-hand navigation.
Permissions statements
Permissions statements determine which users and logins can access data and perform operations. For more
information about authentication and access, see the Security center.
ALTER APPLICATION ROLE (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the name, password, or default schema of an application role.
Transact-SQL Syntax Conventions
Syntax
ALTER APPLICATION ROLE application_role_name
WITH <set_item> [ ,...n ]
<set_item> ::=
NAME = new_application_role_name
| PASSWORD = 'password'
| DEFAULT_SCHEMA = schema_name
Arguments
application_role_name
Is the name of the application role to be modified.
NAME =new_application_role_name
Specifies the new name of the application role. This name must not already be used to refer to any principal in the
database.
PASSWORD ='password'
Specifies the password for the application role. password must meet the Windows password policy requirements
of the computer that is running the instance of SQL Server. You should always use strong passwords.
DEFAULT_SCHEMA =schema_name
Specifies the first schema that will be searched by the server when it resolves the names of objects. schema_name
can be a schema that does not exist in the database.
Remarks
If the new application role name already exists in the database, the statement will fail. When the name, password,
or default schema of an application role is changed, the ID associated with the role is not changed.
IMPORTANT
Password expiration policy is not applied to application role passwords. For this reason, take extra care in selecting strong
passwords. Applications that invoke application roles must store their passwords.
In SQL Server 2005, the behavior of schemas changed from the behavior in earlier versions of SQL Server. Code
that assumes that schemas are equivalent to database users may not return correct results. Old catalog views,
including sysobjects, should not be used in a database in which any of the following DDL statements has ever been
used: CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER,
CREATE ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In a database in which any of these statements has ever been used, you must use the new
catalog views. The new catalog views take into account the separation of principals and schemas that is introduced
in SQL Server 2005. For more information about catalog views, see Catalog Views (Transact-SQL).
Permissions
Requires ALTER ANY APPLICATION ROLE permission on the database. To change the default schema, the user
also needs ALTER permission on the application role. An application role can alter its own default schema, but not
its name or password.
Examples
A. Changing the name of an application role
The following example changes the name of the application role weekly_receipts to receipts_ledger.
USE AdventureWorks2012;
CREATE APPLICATION ROLE weekly_receipts
WITH PASSWORD = '987Gbv8$76sPYY5m23',
DEFAULT_SCHEMA = Sales;
GO
ALTER APPLICATION ROLE weekly_receipts
WITH NAME = receipts_ledger;
GO
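Changing the password or default schema of the role follows the same WITH clause; the sketch below is not part of the original example, and both the password placeholder and the Production schema are assumptions for illustration:

```sql
-- Hypothetical follow-up to the example above: rotate the password and
-- change the default schema of the renamed role. Placeholder values only.
ALTER APPLICATION ROLE receipts_ledger
    WITH PASSWORD = '<enterStrongPasswordHere>',
    DEFAULT_SCHEMA = Production;
GO
```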
See Also
Application Roles
CREATE APPLICATION ROLE (Transact-SQL)
DROP APPLICATION ROLE (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER ASSEMBLY (Transact-SQL)
5/4/2018 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters an assembly by modifying the SQL Server catalog properties of an assembly. ALTER ASSEMBLY refreshes
it to the latest copy of the Microsoft .NET Framework modules that hold its implementation and adds or removes
files associated with it. Assemblies are created by using CREATE ASSEMBLY.
WARNING
CLR uses Code Access Security (CAS) in the .NET Framework, which is no longer supported as a security boundary. A CLR
assembly created with PERMISSION_SET = SAFE may be able to access external system resources, call unmanaged code, and
acquire sysadmin privileges. Beginning with SQL Server 2017 (14.x), an sp_configure option called clr strict security
is introduced to enhance the security of CLR assemblies. clr strict security is enabled by default, and treats SAFE and
EXTERNAL_ACCESS assemblies as if they were marked UNSAFE . The clr strict security option can be disabled for
backward compatibility, but this is not recommended. Microsoft recommends that all assemblies be signed by a certificate or
asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY permission in the master database. For
more information, see CLR strict security.
Syntax
ALTER ASSEMBLY assembly_name
[ FROM <client_assembly_specifier> | <assembly_bits> ]
[ WITH <assembly_option> [ ,...n ] ]
[ DROP FILE { file_name [ ,...n ] | ALL } ]
[ ADD FILE FROM
{
client_file_specifier [ AS file_name ]
| file_bits AS file_name
} [,...n ]
] [ ; ]
<client_assembly_specifier> :: =
'\\computer_name\share-name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'
<assembly_bits> :: =
{ varbinary_literal | varbinary_expression }
<assembly_option> :: =
PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE }
| VISIBILITY = { ON | OFF }
| UNCHECKED DATA
Arguments
assembly_name
Is the name of the assembly you want to modify. assembly_name must already exist in the database.
FROM <client_assembly_specifier> | <assembly_bits>
Updates an assembly to the latest copy of the .NET Framework modules that hold its implementation. This option
can only be used if there are no associated files with the specified assembly.
<client_assembly_specifier> specifies the network or local location where the assembly being refreshed is located.
The network location includes the computer name, the share name and a path within that share.
manifest_file_name specifies the name of the file that contains the manifest of the assembly.
<assembly_bits> is the binary value for the assembly.
Separate ALTER ASSEMBLY statements must be issued for any dependent assemblies that also require updating.
PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE }
IMPORTANT
The PERMISSION_SET option is affected by the clr strict security option, described in the opening warning. When
clr strict security is enabled, all assemblies are treated as UNSAFE .
Specifies the .NET Framework code access permission set property of the assembly. For more information about this
property, see CREATE ASSEMBLY (Transact-SQL).
NOTE
The EXTERNAL_ACCESS and UNSAFE options are not available in a contained database.
VISIBILITY = { ON | OFF }
Indicates whether the assembly is visible for creating common language runtime (CLR) functions, stored
procedures, triggers, user-defined types, and user-defined aggregate functions against it. If set to OFF, the
assembly is intended to be called only by other assemblies. If there are existing CLR database objects already
created against the assembly, the visibility of the assembly cannot be changed. Any assemblies referenced by
assembly_name are uploaded as not visible by default.
UNCHECKED DATA
By default, ALTER ASSEMBLY fails if it must verify the consistency of individual table rows. This option allows
postponing the checks until a later time by using DBCC CHECKTABLE. If specified, SQL Server executes the
ALTER ASSEMBLY statement even if there are tables in the database that contain the following:
Persisted computed columns that either directly or indirectly reference methods in the assembly, through
Transact-SQL functions or methods.
CHECK constraints that directly or indirectly reference methods in the assembly.
Columns of a CLR user-defined type that depend on the assembly, and the type implements a
UserDefined (non-Native) serialization format.
Columns of a CLR user-defined type that reference views created by using WITH SCHEMABINDING.
If any CHECK constraints are present, they are disabled and marked untrusted. Any tables containing
columns depending on the assembly are marked as containing unchecked data until those tables are
explicitly checked.
Only members of the db_owner and db_ddladmin fixed database roles can specify this option.
Requires the ALTER ANY SCHEMA permission to specify this option.
For more information, see Implementing Assemblies.
[ DROP FILE { file_name[ ,...n] | ALL } ]
Removes the file name associated with the assembly, or all files associated with the assembly, from the
database. If used with ADD FILE that follows, DROP FILE executes first. This lets you replace a file with
the same file name.
NOTE
This option is not available in a contained database.
Remarks
ALTER ASSEMBLY does not disrupt currently running sessions that are running code in the assembly being
modified. Current sessions complete execution by using the unaltered bits of the assembly.
If the FROM clause is specified, ALTER ASSEMBLY updates the assembly with respect to the latest copies of the
modules provided. Because there might be CLR functions, stored procedures, triggers, data types, and user-
defined aggregate functions in the instance of SQL Server that are already defined against the assembly, the
ALTER ASSEMBLY statement rebinds them to the latest implementation of the assembly. To accomplish this
rebinding, the methods that map to CLR functions, stored procedures, and triggers must still exist in the modified
assembly with the same signatures. The classes that implement CLR user-defined types and user-defined
aggregate functions must still satisfy the requirements for being a user-defined type or aggregate.
Caution
If WITH UNCHECKED DATA is not specified, SQL Server tries to prevent ALTER ASSEMBLY from executing if
the new assembly version affects existing data in tables, indexes, or other persistent sites. However, SQL Server
does not guarantee that computed columns, indexes, indexed views or expressions will be consistent with the
underlying routines and types when the CLR assembly is updated. Use caution when you execute ALTER
ASSEMBLY to make sure that there is not a mismatch between the result of an expression and a value based on
that expression stored in the assembly.
ALTER ASSEMBLY changes the assembly version. The culture and public key token of the assembly remain the
same.
The ALTER ASSEMBLY statement cannot be used to change the following:
The signatures of CLR functions, aggregate functions, stored procedures, and triggers in an instance of SQL
Server that reference the assembly. ALTER ASSEMBLY fails when SQL Server cannot rebind .NET
Framework database objects in SQL Server with the new version of the assembly.
The signatures of methods in the assembly that are called from other assemblies.
The list of assemblies that depend on the assembly, as referenced in the DependentList property of the
assembly.
The indexability of a method, unless there are no indexes or persisted computed columns depending on that
method, either directly or indirectly.
The FillRow method name attribute for CLR table-valued functions.
The Accumulate and Terminate method signature for user-defined aggregates.
System assemblies.
Assembly ownership. Use ALTER AUTHORIZATION (Transact-SQL) instead.
Additionally, for assemblies that implement user-defined types, ALTER ASSEMBLY can be used for making
only the following changes:
Modifying public methods of the user-defined type class, as long as signatures or attributes are not
changed.
Adding new public methods.
Modifying private methods in any way.
Fields contained within a native-serialized user-defined type, including data members or base classes,
cannot be changed by using ALTER ASSEMBLY. All other changes are unsupported.
If ADD FILE FROM is not specified, ALTER ASSEMBLY drops any files associated with the assembly.
If ALTER ASSEMBLY is executed without the UNCHECKED DATA clause, checks are performed to verify that
the new assembly version does not affect existing data in tables. Depending on the amount of data that
needs to be checked, this may affect performance.
Permissions
Requires ALTER permission on the assembly. Additional requirements are as follows:
To alter an assembly whose existing permission set is EXTERNAL_ACCESS, requires EXTERNAL ACCESS
ASSEMBLY permission on the server.
To alter an assembly whose existing permission set is UNSAFE requires UNSAFE ASSEMBLY permission
on the server.
To change the permission set of an assembly to EXTERNAL_ACCESS, requires EXTERNAL ACCESS
ASSEMBLY permission on the server.
To change the permission set of an assembly to UNSAFE, requires UNSAFE ASSEMBLY permission on
the server.
Specifying WITH UNCHECKED DATA, requires ALTER ANY SCHEMA permission.
Permissions with CLR strict security
The following permissions are required to alter a CLR assembly when CLR strict security is enabled:
The user must have the ALTER ASSEMBLY permission
And one of the following conditions must also be true:
The assembly is signed with a certificate or asymmetric key that has a corresponding login with the
UNSAFE ASSEMBLY permission on the server. Signing the assembly is recommended.
The database has the TRUSTWORTHY property set to ON , and the database is owned by a login that has the
UNSAFE ASSEMBLY permission on the server. This option is not recommended.
For more information about assembly permission sets, see Designing Assemblies.
Examples
A. Refreshing an assembly
The following example updates assembly ComplexNumber to the latest copy of the .NET Framework modules that
hold its implementation.
NOTE
Assembly ComplexNumber can be created by running the UserDefinedDataType sample scripts. For more information, see
User Defined Type.
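The refresh statement itself does not appear above; a minimal sketch, assuming the rebuilt assembly sits at a hypothetical local path, would look like this:

```sql
-- Hypothetical path; substitute the actual location of the rebuilt DLL
-- produced by the UserDefinedDataType sample.
ALTER ASSEMBLY ComplexNumber
FROM 'C:\MyDBApp\ComplexNumber.dll';
```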
See Also
CREATE ASSEMBLY (Transact-SQL)
DROP ASSEMBLY (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER ASYMMETRIC KEY (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of an asymmetric key.
Transact-SQL Syntax Conventions
Syntax
ALTER ASYMMETRIC KEY Asym_Key_Name <alter_option>
<alter_option> ::=
<password_change_option>
| REMOVE PRIVATE KEY
<password_change_option> ::=
WITH PRIVATE KEY ( <password_option> [ , <password_option> ] )
<password_option> ::=
ENCRYPTION BY PASSWORD = 'strongPassword'
| DECRYPTION BY PASSWORD = 'oldPassword'
Arguments
Asym_Key_Name
Is the name by which the asymmetric key is known in the database.
REMOVE PRIVATE KEY
Removes the private key from the asymmetric key. The public key is not removed.
WITH PRIVATE KEY
Changes the protection of the private key.
ENCRYPTION BY PASSWORD ='strongPassword'
Specifies a new password for protecting the private key. password must meet the Windows password policy
requirements of the computer that is running the instance of SQL Server. If this option is omitted, the private key
will be encrypted by the database master key.
DECRYPTION BY PASSWORD ='oldPassword'
Specifies the old password, with which the private key is currently protected. Is not required if the private key is
encrypted with the database master key.
Remarks
If there is no database master key, the ENCRYPTION BY PASSWORD option is required, and the operation will fail
if no password is supplied. For information about how to create a database master key, see CREATE MASTER KEY
(Transact-SQL).
You can use ALTER ASYMMETRIC KEY to change the protection of the private key by specifying PRIVATE KEY
options as shown in the following table.
Change protection from | ENCRYPTION BY PASSWORD | DECRYPTION BY PASSWORD
The database master key must be opened before it can be used to protect a private key. For more information, see
OPEN MASTER KEY (Transact-SQL).
To change the ownership of an asymmetric key, use ALTER AUTHORIZATION.
Permissions
Requires CONTROL permission on the asymmetric key if the private key is being removed.
Examples
A. Changing the password of the private key
The following example changes the password used to protect the private key of asymmetric key PacificSales09.
The new password will be <enterStrongPasswordHere>.
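The statement the example describes is not shown above; a sketch, assuming an <oldPassword> placeholder for the current password, is:

```sql
-- Re-encrypt the private key of PacificSales09 under a new password.
-- Both password values are placeholders.
ALTER ASYMMETRIC KEY PacificSales09
    WITH PRIVATE KEY (
        DECRYPTION BY PASSWORD = '<oldPassword>',
        ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>');
GO
```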
See Also
CREATE ASYMMETRIC KEY (Transact-SQL)
DROP ASYMMETRIC KEY (Transact-SQL)
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
CREATE MASTER KEY (Transact-SQL)
OPEN MASTER KEY (Transact-SQL)
Extensible Key Management (EKM)
ALTER AUTHORIZATION (Transact-SQL)
5/3/2018 • 11 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the ownership of a securable.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
ALTER AUTHORIZATION
ON [ <class_type>:: ] entity_name
TO { principal_name | SCHEMA OWNER }
[;]
<class_type> ::=
{
OBJECT | ASSEMBLY | ASYMMETRIC KEY | AVAILABILITY GROUP | CERTIFICATE
| CONTRACT | TYPE | DATABASE | ENDPOINT | FULLTEXT CATALOG
| FULLTEXT STOPLIST | MESSAGE TYPE | REMOTE SERVICE BINDING
| ROLE | ROUTE | SCHEMA | SEARCH PROPERTY LIST | SERVER ROLE
| SERVICE | SYMMETRIC KEY | XML SCHEMA COLLECTION
}
-- Syntax for Azure SQL Database
ALTER AUTHORIZATION
ON [ <class_type>:: ] entity_name
TO { principal_name | SCHEMA OWNER }
[;]
<class_type> ::=
{
OBJECT | ASSEMBLY | ASYMMETRIC KEY | CERTIFICATE
| TYPE | DATABASE | FULLTEXT CATALOG
| FULLTEXT STOPLIST
| ROLE | SCHEMA | SEARCH PROPERTY LIST
| SYMMETRIC KEY | XML SCHEMA COLLECTION
}
-- Syntax for Azure SQL Data Warehouse
ALTER AUTHORIZATION ON
[ <class_type> :: ] <entity_name>
TO { principal_name | SCHEMA OWNER }
[;]
<class_type> ::= {
SCHEMA
| OBJECT
}
<entity_name> ::=
{
schema_name
| [ schema_name. ] object_name
}
-- Syntax for Parallel Data Warehouse
ALTER AUTHORIZATION ON
[ <class_type> :: ] <entity_name>
TO { principal_name | SCHEMA OWNER }
[;]
<class_type> ::= {
DATABASE
| SCHEMA
| OBJECT
}
<entity_name> ::=
{
database_name
| schema_name
| [ schema_name. ] object_name
}
Arguments
<class_type>
Is the securable class of the entity for which the owner is being changed. OBJECT is the default.
OBJECT | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse.
ASSEMBLY | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
ASYMMETRIC KEY | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
AVAILABILITY GROUP | APPLIES TO: SQL Server 2012 through SQL Server 2017.
CERTIFICATE | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
CONTRACT | APPLIES TO: SQL Server 2008 through SQL Server 2017.
DATABASE | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database. For more information, see the ALTER AUTHORIZATION for databases section below.
ENDPOINT | APPLIES TO: SQL Server 2008 through SQL Server 2017.
FULLTEXT CATALOG | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
FULLTEXT STOPLIST | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
MESSAGE TYPE | APPLIES TO: SQL Server 2008 through SQL Server 2017.
REMOTE SERVICE BINDING | APPLIES TO: SQL Server 2008 through SQL Server 2017.
ROLE | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
ROUTE | APPLIES TO: SQL Server 2008 through SQL Server 2017.
SCHEMA | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse.
SEARCH PROPERTY LIST | APPLIES TO: SQL Server 2012 (11.x) through SQL Server 2017, Azure SQL Database.
SERVER ROLE | APPLIES TO: SQL Server 2008 through SQL Server 2017.
SERVICE | APPLIES TO: SQL Server 2008 through SQL Server 2017.
SYMMETRIC KEY | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
TYPE | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
XML SCHEMA COLLECTION | APPLIES TO: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
entity_name
Is the name of the entity.
principal_name | SCHEMA OWNER
Name of the security principal that will own the entity. Database objects must be owned by a database principal
(a database user or role). Server objects (such as databases) must be owned by a server principal (a login). Specify
SCHEMA OWNER as the principal_name to indicate that the object should be owned by the principal that
owns the schema of the object.
Remarks
ALTER AUTHORIZATION can be used to change the ownership of any entity that has an owner. Ownership of
database-contained entities can be transferred to any database-level principal. Ownership of server-level entities
can be transferred only to server-level principals.
IMPORTANT
Beginning with SQL Server 2005, a user can own an OBJECT or TYPE that is contained by a schema owned by another
database user. This is a change of behavior from earlier versions of SQL Server. For more information, see
OBJECTPROPERTY (Transact-SQL) and TYPEPROPERTY (Transact-SQL).
Ownership of the following schema-contained entities of type "object" can be transferred: tables, views,
functions, procedures, queues, and synonyms.
Ownership of the following entities cannot be transferred: linked servers, statistics, constraints, rules, defaults,
triggers, Service Broker queues, credentials, partition functions, partition schemes, database master keys, service
master key, and event notifications.
Ownership of members of the following securable classes cannot be transferred: server, login, user, application
role, and column.
The SCHEMA OWNER option is only valid when you are transferring ownership of a schema-contained entity.
SCHEMA OWNER will transfer ownership of the entity to the owner of the schema in which it resides. Only
entities of class OBJECT, TYPE, or XML SCHEMA COLLECTION are schema-contained.
If the target entity is not a database and the entity is being transferred to a new owner, all permissions on the
target will be dropped.
Caution
In SQL Server 2005, the behavior of schemas changed from the behavior in earlier versions of SQL Server.
Code that assumes that schemas are equivalent to database users may not return correct results. Old catalog
views, including sysobjects, should not be used in a database in which any of the following DDL statements has
ever been used: CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP
USER, CREATE ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE,
ALTER AUTHORIZATION. In a database in which any of these statements has ever been used, you must use the
new catalog views. The new catalog views take into account the separation of principals and schemas that was
introduced in SQL Server 2005. For more information about catalog views, see Catalog Views (Transact-SQL).
Also, note the following:
IMPORTANT
The only reliable way to find the owner of an object is to query the sys.objects catalog view. The only reliable way to find
the owner of a type is to use the TYPEPROPERTY function.
NOTE
If the new owner is an Azure Active Directory user, it cannot exist as a user in the database where the new owner will
become the new DBO. Such an Azure AD user must first be removed from the database before executing the ALTER
AUTHORIZATION statement changing the database ownership to the new user. For more information about configuring
Azure Active Directory users with SQL Database, see Connecting to SQL Database or SQL Data Warehouse By Using
Azure Active Directory Authentication.
To verify the Azure AD owner of the database, execute the following Transact-SQL command in a user database
(in this example, testdb).
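The command referred to does not appear above; one way to inspect the owner of testdb, sketched here, is to query the sys.databases catalog view:

```sql
-- Returns the owner SID of testdb; for an Azure AD owner the
-- varbinary SID can be cast to a uniqueidentifier for comparison
-- with the directory object ID.
SELECT CAST(owner_sid AS uniqueidentifier) AS owner_sid
FROM sys.databases
WHERE name = 'testdb';
```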
Best practice
Instead of using Azure AD users as individual owners of the database, use an Azure AD group as a member of
the db_owner fixed database role. The following steps show how to configure a disabled login as the database
owner, and make an Azure Active Directory group (mydbogroup) a member of the db_owner role.
1. Log in to SQL Server as the Azure AD admin, and change the owner of the database to a disabled SQL Server
authentication login. For example, from the user database execute:
ALTER AUTHORIZATION ON database::testdb TO DisabledLogin;
2. Create an Azure AD group that should own the database and add it as a user to the user database. For
example:
CREATE USER [mydbogroup] FROM EXTERNAL PROVIDER;
3. In the user database, add the user representing the Azure AD group to the db_owner fixed database role. For
example:
ALTER ROLE db_owner ADD MEMBER mydbogroup;
Now the mydbogroup members can centrally manage the database as members of the db_owner role.
When members of this group are removed from the Azure AD group, they automatically lose the dbo
permissions for this database.
Similarly, if new members are added to the mydbogroup Azure AD group, they automatically gain the dbo access
for this database.
To check if a specific user has the effective dbo permission, have the user execute the following statement:
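The statement itself is missing above; a minimal sketch is a membership check against the db_owner role:

```sql
-- Returns 1 when the current user is a member of db_owner
-- (directly or through an Azure AD group), 0 otherwise.
SELECT IS_MEMBER('db_owner') AS is_dbo;
```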
Permissions
Requires TAKE OWNERSHIP permission on the entity. If the new owner is not the user that is executing this
statement, also requires either 1) IMPERSONATE permission on the new owner if it is a user or login; 2) if the
new owner is a role, membership in the role, or ALTER permission on the role; or 3) if the new owner is an
application role, ALTER permission on the application role.
Examples
A. Transfer ownership of a table
The following example transfers ownership of the table Sprockets to user MichikoOsada . The table is located inside
the schema Parts .
If the object's schema is not included as part of the statement, the Database Engine looks for the object in the
user's default schema. For example:
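The statements below sketch both forms, using the names given above (schema Parts, table Sprockets, user MichikoOsada):

```sql
-- Schema-qualified: transfers ownership of Parts.Sprockets to MichikoOsada
ALTER AUTHORIZATION ON OBJECT::Parts.Sprockets TO MichikoOsada;

-- Without the schema, the object is resolved in the caller's default schema
ALTER AUTHORIZATION ON OBJECT::Sprockets TO MichikoOsada;
```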
Note that for Azure AD users, brackets must be used around the user name.
See Also
OBJECTPROPERTY (Transact-SQL)
TYPEPROPERTY (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER AVAILABILITY GROUP (Transact-SQL)
5/30/2018 • 29 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters an existing Always On availability group in SQL Server. Most ALTER AVAILABILITY GROUP arguments
are supported only on the current primary replica. However, the JOIN, FAILOVER, and
FORCE_FAILOVER_ALLOW_DATA_LOSS arguments are supported only on secondary replicas.
Transact-SQL Syntax Conventions
Syntax
ALTER AVAILABILITY GROUP group_name
{
SET ( <set_option_spec> )
| ADD DATABASE database_name
| REMOVE DATABASE database_name
| ADD REPLICA ON <add_replica_spec>
| MODIFY REPLICA ON <modify_replica_spec>
| REMOVE REPLICA ON <server_instance>
| JOIN
| JOIN AVAILABILITY GROUP ON <add_availability_group_spec> [ ,...2 ]
| MODIFY AVAILABILITY GROUP ON <modify_availability_group_spec> [ ,...2 ]
| GRANT CREATE ANY DATABASE
| DENY CREATE ANY DATABASE
| FAILOVER
| FORCE_FAILOVER_ALLOW_DATA_LOSS
| ADD LISTENER 'dns_name' ( <add_listener_option> )
| MODIFY LISTENER 'dns_name' ( <modify_listener_option> )
| RESTART LISTENER 'dns_name'
| REMOVE LISTENER 'dns_name'
| OFFLINE
}
[ ; ]
<set_option_spec> ::=
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY | SECONDARY | NONE }
| FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
| HEALTH_CHECK_TIMEOUT = milliseconds
| DB_FAILOVER = { ON | OFF }
| REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = { integer }
<server_instance> ::=
{ 'system_name[\instance_name]' | 'FCI_network_name[\instance_name]' }
<add_replica_spec>::=
<server_instance> WITH
(
ENDPOINT_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT | CONFIGURATION_ONLY },
FAILOVER_MODE = { AUTOMATIC | MANUAL }
[ , <add_replica_option> [ ,...n ] ]
)
<add_replica_option>::=
SEEDING_MODE = { AUTOMATIC | MANUAL }
| BACKUP_PRIORITY = n
| SECONDARY_ROLE ( {
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
| READ_ONLY_ROUTING_URL = 'TCP://system-address:port'
} )
| PRIMARY_ROLE ( {
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
| READ_ONLY_ROUTING_LIST = { ( '<server_instance>' [ ,...n ] ) | NONE }
} )
| SESSION_TIMEOUT = seconds
<modify_replica_spec>::=
<server_instance> WITH
(
ENDPOINT_URL = 'TCP://system-address:port'
| AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT }
| FAILOVER_MODE = { AUTOMATIC | MANUAL }
| SEEDING_MODE = { AUTOMATIC | MANUAL }
| BACKUP_PRIORITY = n
| SECONDARY_ROLE ( {
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
| READ_ONLY_ROUTING_URL = 'TCP://system-address:port'
} )
| PRIMARY_ROLE ( {
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
| READ_ONLY_ROUTING_LIST = { ( '<server_instance>' [ ,...n ] ) | NONE }
} )
| SESSION_TIMEOUT = seconds
)
<add_availability_group_spec>::=
<ag_name> WITH
(
LISTENER_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT },
FAILOVER_MODE = MANUAL,
SEEDING_MODE = { AUTOMATIC | MANUAL }
)
<modify_availability_group_spec>::=
<ag_name> WITH
(
LISTENER_URL = 'TCP://system-address:port'
| AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT }
| SEEDING_MODE = { AUTOMATIC | MANUAL }
)
<add_listener_option> ::=
{
WITH DHCP [ ON ( <network_subnet_option> ) ]
| WITH IP ( { ( <ip_address_option> ) } [ , ...n ] ) [ , PORT = listener_port ]
}
<network_subnet_option> ::=
'four_part_ipv4_address', 'four_part_ipv4_mask'
<ip_address_option> ::=
{
'four_part_ipv4_address', 'four_part_ipv4_mask'
| 'ipv6_address'
}
<modify_listener_option>::=
{
ADD IP ( <ip_address_option> )
| PORT = listener_port
}
Arguments
group_name
Specifies the name of the availability group to alter. group_name must be a valid SQL Server identifier, and it must
be unique across all availability groups in the WSFC cluster.
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY | SECONDARY | NONE }
Specifies a preference about how a backup job should evaluate the primary replica when choosing where to
perform backups. You can script a given backup job to take the automated backup preference into account. It is
important to understand that the preference is not enforced by SQL Server, so it has no impact on ad hoc
backups.
Supported only on the primary replica.
The values are as follows:
PRIMARY
Specifies that the backups should always occur on the primary replica. This option is useful if you need backup
features, such as creating differential backups, that are not supported when backup is run on a secondary replica.
IMPORTANT
If you plan to use log shipping to prepare any secondary databases for an availability group, set the automated backup
preference to Primary until all the secondary databases have been prepared and joined to the availability group.
SECONDARY_ONLY
Specifies that backups should never be performed on the primary replica. If the primary replica is the only replica
online, the backup should not occur.
SECONDARY
Specifies that backups should occur on a secondary replica except when the primary replica is the only replica
online. In that case, the backup should occur on the primary replica. This is the default behavior.
NONE
Specifies that you prefer that backup jobs ignore the role of the availability replicas when choosing the replica to
perform backups. Note backup jobs might evaluate other factors such as backup priority of each availability
replica in combination with its operational state and connected state.
IMPORTANT
There is no enforcement of the AUTOMATED_BACKUP_PREFERENCE setting. The interpretation of this preference depends
on the logic, if any, that you script into backup jobs for the databases in a given availability group. The automated backup
preference setting has no impact on ad hoc backups. For more information, see Configure Backup on Availability Replicas
(SQL Server).
NOTE
To view the automated backup preference of an existing availability group, select the automated_backup_preference or
automated_backup_preference_desc column of the sys.availability_groups catalog view. Additionally,
sys.fn_hadr_backup_is_preferred_replica (Transact-SQL) can be used to determine the preferred backup replica. This function
will always return 1 for at least one of the replicas, even when AUTOMATED_BACKUP_PREFERENCE = NONE .
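For example, a backup job on a replica can check whether that replica is the preferred backup location for a given database (the database name here is illustrative):

```sql
-- Returns 1 on the replica currently preferred for backups of this database, 0 elsewhere
SELECT sys.fn_hadr_backup_is_preferred_replica('AccountsDb') AS IsPreferredBackupReplica;
```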
FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
Specifies what failure conditions will trigger an automatic failover for this availability group.
FAILURE_CONDITION_LEVEL is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT).
Furthermore, failure conditions can trigger an automatic failover only if both the primary and secondary replicas
are configured for automatic failover mode (FAILOVER_MODE = AUTOMATIC ) and the secondary replica is
currently synchronized with the primary replica.
Supported only on the primary replica.
The failure-condition levels (1–5) range from the least restrictive, level 1, to the most restrictive, level 5. A given
condition level encompasses all of the less restrictive levels. Thus, the strictest condition level, 5, includes the four
less restrictive condition levels (1-4), level 4 includes levels 1-3, and so forth. The following table describes the
failure-condition that corresponds to each level.
NOTE
Lack of response by an instance of SQL Server to client requests is not relevant to availability groups.
The FAILURE_CONDITION_LEVEL and HEALTH_CHECK_TIMEOUT values define a flexible failover policy for a
given group. This flexible failover policy provides you with granular control over what conditions must cause an
automatic failover. For more information, see Flexible Failover Policy for Automatic Failover of an Availability
Group (SQL Server).
HEALTH_CHECK_TIMEOUT = milliseconds
Specifies the wait time (in milliseconds) for the sp_server_diagnostics system stored procedure to return server-health
information before the WSFC cluster assumes that the server instance is slow or hung.
HEALTH_CHECK_TIMEOUT is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode with automatic failover (AVAILABILITY_MODE =
SYNCHRONOUS_COMMIT). Furthermore, a health-check timeout can trigger an automatic failover only if both
the primary and secondary replicas are configured for automatic failover mode (FAILOVER_MODE =
AUTOMATIC ) and the secondary replica is currently synchronized with the primary replica.
The default HEALTH_CHECK_TIMEOUT value is 30000 milliseconds (30 seconds). The minimum value is 15000
milliseconds (15 seconds), and the maximum value is 4294967295 milliseconds.
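For example, to raise the timeout to 60 seconds (the group name AG1 is illustrative):

```sql
-- Run on the primary replica; allow sp_server_diagnostics 60 seconds to respond
ALTER AVAILABILITY GROUP [AG1]
SET (HEALTH_CHECK_TIMEOUT = 60000);
```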
Supported only on the primary replica.
IMPORTANT
sp_server_diagnostics does not perform health checks at the database level.
DB_FAILOVER = { ON | OFF }
Specifies the response to take when a database on the primary replica is offline. When set to ON, any status other
than ONLINE for a database in the availability group triggers an automatic failover. When this option is set to
OFF, only the health of the instance is used to trigger automatic failover.
For more information regarding this setting, see Database Level Health Detection Option.
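A sketch of enabling database-level failover on an existing group (AG1 is an illustrative name):

```sql
-- Any non-ONLINE database status in the group will now trigger automatic failover
ALTER AVAILABILITY GROUP [AG1]
SET (DB_FAILOVER = ON);
```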
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT
Introduced in SQL Server 2017. Used to set a minimum number of synchronous secondary replicas required to
commit before the primary commits a transaction. Guarantees that SQL Server transactions will wait until the
transaction logs are updated on the minimum number of secondary replicas. The default is 0, which gives the
same behavior as SQL Server 2016. The minimum value is 0. The maximum value is the number of replicas
minus 1. This option relates to replicas in synchronous commit mode. When replicas are in synchronous commit
mode, writes on the primary replica wait until writes on the secondary synchronous replicas are committed to the
replica database transaction log. If a SQL Server that hosts a secondary synchronous replica stops responding,
the SQL Server that hosts the primary replica will mark that secondary replica as NOT SYNCHRONIZED and
proceed. When the unresponsive database comes back online it will be in a "not synced" state and the replica will
be marked as unhealthy until the primary can make it synchronous again. This setting guarantees that the
primary replica will not proceed until the minimum number of replicas have committed each transaction. If the
minimum number of replicas is not available, then commits on the primary will fail. For cluster type EXTERNAL, the
setting is changed when the availability group is added to a cluster resource. See High availability and data
protection for availability group configurations.
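For example, to require one synchronized secondary replica before commits proceed (AG1 is an illustrative name):

```sql
-- Transactions on the primary wait for at least one synchronous secondary to harden the log
ALTER AVAILABILITY GROUP [AG1]
SET (REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 1);
```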
ADD DATABASE database_name
Specifies a list of one or more user databases that you want to add to the availability group. These databases must
reside on the instance of SQL Server that hosts the current primary replica. You can specify multiple databases for
an availability group, but each database can belong to only one availability group. For information about the type
of databases that an availability group can support, see Prerequisites, Restrictions, and Recommendations for
Always On Availability Groups (SQL Server). To find out which local databases already belong to an availability
group, see the replica_id column in the sys.databases catalog view.
Supported only on the primary replica.
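A minimal sketch, run on the primary replica (AG1 and Db1 are illustrative names):

```sql
-- The database must reside on the instance hosting the current primary replica
ALTER AVAILABILITY GROUP [AG1] ADD DATABASE [Db1];
```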
NOTE
After you have created the availability group, you will need to connect to each server instance that hosts a secondary replica
and then prepare each secondary database and join it to the availability group. For more information, see Start Data
Movement on an Always On Secondary Database (SQL Server).
NOTE
SQL Server Failover Cluster Instances (FCIs) do not support automatic failover by availability groups, so any availability
replica that is hosted by an FCI can only be configured for manual failover.
MANUAL
Enables manual failover or forced manual failover (forced failover) by the database administrator.
FAILOVER_MODE is required in the ADD REPLICA ON clause and optional in the MODIFY REPLICA ON
clause. Two types of manual failover exist, manual failover without data loss and forced failover (with possible
data loss), which are supported under different conditions. For more information, see Failover and Failover Modes
(Always On Availability Groups).
SEEDING_MODE = { AUTOMATIC | MANUAL }
Specifies how the secondary replica will be initially seeded.
AUTOMATIC
Enables direct seeding. This method will seed the secondary replica over the network. This method does not
require you to back up and restore a copy of the primary database on the replica.
NOTE
For direct seeding, you must allow database creation on each secondary replica by calling ALTER AVAILABILITY GROUP
with the GRANT CREATE ANY DATABASE option.
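For example, run on each secondary replica that will be seeded automatically (AG1 is an illustrative name):

```sql
-- Allows automatic seeding to create the database on this secondary replica
ALTER AVAILABILITY GROUP [AG1] GRANT CREATE ANY DATABASE;
```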
MANUAL
Specifies manual seeding (default). This method requires you to create a backup of the database on the primary
replica and manually restore that backup on the secondary replica.
BACKUP_PRIORITY = n
Specifies your priority for performing backups on this replica relative to the other replicas in the same availability
group. The value is an integer in the range of 0..100. These values have the following meanings:
1..100 indicates that the availability replica could be chosen for performing backups. 1 indicates the lowest
priority, and 100 indicates the highest priority. If BACKUP_PRIORITY = 1, the availability replica would be
chosen for performing backups only if no higher priority availability replicas are currently available.
0 indicates that this availability replica will never be chosen for performing backups. This is useful, for
example, for a remote availability replica to which you never want backups to fail over.
For more information, see Active Secondaries: Backup on Secondary Replicas (Always On Availability
Groups).
SECONDARY_ROLE ( … )
Specifies role-specific settings that will take effect if this availability replica currently owns the secondary
role (that is, whenever it is a secondary replica). Within the parentheses, specify either or both secondary-
role options. If you specify both, use a comma-separated list.
The secondary role options are as follows:
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
Specifies whether the databases of a given availability replica that is performing the secondary role (that is,
is acting as a secondary replica) can accept connections from clients, one of:
NO
No user connections are allowed to secondary databases of this replica. They are not available for read
access. This is the default behavior.
READ_ONLY
Only connections are allowed to the databases in the secondary replica where the Application Intent
property is set to ReadOnly. For more information about this property, see Using Connection String
Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the secondary replica for read-only access.
For more information, see Active Secondaries: Readable Secondary Replicas (Always On Availability
Groups).
READ_ONLY_ROUTING_URL = 'TCP://system-address:port'
Specifies the URL to be used for routing read-intent connection requests to this availability replica. This is
the URL on which the SQL Server Database Engine listens. Typically, the default instance of the SQL
Server Database Engine listens on TCP port 1433.
For a named instance, you can obtain the port number by querying the port and type_desc columns of
the sys.dm_tcp_listener_states dynamic management view. The server instance uses the Transact-SQL
listener (type_desc='TSQL').
For more information about calculating the read-only routing URL for an availability replica, see
Calculating read_only_routing_url for Always On.
NOTE
For a named instance of SQL Server, the Transact-SQL listener should be configured to use a specific port. For more
information, see Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager).
PRIMARY_ROLE ( … )
Specifies role-specific settings that will take effect if this availability replica currently owns the primary role (that
is, whenever it is the primary replica). Within the parentheses, specify either or both primary-role options. If you
specify both, use a comma-separated list.
The primary role options are as follows:
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
Specifies the type of connection that the databases of a given availability replica that is performing the primary
role (that is, is acting as a primary replica) can accept from clients, one of:
READ_WRITE
Connections where the Application Intent connection property is set to ReadOnly are disallowed. When the
Application Intent property is set to ReadWrite or the Application Intent connection property is not set, the
connection is allowed. For more information about Application Intent connection property, see Using Connection
String Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the primary replica. This is the default behavior.
READ_ONLY_ROUTING_LIST = { ( '<server_instance>' [ ,...n ] ) | NONE }
Specifies a comma-separated list of server instances that host availability replicas for this availability group that
meet the following requirements when running under the secondary role:
Be configured to allow all connections or read-only connections (see the ALLOW_CONNECTIONS
argument of the SECONDARY_ROLE option, above).
Have their read-only routing URL defined (see the READ_ONLY_ROUTING_URL argument of the
SECONDARY_ROLE option, above).
The READ_ONLY_ROUTING_LIST values are as follows:
<server_instance>
Specifies the address of the instance of SQL Server that is the host for an availability replica that is a
readable secondary replica when running under the secondary role.
Use a comma-separated list to specify all of the server instances that might host a readable secondary
replica. Read-only routing will follow the order in which server instances are specified in the list. If you
include a replica's host server instance on the replica's read-only routing list, placing this server instance at
the end of the list is typically a good practice, so that read-intent connections go to a secondary replica, if
one is available.
Beginning with SQL Server 2016 (13.x), you can load-balance read-intent requests across readable
secondary replicas. You specify this by placing the replicas in a nested set of parentheses within the read-
only routing list. For more information and examples, see Configure load-balancing across read-only
replicas.
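A sketch of a load-balanced routing list, set through the primary-role options of a replica (server names are illustrative); read-intent requests are distributed across Server2 and Server3, falling back to Server1 only if neither is available:

```sql
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'Server1' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (('Server2', 'Server3'), 'Server1')));
```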
NONE
Specifies that when this availability replica is the primary replica, read-only routing will not be supported.
This is the default behavior. When used with MODIFY REPLICA ON, this value disables an existing list, if
any.
SESSION_TIMEOUT = seconds
Specifies the session-timeout period in seconds. If you do not specify this option, by default, the time
period is 10 seconds. The minimum value is 5 seconds.
IMPORTANT
We recommend that you keep the time-out period at 10 seconds or greater.
For more information about the session-timeout period, see Overview of Always On Availability Groups (SQL
Server).
MODIFY REPLICA ON
Modifies any of the replicas of the availability group. The list of replicas to be modified contains the server
instance address and a WITH (…) clause for each replica.
Supported only on the primary replica.
REMOVE REPLICA ON
Removes the specified secondary replica from the availability group. The current primary replica cannot be
removed from an availability group. On being removed, the replica stops receiving data. Its secondary databases
are removed from the availability group and enter the RESTORING state.
Supported only on the primary replica.
NOTE
If you remove a replica while it is unavailable or failed, when it comes back online it will discover that it no longer belongs
to the availability group.
JOIN
Causes the local server instance to host a secondary replica in the specified availability group.
Supported only on a secondary replica that has not yet been joined to the availability group.
For more information, see Join a Secondary Replica to an Availability Group (SQL Server).
FAILOVER
Initiates a manual failover of the availability group without data loss to the secondary replica to which you are
connected. The replica that will host the primary replica is the failover target. The failover target will take over the
primary role and recover its copy of each database and bring them online as the new primary databases. The
former primary replica concurrently transitions to the secondary role, and its databases become secondary
databases and are immediately suspended. Potentially, these roles can be switched back and forth by a series of
failovers.
Supported only on a synchronous-commit secondary replica that is currently synchronized with the primary
replica. Note that for a secondary replica to be synchronized the primary replica must also be running in
synchronous-commit mode.
NOTE
A failover command returns as soon as the failover target has accepted the command. However, database recovery occurs
asynchronously after the availability group has finished failing over.
For information about the limitations, prerequisites and recommendations for performing a planned manual
failover, see Perform a Planned Manual Failover of an Availability Group (SQL Server).
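A minimal sketch, executed on the synchronized secondary replica that will become the new primary (AG1 is an illustrative name):

```sql
-- Planned manual failover without data loss; run on the target secondary replica
ALTER AVAILABILITY GROUP [AG1] FAILOVER;
```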
FORCE_FAILOVER_ALLOW_DATA_LOSS
Caution
Forcing failover, which might involve some data loss, is strictly a disaster recovery method. Therefore, we strongly
recommend that you force failover only if the primary replica is no longer running, you are willing to risk losing
data, and you must restore service to the availability group immediately.
Supported only on a replica whose role is in the SECONDARY or RESOLVING state. The replica on which you
enter a failover command is known as the failover target.
Forces failover of the availability group, with possible data loss, to the failover target. The failover target will take
over the primary role and recover its copy of each database and bring them online as the new primary databases.
On any remaining secondary replicas, every secondary database is suspended until manually resumed. When the
former primary replica becomes available, it will switch to the secondary role, and its databases will become
suspended secondary databases.
NOTE
A failover command returns as soon as the failover target has accepted the command. However, database recovery occurs
asynchronously after the availability group has finished failing over.
For information about the limitations, prerequisites and recommendations for forcing failover and the effect of a
forced failover on the former primary databases in the availability group, see Perform a Forced Manual Failover of
an Availability Group (SQL Server).
ADD LISTENER 'dns_name' ( <add_listener_option> )
Defines a new availability group listener for this availability group. Supported only on the primary replica.
IMPORTANT
Before you create your first listener, we strongly recommend that you read Create or Configure an Availability Group
Listener (SQL Server).
After you create a listener for a given availability group, we strongly recommend that you do the following:
Ask your network administrator to reserve the listener's IP address for its exclusive use.
Give the listener's DNS host name to application developers to use in connection strings when requesting client
connections to this availability group.
dns_name
Specifies the DNS host name of the availability group listener. The DNS name of the listener must be unique in
the domain and in NetBIOS.
dns_name is a string value. This name can contain only alphanumeric characters, hyphens (-), and underscores (_), in
any order. DNS host names are case-insensitive. The maximum length is 63 characters.
We recommend that you specify a meaningful string. For example, for an availability group named AG1 , a
meaningful DNS host name would be ag1-listener .
IMPORTANT
NetBIOS recognizes only the first 15 characters in the dns_name. If you have two WSFC clusters that are controlled by the same
Active Directory and you try to create availability group listeners in both clusters using names with more than 15
characters and an identical 15-character prefix, you will get an error reporting that the Virtual Network Name resource could
not be brought online. For information about prefix naming rules for DNS names, see Assigning Domain Names.
IMPORTANT
This command must be repeated on both the primary availability group and secondary availability group instances.
IMPORTANT
We do not recommend DHCP in a production environment. If there is downtime and the DHCP IP lease expires, extra time
is required to register the new DHCP network IP address that is associated with the listener DNS name, which impacts client
connectivity. However, DHCP is good for setting up your development and testing environment to verify basic functions of
availability groups and for integration with your applications.
For example:
WITH DHCP ON ('10.120.19.0','255.255.254.0')
WITH IP ( { ('four_part_ipv4_address','four_part_ipv4_mask') | ('ipv6_address') } [ , ...n ] ) [ , PORT = listener_port ]
Specifies that, instead of using DHCP, the availability group listener will use one or more static IP addresses. To
create an availability group across multiple subnets, each subnet requires one static IP address in the listener
configuration. For a given subnet, the static IP address can be either an IPv4 address or an IPv6 address. Contact
your network administrator to get a static IP address for each subnet that will host an availability replica for the
new availability group.
For example:
WITH IP ( ('10.120.19.155','255.255.254.0') )
four_part_ipv4_address
Specifies an IPv4 four-part address for an availability group listener. For example, 10.120.19.155 .
four_part_ipv4_mask
Specifies an IPv4 four-part mask for an availability group listener. For example, 255.255.254.0 .
ipv6_address
Specifies an IPv6 address for an availability group listener. For example, 2001::4898:23:1002:20f:1fff:feff:b3a3 .
PORT = listener_port
Specifies the port number—listener_port—to be used by an availability group listener that is specified by a WITH
IP clause. PORT is optional.
The default port number, 1433, is supported. However, if you have security concerns, we recommend using a
different port number.
For example: WITH IP ( ('2001::4898:23:1002:20f:1fff:feff:b3a3') ) , PORT = 7777
Security
Permissions
Requires ALTER AVAILABILITY GROUP permission on the availability group, CONTROL AVAILABILITY
GROUP permission, ALTER ANY AVAILABILITY GROUP permission, or CONTROL SERVER permission. Also
requires ALTER ANY DATABASE permission.
Examples
A. Joining a secondary replica to an availability group
The following example joins a secondary replica to which you are connected to the AccountsAG availability group.
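Executed while connected to the target secondary replica, the statement takes this minimal form:

```sql
-- Run on the secondary replica being joined to the group
ALTER AVAILABILITY GROUP AccountsAG JOIN;
```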
See Also
CREATE AVAILABILITY GROUP (Transact-SQL)
ALTER DATABASE SET HADR (Transact-SQL)
DROP AVAILABILITY GROUP (Transact-SQL)
sys.availability_replicas (Transact-SQL)
sys.availability_groups (Transact-SQL)
Troubleshoot Always On Availability Groups Configuration (SQL Server)
Overview of Always On Availability Groups (SQL Server)
Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server)
ALTER BROKER PRIORITY (Transact-SQL)
5/4/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a Service Broker conversation priority.
Transact-SQL Syntax Conventions
Syntax
ALTER BROKER PRIORITY ConversationPriorityName
FOR CONVERSATION
{ SET ( [ CONTRACT_NAME = {ContractName | ANY } ]
[ [ , ] LOCAL_SERVICE_NAME = {LocalServiceName | ANY } ]
[ [ , ] REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY } ]
[ [ , ] PRIORITY_LEVEL = { PriorityValue | DEFAULT } ]
)
}
[;]
Arguments
ConversationPriorityName
Specifies the name of the conversation priority to be changed. The name must refer to a conversation priority in
the current database.
SET
Specifies the criteria for determining if the conversation priority applies to a conversation. SET is required and
must contain at least one criterion: CONTRACT_NAME, LOCAL_SERVICE_NAME, REMOTE_SERVICE_NAME,
or PRIORITY_LEVEL.
CONTRACT_NAME = {ContractName | ANY }
Specifies the name of a contract to be used as a criterion for determining if the conversation priority applies to a
conversation. ContractName is a Database Engine identifier, and must specify the name of a contract in the current
database.
ContractName
Specifies that the conversation priority can be applied only to conversations where the BEGIN DIALOG statement
that started the conversation specified ON CONTRACT ContractName.
ANY
Specifies that the conversation priority can be applied to any conversation, regardless of which contract it uses.
If CONTRACT_NAME is not specified, the contract property of the conversation priority is not changed.
LOCAL_SERVICE_NAME = {LocalServiceName | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to a
conversation endpoint.
LocalServiceName is a Database Engine identifier and must specify the name of a service in the current database.
LocalServiceName
Specifies that the conversation priority can be applied to the following:
Any initiator conversation endpoint whose initiator service name matches LocalServiceName.
Any target conversation endpoint whose target service name matches LocalServiceName.
ANY
Specifies that the conversation priority can be applied to any conversation endpoint, regardless of the
name of the local service used by the endpoint.
If LOCAL_SERVICE_NAME is not specified, the local service property of the conversation priority is not
changed.
REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to a
conversation endpoint.
RemoteServiceName is a literal of type nvarchar(256). Service Broker uses a byte-by-byte comparison to
match the RemoteServiceName string. The comparison is case-sensitive and does not consider the current
collation. The target service can be in the current instance of the Database Engine, or a remote instance of
the Database Engine.
'RemoteServiceName'
Specifies that the conversation priority be assigned to the following:
Any initiator conversation endpoint whose associated target service name matches RemoteServiceName.
Any target conversation endpoint whose associated initiator service name matches RemoteServiceName.
ANY
Specifies that the conversation priority applies to any conversation endpoint, regardless of the name of the
remote service associated with the endpoint.
If REMOTE_SERVICE_NAME is not specified, the remote service property of the conversation priority is
not changed.
PRIORITY_LEVEL = { PriorityValue | DEFAULT }
Specifies the priority level to assign to any conversation endpoint that uses the contracts and services that are
specified in the conversation priority. PriorityValue must be an integer literal from 1 (lowest priority) to 10
(highest priority).
If PRIORITY_LEVEL is not specified, the priority level property of the conversation priority is not changed.
Remarks
No properties that are changed by ALTER BROKER PRIORITY are applied to existing conversations. The existing
conversations continue with the priority that was assigned when they were started.
For more information, see CREATE BROKER PRIORITY (Transact-SQL ).
Permissions
Permission for creating a conversation priority defaults to members of the db_ddladmin or db_owner fixed
database roles, and to the sysadmin fixed server role. Requires ALTER permission on the database.
Examples
A. Changing only the priority level of an existing conversation priority.
Changes the priority level, but does not change the contract, local service, or remote service properties.
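A minimal sketch of such a statement; the priority name InitiatorAToTargetPriority and the level 3 are placeholders:

```sql
-- Change only the priority level; contract, local service, and remote service stay as they were
ALTER BROKER PRIORITY InitiatorAToTargetPriority
    FOR CONVERSATION
    SET (PRIORITY_LEVEL = 3);
GO
```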
See Also
CREATE BROKER PRIORITY (Transact-SQL )
DROP BROKER PRIORITY (Transact-SQL )
sys.conversation_priorities (Transact-SQL )
ALTER CERTIFICATE (Transact-SQL)
5/3/2018 • 3 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the private key used to encrypt a certificate, or adds one if none is present. Changes the availability of a
certificate to Service Broker.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
<private_key_spec> ::=
FILE = 'path_to_private_key'
| DECRYPTION BY PASSWORD = 'key_password'
| ENCRYPTION BY PASSWORD = 'password'
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
Arguments
certificate_name
Is the unique name by which the certificate is known in the database.
FILE ='path_to_private_key'
Specifies the complete path, including file name, to the private key. This parameter can be a local path or a UNC
path to a network location. This file will be accessed within the security context of the SQL Server service account.
When you use this option, you must make sure that the service account has access to the specified file.
DECRYPTION BY PASSWORD ='key_password'
Specifies the password that is required to decrypt the private key.
ENCRYPTION BY PASSWORD ='password'
Specifies the password used to encrypt the private key of the certificate in the database. password must meet the
Windows password policy requirements of the computer that is running the instance of SQL Server. For more
information, see Password Policy.
REMOVE PRIVATE KEY
Specifies that the private key should no longer be maintained inside the database.
ACTIVE FOR BEGIN_DIALOG = { ON | OFF }
Makes the certificate available to the initiator of a Service Broker dialog conversation.
Remarks
The private key must correspond to the public key specified by certificate_name.
The DECRYPTION BY PASSWORD clause can be omitted if the password in the file is protected with a null
password.
When the private key of a certificate that already exists in the database is imported from a file, the private key will
be automatically protected by the database master key. To protect the private key with a password, use the
ENCRYPTION BY PASSWORD phrase.
The REMOVE PRIVATE KEY option will delete the private key of the certificate from the database. You can do this
when the certificate will be used to verify signatures or in Service Broker scenarios that do not require a private
key. Do not remove the private key of a certificate that protects a symmetric key.
You do not have to specify a decryption password when the private key is encrypted by using the database master
key.
IMPORTANT
Always make an archival copy of a private key before removing it from a database. For more information, see BACKUP
CERTIFICATE (Transact-SQL).
Permissions
Requires ALTER permission on the certificate.
Examples
A. Changing the password of a certificate
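A sketch of such a statement; the certificate name Shipping04 and the passwords are placeholders:

```sql
-- Re-encrypt the private key with a new password, supplying the old one to decrypt it
ALTER CERTIFICATE Shipping04
    WITH PRIVATE KEY (
        DECRYPTION BY PASSWORD = '<old password>',
        ENCRYPTION BY PASSWORD = '<new password>');
GO
```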
C. Importing a private key for a certificate that is already present in the database
ALTER CERTIFICATE Shipping13
WITH PRIVATE KEY (FILE = 'c:\importedkeys\Shipping13',
DECRYPTION BY PASSWORD = 'GDFLKl8^^GGG4000%');
GO
D. Changing the protection of the private key from a password to the database master key
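Supplying the decryption password while omitting the ENCRYPTION BY PASSWORD clause causes the private key to become protected by the database master key. A sketch; the certificate name Shipping11 and the password are placeholders:

```sql
-- After this statement, the private key is protected by the database master key
ALTER CERTIFICATE Shipping11
    WITH PRIVATE KEY (DECRYPTION BY PASSWORD = '<current password>');
GO
```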
See Also
CREATE CERTIFICATE (Transact-SQL )
DROP CERTIFICATE (Transact-SQL )
BACKUP CERTIFICATE (Transact-SQL )
Encryption Hierarchy
EVENTDATA (Transact-SQL )
ALTER COLUMN ENCRYPTION KEY (Transact-SQL)
5/3/2018 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a column encryption key in a database, adding or dropping an encrypted value. A CEK can have up to two
values, which allows for the rotation of the corresponding column master key. A CEK is used when encrypting
columns using the Always Encrypted (Database Engine) feature. Before adding a CEK value, you must define the
column master key that was used to encrypt the value, by using SQL Server Management Studio or the CREATE
COLUMN MASTER KEY statement.
Transact-SQL Syntax Conventions
Syntax
ALTER COLUMN ENCRYPTION KEY key_name
[ ADD | DROP ] VALUE
(
COLUMN_MASTER_KEY = column_master_key_name
[, ALGORITHM = 'algorithm_name' , ENCRYPTED_VALUE = varbinary_literal ]
) [;]
Arguments
key_name
The column encryption key that you are changing.
column_master_key_name
Specifies the name of the column master key (CMK) used for encrypting the column encryption key (CEK).
algorithm_name
Name of the encryption algorithm used to encrypt the value. The algorithm for the system providers must be
RSA_OAEP. This argument is not valid when dropping a column encryption key value.
varbinary_literal
The CEK BLOB encrypted with the specified column master key. This argument is not valid when dropping a
column encryption key value.
WARNING
Never pass plaintext CEK values in this statement. Doing so will compromise the benefit of this feature.
Remarks
Typically, a column encryption key is created with just one encrypted value. When a column master key needs to
be rotated (the current column master key needs to be replaced with a new column master key), you can add a
new value of the column encryption key, encrypted with the new column master key. This ensures that client
applications can continue to access data encrypted with the column encryption key while the new column master
key is being made available to them. An Always Encrypted enabled driver in a client application that does not
have access to the new master key can still use the column encryption key value encrypted with the old
column master key to access sensitive data. The encryption algorithms that Always Encrypted supports require the
plaintext value to be 256 bits. An encrypted value should be generated by using a key store provider that
encapsulates the key store holding the column master key.
Use sys.columns (Transact-SQL ), sys.column_encryption_keys (Transact-SQL ) and
sys.column_encryption_key_values (Transact-SQL ) to view information about column encryption keys.
Permissions
Requires ALTER ANY COLUMN ENCRYPTION KEY permission on the database.
Examples
A. Adding a column encryption key value
The following example alters a column encryption key called MyCEK .
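A sketch of an ADD VALUE statement; MyCEK and MyCMK2 are placeholder names, and the ENCRYPTED_VALUE shown is a truncated placeholder, since a real value is a long varbinary produced by a key store provider:

```sql
-- Add a second CEK value, encrypted with the new column master key MyCMK2
ALTER COLUMN ENCRYPTION KEY MyCEK
ADD VALUE
(
    COLUMN_MASTER_KEY = MyCMK2,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001630075  -- truncated placeholder
);
GO
```

Once all clients can use the new column master key, the old value can be removed with DROP VALUE.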
See Also
CREATE COLUMN ENCRYPTION KEY (Transact-SQL )
DROP COLUMN ENCRYPTION KEY (Transact-SQL )
CREATE COLUMN MASTER KEY (Transact-SQL )
Always Encrypted (Database Engine)
sys.column_encryption_keys (Transact-SQL )
sys.column_encryption_key_values (Transact-SQL )
sys.columns (Transact-SQL )
ALTER CREDENTIAL (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Changes the properties of a credential.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
ALTER CREDENTIAL credential_name WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]
Arguments
credential_name
Specifies the name of the credential that is being altered.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server.
SECRET ='secret'
Specifies the secret required for outgoing authentication. secret is optional.
Remarks
When a credential is changed, the values of both identity_name and secret are reset. If the optional SECRET
argument is not specified, the value of the stored secret will be set to NULL.
The secret is encrypted by using the service master key. If the service master key is regenerated, the secret is
reencrypted by using the new service master key.
Information about credentials is visible in the sys.credentials catalog view.
Permissions
Requires ALTER ANY CREDENTIAL permission. If the credential is a system credential, requires CONTROL
SERVER permission.
Examples
A. Changing the password of a credential
The following example changes the secret stored in a credential called Saddles. The credential contains the
Windows login RettigB and its password. The new password is added to the credential by using the SECRET clause.
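A sketch of the statement; the secret value is a placeholder:

```sql
-- Both IDENTITY and SECRET are reset by ALTER CREDENTIAL
ALTER CREDENTIAL Saddles WITH IDENTITY = 'RettigB',
    SECRET = '<new password>';
GO
```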
See Also
Credentials (Database Engine)
CREATE CREDENTIAL (Transact-SQL )
DROP CREDENTIAL (Transact-SQL )
ALTER DATABASE SCOPED CREDENTIAL (Transact-SQL )
CREATE LOGIN (Transact-SQL )
sys.credentials (Transact-SQL )
ALTER CRYPTOGRAPHIC PROVIDER (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a cryptographic provider within SQL Server from an Extensible Key Management (EKM) provider.
Transact-SQL Syntax Conventions
Syntax
ALTER CRYPTOGRAPHIC PROVIDER provider_name
[ FROM FILE = path_of_DLL ]
ENABLE | DISABLE
Arguments
provider_name
Name of the Extensible Key Management provider.
path_of_DLL
Path of the .dll file that implements the SQL Server Extensible Key Management interface.
ENABLE | DISABLE
Enables or disables a provider.
Remarks
If the provider changes the .dll file that is used to implement Extensible Key Management in SQL Server, you must
use the ALTER CRYPTOGRAPHIC PROVIDER statement.
When the .dll file path is updated by using the ALTER CRYPTOGRAPHIC PROVIDER statement, SQL Server
performs the following actions:
Disables the provider.
Verifies the DLL signature and ensures that the .dll file has the same GUID as the one recorded in the catalog.
Updates the DLL version in the catalog.
When an EKM provider is set to DISABLE, any attempts on new connections to use the provider with encryption
statements will fail.
To disable a provider, all sessions that use the provider must be terminated.
When an EKM provider dll does not implement all of the necessary methods, ALTER CRYPTOGRAPHIC
PROVIDER can return error 33085:
One or more methods cannot be found in cryptographic provider library '%.*ls'.
When the header file used to create the EKM provider dll is out of date, ALTER CRYPTOGRAPHIC PROVIDER
can return error 33032:
SQL Crypto API version '%02d.%02d' implemented by provider is not supported. Supported version is '%02d.%02d'.
Permissions
Requires CONTROL permission on the cryptographic provider.
Examples
The following example alters a cryptographic provider, called SecurityProvider in SQL Server, to a newer version
of a .dll file. This new version is named c:\SecurityProvider\SecurityProvider_v2.dll and is installed on the server.
The provider's certificate must be installed on the server.
1. Disable the provider to perform the upgrade. This will terminate all open cryptographic sessions.
2. Upgrade the provider .dll file. The GUID must be the same as the previous version, but the version can be
different.
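These steps might look like the following sketch; the provider name SecurityProvider and the file path come from the example description above:

```sql
-- Step 1: disable the provider (terminates open cryptographic sessions)
ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider
DISABLE;
GO

-- Step 2: point the provider at the upgraded .dll file (same GUID, newer version)
ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider
FROM FILE = 'c:\SecurityProvider\SecurityProvider_v2.dll';
GO

-- Re-enable the provider after the upgrade
ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider
ENABLE;
GO
```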
See Also
Extensible Key Management (EKM )
CREATE CRYPTOGRAPHIC PROVIDER (Transact-SQL )
DROP CRYPTOGRAPHIC PROVIDER (Transact-SQL )
CREATE SYMMETRIC KEY (Transact-SQL )
Extensible Key Management Using Azure Key Vault (SQL Server)
ALTER DATABASE (Transact-SQL)
5/3/2018 • 6 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies a database, or the files and filegroups associated with the database. Adds or removes files and
filegroups from a database, changes the attributes of a database or its files and filegroups, changes the database
collation, and sets database options. Database snapshots cannot be modified. To modify database options
associated with replication, use sp_replicationdboption.
Because of its length, the ALTER DATABASE syntax is separated into the following topics:
ALTER DATABASE
The current topic provides the syntax for changing the name and the collation of a database.
ALTER DATABASE File and Filegroup Options
Provides the syntax for adding and removing files and filegroups from a database, and for changing the
attributes of the files and filegroups.
ALTER DATABASE SET Options
Provides the syntax for changing the attributes of a database by using the SET options of ALTER DATABASE.
ALTER DATABASE Database Mirroring
Provides the syntax for the SET options of ALTER DATABASE that are related to database mirroring.
ALTER DATABASE SET HADR
Provides the syntax for the Always On availability groups options of ALTER DATABASE for configuring a
secondary database on a secondary replica of an Always On availability group.
ALTER DATABASE Compatibility Level
Provides the syntax for the SET options of ALTER DATABASE that are related to database compatibility levels.
Transact-SQL Syntax Conventions
For Azure SQL Database, see ALTER DATABASE (Azure SQL Database)
For Azure SQL Data Warehouse, see ALTER DATABASE (Azure SQL Data Warehouse).
For Parallel Data Warehouse, see ALTER DATABASE (Parallel Data Warehouse).
Syntax
-- SQL Server Syntax
ALTER DATABASE { database_name | CURRENT }
{
MODIFY NAME = new_database_name
| COLLATE collation_name
| <file_and_filegroup_options>
| <set_database_options>
}
[;]
<file_and_filegroup_options >::=
<add_or_modify_files>::=
<filespec>::=
<add_or_modify_filegroups>::=
<filegroup_updatability_option>::=
<set_database_options>::=
<optionspec>::=
<auto_option> ::=
<change_tracking_option> ::=
<cursor_option> ::=
<database_mirroring_option> ::=
<date_correlation_optimization_option> ::=
<db_encryption_option> ::=
<db_state_option> ::=
<db_update_option> ::=
<db_user_access_option> ::=
<delayed_durability_option> ::=
<external_access_option> ::=
<FILESTREAM_options> ::=
<HADR_options> ::=
<parameterization_option> ::=
<query_store_options> ::=
<recovery_option> ::=
<service_broker_option> ::=
<snapshot_option> ::=
<sql_option> ::=
<termination> ::=
Arguments
database_name
Is the name of the database to be modified.
NOTE
This option is not available in a Contained Database.
CURRENT
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Designates that the current database in use should be altered.
MODIFY NAME =new_database_name
Renames the database with the name specified as new_database_name.
COLLATE collation_name
Specifies the collation for the database. collation_name can be either a Windows collation name or a SQL
collation name. If not specified, the database is assigned the collation of the instance of SQL Server.
When creating databases with a collation other than the default collation, the data in the database always
respects the specified collation. For SQL Server, when creating a contained database, the internal catalog
information is maintained by using the SQL Server default collation, Latin1_General_100_CI_AS_WS_KS_SC.
For more information about the Windows and SQL collation names, see COLLATE (Transact-SQL).
<delayed_durability_option> ::=
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
For more information see ALTER DATABASE SET Options (Transact-SQL ) and Control Transaction Durability.
<file_and_filegroup_options>::=
For more information, see ALTER DATABASE File and Filegroup Options (Transact-SQL ).
Remarks
To remove a database, use DROP DATABASE.
To decrease the size of a database, use DBCC SHRINKDATABASE.
The ALTER DATABASE statement must run in autocommit mode (the default transaction management mode)
and is not allowed in an explicit or implicit transaction.
The state of a database file (for example, online or offline), is maintained independently from the state of the
database. For more information, see File States. The state of the files within a filegroup determines the
availability of the whole filegroup. For a filegroup to be available, all files within the filegroup must be online. If a
filegroup is offline, any attempt to access the filegroup by a SQL statement fails with an error. When you build
query plans for SELECT statements, the query optimizer avoids nonclustered indexes and indexed views that
reside in offline filegroups. This enables these statements to succeed. However, if the offline filegroup contains
the heap or clustered index of the target table, the SELECT statements fail. Additionally, any INSERT, UPDATE,
or DELETE statement that modifies a table with any index in an offline filegroup will fail.
When a database is in the RESTORING state, most ALTER DATABASE statements will fail. The exception is
setting database mirroring options. A database may be in the RESTORING state during an active restore
operation or when a restore operation of a database or log file fails because of a corrupted backup file.
The plan cache for the instance of SQL Server is cleared by setting one of the following options:
OFFLINE
READ_ONLY
READ_WRITE
PAGE_VERIFY
Clearing the plan cache causes a recompilation of all subsequent execution plans and can cause a sudden,
temporary decrease in query performance. For each cleared cachestore in the plan cache, the SQL Server error
log contains the following informational message: " SQL Server has encountered %d occurrence(s) of
cachestore flush for the '%s' cachestore (part of plan cache) due to some database maintenance or reconfigure
operations". This message is logged every five minutes as long as the cache is flushed within that time interval.
The procedure cache is also flushed in the following scenarios:
A database has the AUTO_CLOSE database option set to ON. When no user connection references or
uses the database, the background task tries to close and shut down the database automatically.
You run several queries against a database that has default options. Then, the database is dropped.
A database snapshot for a source database is dropped.
You successfully rebuild the transaction log for a database.
You restore a database backup.
You detach a database.
Permissions
Requires ALTER permission on the database.
Examples
A. Changing the name of a database
The following example changes the name of the AdventureWorks2012 database to Northwind .
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY NAME = Northwind;
GO
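A similar statement can change the database collation; the database name MyOptionsTest and the collation French_CI_AS below are illustrative:

```sql
USE master;
GO
-- Assign a new collation to an existing database
ALTER DATABASE MyOptionsTest
COLLATE French_CI_AS;
GO
```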
See Also
ALTER DATABASE (Azure SQL Database)
CREATE DATABASE (SQL Server Transact-SQL )
DATABASEPROPERTYEX (Transact-SQL )
DROP DATABASE (Transact-SQL )
SET TRANSACTION ISOL ATION LEVEL (Transact-SQL )
EVENTDATA (Transact-SQL )
sp_configure (Transact-SQL )
sp_spaceused (Transact-SQL )
sys.databases (Transact-SQL )
sys.database_files (Transact-SQL )
sys.database_mirroring_witnesses (Transact-SQL )
sys.data_spaces (Transact-SQL )
sys.filegroups (Transact-SQL )
sys.master_files (Transact-SQL )
System Databases
ALTER DATABASE (Azure SQL Database)
5/16/2018 • 13 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Modifies an Azure SQL Database. Changes the name of a database, the edition and service objective of a database,
joins an elastic pool, and sets database options.
Transact-SQL Syntax Conventions
Syntax
-- Azure SQL Database Syntax
ALTER DATABASE { database_name }
{
MODIFY NAME = new_database_name
| MODIFY ( <edition_options> [, ... n] )
| SET { <option_spec> [ ,... n ] }
| ADD SECONDARY ON SERVER <partner_server_name>
[WITH ( <add-secondary-option>::= [, ... n] ) ]
| REMOVE SECONDARY ON SERVER <partner_server_name>
| FAILOVER
| FORCE_FAILOVER_ALLOW_DATA_LOSS
}
[;]
<edition_options> ::=
{
MAXSIZE = { 100 MB | 500 MB | 1 … 1024 … 4096 GB }
| EDITION = { 'Basic' | 'Standard' | 'Premium' }
| SERVICE_OBJECTIVE = { <service-objective> | ELASTIC_POOL ( name = <elastic_pool_name> ) }
}
<add-secondary-option> ::=
{
ALLOW_CONNECTIONS = { ALL | NO }
| SERVICE_OBJECTIVE =
{ <service-objective>
| { ELASTIC_POOL ( name = <elastic_pool_name>) }
}
}
<service-objective> ::= { 'S0' | 'S1' | 'S2' | 'S3' | 'S4' | 'S6' | 'S7' | 'S9' | 'S12'
| 'P1' | 'P2' | 'P4' | 'P6' | 'P11' | 'P15'
| 'GP_GEN4_1' | 'GP_GEN4_2' | 'GP_GEN4_4' | 'GP_GEN4_8' | 'GP_GEN4_16' | 'GP_GEN4_24'
| 'BC_GEN4_1' | 'BC_GEN4_2' | 'BC_GEN4_4' | 'BC_GEN4_8' | 'BC_GEN4_16' | 'BC_GEN4_24'
| 'GP_GEN5_2' | 'GP_GEN5_4' | 'GP_GEN5_8' | 'GP_GEN5_16' | 'GP_GEN5_24' | 'GP_GEN5_32' | 'GP_GEN5_48' | 'GP_GEN5_80'
| 'BC_GEN5_2' | 'BC_GEN5_4' | 'BC_GEN5_8' | 'BC_GEN5_16' | 'BC_GEN5_24' | 'BC_GEN5_32' | 'BC_GEN5_48' | 'BC_GEN5_80'
}
<option_spec> ::=
{
<auto_option>
| <change_tracking_option>
| <cursor_option>
| <db_encryption_option>
| <db_update_option>
| <db_user_access_option>
| <delayed_durability_option>
| <parameterization_option>
| <query_store_options>
| <snapshot_option>
| <sql_option>
| <target_recovery_time_option>
| <termination>
| <temporal_history_retention>
}
<auto_option> ::=
{
AUTO_CREATE_STATISTICS { OFF | ON [ ( INCREMENTAL = { ON | OFF } ) ] }
| AUTO_SHRINK { ON | OFF }
| AUTO_UPDATE_STATISTICS { ON | OFF }
| AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}
<change_tracking_option> ::=
{
CHANGE_TRACKING
{
= OFF
| = ON [ ( <change_tracking_option_list > [,...n] ) ]
| ( <change_tracking_option_list> [,...n] )
}
}
<change_tracking_option_list> ::=
{
AUTO_CLEANUP = { ON | OFF }
| CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
}
<cursor_option> ::=
{
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
}
<db_encryption_option> ::=
ENCRYPTION { ON | OFF }
<db_update_option> ::=
{ READ_ONLY | READ_WRITE }
<db_user_access_option> ::=
{ RESTRICTED_USER | MULTI_USER }
<parameterization_option> ::=
PARAMETERIZATION { SIMPLE | FORCED }
<query_store_options> ::=
{
QUERY_STORE
{
= OFF
| = ON [ ( <query_store_option_list> [,... n] ) ]
| ( < query_store_option_list> [,... n] )
| CLEAR [ ALL ]
}
}
<query_store_option_list> ::=
{
OPERATION_MODE = { READ_WRITE | READ_ONLY }
| CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = number )
| DATA_FLUSH_INTERVAL_SECONDS = number
| MAX_STORAGE_SIZE_MB = number
| INTERVAL_LENGTH_MINUTES = number
| SIZE_BASED_CLEANUP_MODE = [ AUTO | OFF ]
| QUERY_CAPTURE_MODE = [ ALL | AUTO | NONE ]
| MAX_PLANS_PER_QUERY = number
}
<snapshot_option> ::=
{
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
| READ_COMMITTED_SNAPSHOT {ON | OFF }
| MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT {ON | OFF }
}
<sql_option> ::=
{
ANSI_NULL_DEFAULT { ON | OFF }
| ANSI_NULLS { ON | OFF }
| ANSI_PADDING { ON | OFF }
| ANSI_WARNINGS { ON | OFF }
| ARITHABORT { ON | OFF }
| COMPATIBILITY_LEVEL = { 100 | 110 | 120 | 130 | 140 }
| CONCAT_NULL_YIELDS_NULL { ON | OFF }
| NUMERIC_ROUNDABORT { ON | OFF }
| QUOTED_IDENTIFIER { ON | OFF }
| RECURSIVE_TRIGGERS { ON | OFF }
}
<termination> ::=
{
ROLLBACK AFTER integer [ SECONDS ]
| ROLLBACK IMMEDIATE
| NO_WAIT
}
For full descriptions of the set options, see ALTER DATABASE SET Options (Transact-SQL ) and ALTER
DATABASE Compatibility Level (Transact-SQL ).
Arguments
database_name
Is the name of the database to be modified.
CURRENT
Designates that the current database in use should be altered.
MODIFY NAME =new_database_name
Renames the database with the name specified as new_database_name. The following example changes the name
of a database db1 to db2 :
ALTER DATABASE db1
MODIFY Name = db2 ;
EDITION change fails if the MAXSIZE property for the database is set to a value outside the valid range
supported by that edition.
MODIFY (MAXSIZE = [100 MB | 500 MB | 1 | 1024…4096] GB)
Specifies the maximum size of the database. The maximum size must comply with the valid set of values for the
EDITION property of the database. Changing the maximum size of the database may cause the database
EDITION to be changed. The following table lists the supported MAXSIZE values and the defaults (D) for the SQL
Database service tiers.
DTU-based model
MAXSIZE BASIC S0-S2 S3-S12 P1-P6 P11-P15
100 MB √ √ √ √ √
250 MB √ √ √ √ √
500 MB √ √ √ √ √
1 GB √ √ √ √ √
2 GB √ (D) √ √ √ √
5 GB N/A √ √ √ √
10 GB N/A √ √ √ √
20 GB N/A √ √ √ √
30 GB N/A √ √ √ √
40 GB N/A √ √ √ √
50 GB N/A √ √ √ √
100 GB N/A √ √ √ √
150 GB N/A √ √ √ √
200 GB N/A √ √ √ √
300 GB N/A √ √ √ √
400 GB N/A √ √ √ √
750 GB N/A √ √ √ √
* P11 and P15 allow MAXSIZE up to 4 TB with 1024 GB being the default size. P11 and P15 can use up to 4 TB of
included storage at no additional charge. In the Premium tier, MAXSIZE greater than 1 TB is currently available in
the following regions: US East2, West US, US Gov Virginia, West Europe, Germany Central, South East Asia,
Japan East, Australia East, Canada Central, and Canada East. For additional details regarding resource limitations
for the DTU-based model, see DTU-based resource limits.
The MAXSIZE value for the DTU-based model, if specified, has to be a valid value shown in the table above for the
service tier specified.
vCore-based model
General Purpose service tier - Generation 4 compute platform
Max data size (GB): 1024 1024 1536 3072 4096 4096 4096 4096
PERFORMANCE LEVEL BC_GEN4_1 BC_GEN4_2 BC_GEN4_4 BC_GEN4_8 BC_GEN4_16
Max data size (GB): 1024 1024 1024 1024 2048 4096 4096 4096
If no MAXSIZE value is set when using the vCore model, the default is 32 GB. For additional details regarding
resource limitations for the vCore-based model, see vCore-based resource limits.
The following rules apply to MAXSIZE and EDITION arguments:
If EDITION is specified but MAXSIZE is not specified, the default value for the edition is used. For example,
if the EDITION is set to Standard, and the MAXSIZE is not specified, then the MAXSIZE is automatically
set to 500 MB.
If neither MAXSIZE nor EDITION is specified, the EDITION is set to Standard (S0), and MAXSIZE is set to
250 GB.
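For instance, a statement that follows these rules might look like the sketch below; the database name db1 is a placeholder:

```sql
-- Set both EDITION and MAXSIZE explicitly; MAXSIZE must be valid for the edition
ALTER DATABASE db1 MODIFY (EDITION = 'Standard', MAXSIZE = 50 GB);
```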
MODIFY (SERVICE_OBJECTIVE = <service-objective>)
Specifies the performance level. The following example changes the service objective of a premium database to P6:
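A sketch of the statement; the database name db1 is a placeholder:

```sql
ALTER DATABASE db1 MODIFY (SERVICE_OBJECTIVE = 'P6');
```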
Available values for the service objective are: S0, S1, S2, S3, S4, S6, S7, S9, S12, P1, P2, P4, P6, P11, P15,
GP_Gen4_1, GP_Gen4_2, GP_Gen4_4, GP_Gen4_8, GP_Gen4_16, GP_Gen4_24,
BC_Gen4_1, BC_Gen4_2, BC_Gen4_4, BC_Gen4_8, BC_Gen4_16, BC_Gen4_24, GP_Gen5_2, GP_Gen5_4, GP_Gen5_8,
GP_Gen5_16, GP_Gen5_24, GP_Gen5_32, GP_Gen5_48, GP_Gen5_80, BC_Gen5_2, BC_Gen5_4, BC_Gen5_8, BC_Gen5_16,
BC_Gen5_24, BC_Gen5_32, BC_Gen5_48, BC_Gen5_80.
For service objective descriptions and more information about the size, editions, and the service objectives
combinations, see Azure SQL Database Service Tiers and Performance Levels, DTU-based resource limits and
vCore-based resource limits. Support for PRS service objectives has been removed. For questions, use this
e-mail alias: premium-rs@microsoft.com.
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = <elastic_pool_name>)
To add an existing database to an elastic pool, set the SERVICE_OBJECTIVE of the database to ELASTIC_POOL
and provide the name of the elastic pool. You can also use this option to change the database to a different elastic
pool within the same server. For more information, see Create and manage a SQL Database elastic pool. To
remove a database from an elastic pool, use ALTER DATABASE to set the SERVICE_OBJECTIVE to a single
database performance level.
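A sketch of moving a database into a pool; db1 and pool1 are placeholder names:

```sql
-- Move database db1 into the existing elastic pool named pool1
ALTER DATABASE db1 MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = pool1));
```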
ADD SECONDARY ON SERVER <partner_server_name>
Creates a geo-replication secondary database with the same name on a partner server, making the local database
into a geo-replication primary, and begins asynchronously replicating data from the primary to the new
secondary. If a database with the same name already exists on the secondary, the command fails. The command is
executed on the master database on the server hosting the local database that becomes the primary.
WITH ALLOW_CONNECTIONS { ALL | NO }
When ALLOW_CONNECTIONS is not specified, it is set to ALL by default. If it is set to ALL, the secondary is a
read-only database that allows all logins with the appropriate permissions to connect.
WITH SERVICE_OBJECTIVE { S0, S1, S2, S3, S4, S6, S7, S9, S12, P1, P2, P4, P6, P11, P15,
GP_Gen4_1, GP_Gen4_2, GP_Gen4_4, GP_Gen4_8, GP_Gen4_16, GP_Gen4_24,
BC_Gen4_1, BC_Gen4_2, BC_Gen4_4, BC_Gen4_8, BC_Gen4_16, BC_Gen4_24,
GP_Gen5_2, GP_Gen5_4, GP_Gen5_8, GP_Gen5_16, GP_Gen5_24, GP_Gen5_32, GP_Gen5_48, GP_Gen5_80,
BC_Gen5_2, BC_Gen5_4, BC_Gen5_8, BC_Gen5_16, BC_Gen5_24, BC_Gen5_32, BC_Gen5_48, BC_Gen5_80 }
When SERVICE_OBJECTIVE is not specified, the secondary database is created at the same service level as the
primary database. When SERVICE_OBJECTIVE is specified, the secondary database is created at the specified
level. This option supports creating geo-replicated secondaries with less expensive service levels. The
SERVICE_OBJECTIVE specified must be within the same edition as the source. For example, you cannot specify
S0 if the edition is premium.
ELASTIC_POOL (name = <elastic_pool_name>)
When ELASTIC_POOL is not specified, the secondary database is not created in an elastic pool. When
ELASTIC_POOL is specified, the secondary database is created in the specified pool.
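A sketch combining these options; the database name db1, the partner server name server2, and the service objective are placeholders:

```sql
-- Executed in the master database of the server hosting the primary db1
ALTER DATABASE db1
    ADD SECONDARY ON SERVER server2
    WITH (ALLOW_CONNECTIONS = ALL, SERVICE_OBJECTIVE = 'S0');
```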
IMPORTANT
The user executing the ADD SECONDARY command must be DBManager on the primary server, have db_owner membership in
the local database, and be DBManager on the secondary server.
IMPORTANT
The user executing the REMOVE SECONDARY command must be DBManager on the primary server.
FAILOVER
Promotes the secondary database in geo-replication partnership on which the command is executed to become
the primary and demotes the current primary to become the new secondary. As part of this process, the geo-
replication mode is temporarily switched from asynchronous mode to synchronous mode. During the failover
process:
1. The primary stops taking new transactions.
2. All outstanding transactions are flushed to the secondary.
3. The secondary becomes the primary and begins asynchronous geo-replication with the old primary, which
becomes the new secondary.
This sequence ensures that no data loss occurs. The period during which both databases are unavailable is on the
order of 0-25 seconds while the roles are switched. The total operation should take no longer than about one
minute. If the primary database is unavailable when this command is issued, the command fails with an error
message indicating that the primary database is not available. If the failover process does not complete and
appears stuck, you can use the force failover command and accept data loss; then, if you need to recover the
lost data, call devops (CSS) to recover it.
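A sketch of a planned failover; db1 is a placeholder, and the statement is issued against the server hosting the secondary:

```sql
-- Executed in the master database of the server hosting the geo-replicated secondary
ALTER DATABASE db1 FAILOVER;
```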
IMPORTANT
The user executing the FAILOVER command must be DBManager on both the primary server and the secondary server.
FORCE_FAILOVER_ALLOW_DATA_LOSS
Promotes the secondary database in geo-replication partnership on which the command is executed to become
the primary and demotes the current primary to become the new secondary. Use this command only when the
current primary is no longer available. It is designed for disaster recovery only, when restoring availability is
critical, and some data loss is acceptable.
During a forced failover:
1. The specified secondary database immediately becomes the primary database and begins accepting new
transactions.
2. When the original primary can reconnect with the new primary, an incremental backup is taken on the
original primary, and the original primary becomes a new secondary.
3. To recover data from this incremental backup on the old primary, the user engages devops/CSS.
4. If there are additional secondaries, they are automatically reconfigured to become secondaries of the new
primary. This process is asynchronous and there may be a delay until this process completes. Until the
reconfiguration has completed, the secondaries continue to be secondaries of the old primary.
IMPORTANT
The user executing the FORCE_FAILOVER_ALLOW_DATA_LOSS command must be DBManager on both the primary server
and the secondary server.
Remarks
To remove a database, use DROP DATABASE.
To decrease the size of a database, use DBCC SHRINKDATABASE.
The ALTER DATABASE statement must run in autocommit mode (the default transaction management mode) and
is not allowed in an explicit or implicit transaction.
Clearing the plan cache causes a recompilation of all subsequent execution plans and can cause a sudden,
temporary decrease in query performance. For each cleared cachestore in the plan cache, the SQL Server error
log contains the following informational message: "SQL Server has encountered %d occurrence(s) of cachestore
flush for the '%s' cachestore (part of plan cache) due to some database maintenance or reconfigure operations".
This message is logged every five minutes as long as the cache is flushed within that time interval.
The procedure cache is also flushed in the following scenarios:
A database has the AUTO_CLOSE database option set to ON. When no user connection references or uses
the database, the background task tries to close and shut down the database automatically.
You run several queries against a database that has default options. Then, the database is dropped.
You successfully rebuild the transaction log for a database.
You restore a database backup.
You detach a database.
IMPORTANT
The owner of the database cannot alter the database unless they are a member of the dbmanager role.
Examples
A. Check the edition options and change them:
ALTER DATABASE [db1] MODIFY (EDITION = 'Premium', MAXSIZE = 1024 GB, SERVICE_OBJECTIVE = 'P15');
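The example heading above also mentions checking the edition options. One way to do that is with the DATABASEPROPERTYEX function listed in See also; this is a sketch, reusing the db1 name from the example above:

```sql
-- Inspect the current edition, service objective, and maximum size
-- of database db1 (run in the context of the logical server).
SELECT
    DATABASEPROPERTYEX('db1', 'Edition')          AS Edition,
    DATABASEPROPERTYEX('db1', 'ServiceObjective') AS ServiceObjective,
    DATABASEPROPERTYEX('db1', 'MaxSizeInBytes')   AS MaxSizeInBytes;
```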
See also
CREATE DATABASE - Azure SQL Database
DATABASEPROPERTYEX
DROP DATABASE
SET TRANSACTION ISOLATION LEVEL
EVENTDATA
sp_configure
sp_spaceused
sys.databases
sys.database_files
sys.database_mirroring_witnesses
sys.data_spaces
sys.filegroups
sys.master_files
System Databases
ALTER DATABASE (Azure SQL Data Warehouse)
5/4/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the name, maximum size, or service objective for a database.
Transact-SQL Syntax Conventions
Syntax
ALTER DATABASE database_name
{
MODIFY NAME = new_database_name
| MODIFY ( <edition_option> [, ... n] )
}
[;]
<edition_option> ::=
MAXSIZE = {
250 | 500 | 750 | 1024 | 5120 | 10240 | 20480
| 30720 | 40960 | 51200 | 61440 | 71680 | 81920
| 92160 | 102400 | 153600 | 204800 | 245760
} GB
| SERVICE_OBJECTIVE = {
'DW100' | 'DW200' | 'DW300' | 'DW400' | 'DW500'
| 'DW600' | 'DW1000' | 'DW1200' | 'DW1500' | 'DW2000'
| 'DW3000' | 'DW6000' | 'DW1000c' | 'DW1500c' | 'DW2000c'
| 'DW2500c' | 'DW3000c' | 'DW5000c' | 'DW6000c' | 'DW7500c'
| 'DW10000c' | 'DW15000c' | 'DW30000c'
}
Arguments
database_name
Specifies the name of the database to be modified.
MODIFY NAME = new_database_name
Renames the database with the name specified as new_database_name.
MAXSIZE
The default is 245,760 GB (240 TB).
Applies to: Optimized for Elasticity performance tier
The maximum allowable size for the database. The database cannot grow beyond MAXSIZE.
Applies to: Optimized for Compute performance tier
The maximum allowable size for rowstore data in the database. Data stored in rowstore tables, a columnstore
index's deltastore, or a nonclustered index on a clustered columnstore index cannot grow beyond MAXSIZE. Data
compressed into columnstore format does not have a size limit and is not constrained by MAXSIZE.
SERVICE_OBJECTIVE
Specifies the performance level. For more information about service objectives for SQL Data Warehouse, see
Performance Tiers.
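As a sketch of how the SERVICE_OBJECTIVE argument is used in practice (MySQLDW is a hypothetical database name; run while connected to the master database):

```sql
-- Scale a hypothetical data warehouse to a new service objective.
ALTER DATABASE MySQLDW
MODIFY (SERVICE_OBJECTIVE = 'DW1000c');
```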
Permissions
Requires these permissions:
Server-level principal login (the one created by the provisioning process), or
Member of the dbmanager database role.
The owner of the database cannot alter the database unless the owner is a member of the dbmanager role.
General Remarks
The current database must be a different database than the one you are altering; therefore, ALTER must be run
while connected to the master database.
SQL Data Warehouse is set to COMPATIBILITY_LEVEL 130 and cannot be changed. For more details, see
Improved Query Performance with Compatibility Level 130 in Azure SQL Database.
To decrease the size of a database, use DBCC SHRINKDATABASE.
Examples
Before you run these examples, make sure the database you are altering is not the current database. The current
database must be a different database than the one you are altering; therefore, ALTER must be run while
connected to the master database.
A. Change the name of the database
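The statement for this example did not survive in this copy; based on the MODIFY NAME argument described above, it would look something like the following (AdventureWorksDW and AdventureWorksDW2 are hypothetical names):

```sql
-- Rename a database; must be run while connected to master.
ALTER DATABASE AdventureWorksDW
MODIFY NAME = AdventureWorksDW2;
```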
See Also
CREATE DATABASE (Azure SQL Data Warehouse)
SQL Data Warehouse list of reference topics
ALTER DATABASE (Parallel Data Warehouse)
5/25/2018 • 6 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Modifies the maximum database size options for replicated tables, distributed tables, and the transaction log in
Parallel Data Warehouse. Use this statement to manage disk space allocations for a database as it grows or shrinks
in size. The topic also describes syntax related to setting database options in Parallel Data Warehouse.
Transact-SQL Syntax Conventions (Transact-SQL)
Syntax
-- Parallel Data Warehouse
ALTER DATABASE database_name
SET ( <set_database_options> | <db_encryption_option> )
[;]
<set_database_options> ::=
{
AUTOGROW = { ON | OFF }
| REPLICATED_SIZE = size [GB]
| DISTRIBUTED_SIZE = size [GB]
| LOG_SIZE = size [GB]
| SET AUTO_CREATE_STATISTICS { ON | OFF }
| SET AUTO_UPDATE_STATISTICS { ON | OFF }
| SET AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}
<db_encryption_option> ::=
ENCRYPTION { ON | OFF }
Arguments
database_name
The name of the database to be modified. To display a list of databases on the appliance, use sys.databases
(Transact-SQL).
AUTOGROW = { ON | OFF }
Updates the AUTOGROW option. When AUTOGROW is ON, Parallel Data Warehouse automatically increases
the allocated space for replicated tables, distributed tables, and the transaction log as necessary to accommodate
growth in storage requirements. When AUTOGROW is OFF, Parallel Data Warehouse returns an error if replicated
tables, distributed tables, or the transaction log exceeds the maximum size setting.
REPLICATED_SIZE = size [GB]
Specifies the new maximum gigabytes per Compute node for storing all of the replicated tables in the database
being altered. If you are planning for appliance storage space, you will need to multiply REPLICATED_SIZE by the
number of Compute nodes in the appliance.
DISTRIBUTED_SIZE = size [GB]
Specifies the new maximum gigabytes per database for storing all of the distributed tables in the database being
altered. The size is distributed across all of the Compute nodes in the appliance.
LOG_SIZE = size [GB]
Specifies the new maximum gigabytes per database for storing all of the transaction logs in the database being
altered. The size is distributed across all of the Compute nodes in the appliance.
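Combining the size arguments above, adjusting the allocations for a hypothetical database might be sketched as follows (the name and sizes are illustrative only):

```sql
-- Raise the per-Compute-node cap for replicated tables to 2 GB,
-- then raise the appliance-wide cap for distributed tables.
ALTER DATABASE SalesDW
SET ( REPLICATED_SIZE = 2 GB );

ALTER DATABASE SalesDW
SET ( DISTRIBUTED_SIZE = 1000 GB );
```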
ENCRYPTION { ON | OFF }
Sets the database to be encrypted (ON) or not encrypted (OFF). Encryption can only be configured for Parallel
Data Warehouse when sp_pdw_database_encryption has been set to 1. A database encryption key must be created
before transparent data encryption can be configured. For more information about database encryption, see
Transparent Data Encryption (TDE).
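Following the <db_encryption_option> grammar above, enabling TDE might be sketched like this (SalesDW is a hypothetical database name; this assumes sp_pdw_database_encryption has been set to 1 and a database encryption key already exists):

```sql
-- Turn on transparent data encryption for the database.
ALTER DATABASE SalesDW
SET ( ENCRYPTION ON );
```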
SET AUTO_CREATE_STATISTICS { ON | OFF }
When the automatic create statistics option,
AUTO_CREATE_STATISTICS, is ON, the Query Optimizer creates statistics on individual columns in the query
predicate, as necessary, to improve cardinality estimates for the query plan. These single-column statistics are
created on columns that do not already have a histogram in an existing statistics object.
Default is ON for new databases created after upgrading to AU7. The default is OFF for databases created prior to
the upgrade.
For more information about statistics, see Statistics.
SET AUTO_UPDATE_STATISTICS { ON | OFF }
When the automatic update statistics option,
AUTO_UPDATE_STATISTICS, is ON, the query optimizer determines when statistics might be out-of-date and
then updates them when they are used by a query. Statistics become out-of-date after operations insert, update,
delete, or merge change the data distribution in the table or indexed view. The query optimizer determines when
statistics might be out-of-date by counting the number of data modifications since the last statistics update and
comparing the number of modifications to a threshold. The threshold is based on the number of rows in the table
or indexed view.
Default is ON for new databases created after upgrading to AU7. The default is OFF for databases created prior to
the upgrade.
For more information about statistics, see Statistics.
SET AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
The asynchronous statistics update option,
AUTO_UPDATE_STATISTICS_ASYNC, determines whether the Query Optimizer uses synchronous or
asynchronous statistics updates. The AUTO_UPDATE_STATISTICS_ASYNC option applies to statistics objects
created for indexes, single columns in query predicates, and statistics created with the CREATE STATISTICS
statement.
Default is ON for new databases created after upgrading to AU7. The default is OFF for databases created prior to
the upgrade.
For more information about statistics, see Statistics.
Permissions
Requires the ALTER permission on the database.
Error Messages
If auto-stats is disabled and you try to alter the statistics settings, PDW gives the error "This option is not
supported in PDW." The system administrator can enable auto-stats by enabling the feature switch
AutoStatsEnabled.
General Remarks
The values for REPLICATED_SIZE, DISTRIBUTED_SIZE, and LOG_SIZE can be greater than, equal to, or less than
the current values for the database.
Limitations and Restrictions
Grow and shrink operations are approximate. The resulting actual sizes can vary from the size parameters.
Parallel Data Warehouse does not perform the ALTER DATABASE statement as an atomic operation. If the
statement is aborted during execution, changes that have already occurred will remain.
The statistics settings only work if the administrator has enabled auto-stats. If you are an administrator, use the
feature switch AutoStatsEnabled to enable or disable auto-stats.
Locking Behavior
Takes a shared lock on the DATABASE object. You cannot alter a database that is in use by another user for reading
or writing. This includes sessions that have issued a USE statement on the database.
Performance
Shrinking a database can take a large amount of time and system resources, depending on the size of the actual
data within the database, and the amount of fragmentation on disk. For example, shrinking a database could take
several hours or more.
For a comprehensive example demonstrating all the steps in implementing TDE, see Transparent Data Encryption
(TDE).
To check the current automatic statistics settings, query sys.databases:
SELECT NAME,
is_auto_create_stats_on,
is_auto_update_stats_on,
is_auto_update_stats_async_on
FROM sys.databases;
See Also
CREATE DATABASE (Parallel Data Warehouse)
DROP DATABASE (Transact-SQL)
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a database audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions
Syntax
ALTER DATABASE AUDIT SPECIFICATION audit_specification_name
{
[ FOR SERVER AUDIT audit_name ]
[ { { ADD | DROP } (
{ <audit_action_specification> | audit_action_group_name }
)
} [, ...n] ]
[ WITH ( STATE = { ON | OFF } ) ]
}
[ ; ]
<audit_action_specification>::=
{
action [ ,...n ] ON [ class :: ] securable [ ( column [ ,...n ] ) ]
BY principal [ ,...n ]
}
Arguments
audit_specification_name
The name of the audit specification.
audit_name
The name of the audit to which this specification is applied.
audit_action_specification
Name of one or more database-level auditable actions. For a list of audit action groups, see SQL Server Audit
Action Groups and Actions.
audit_action_group_name
Name of one or more groups of database-level auditable actions. For a list of audit action groups, see SQL Server
Audit Action Groups and Actions.
class
Class name (if applicable) on the securable.
securable
Table, view, or other securable object in the database on which to apply the audit action or audit action group. For
more information, see Securables.
column
Column name (if applicable) on the securable.
principal
Name of SQL Server principal on which to apply the audit action or audit action group. For more information, see
Principals (Database Engine).
WITH ( STATE = { ON | OFF } )
Enables or disables the audit from collecting records for this audit specification. Audit specification state changes
must be done outside a user transaction and may not have other changes in the same statement when the
transition is ON to OFF.
Remarks
Database audit specifications are non-securable objects that reside in a given database. You must set the state of
an audit specification to the OFF option in order to make changes to a database audit specification. If ALTER
DATABASE AUDIT SPECIFICATION is executed when an audit is enabled with any options other than
STATE=OFF, you will receive an error message. For more information, see tempdb Database.
Permissions
Users with the ALTER ANY DATABASE AUDIT permission can alter database audit specifications and bind them
to any audit.
After a database audit specification is created, it can be viewed by principals with the CONTROL SERVER or
ALTER ANY DATABASE AUDIT permissions, the sysadmin account, or principals having explicit access to the
audit.
Examples
The following example alters a database audit specification called HIPPA_Audit_DB_Specification that audits the
SELECT statements by the dbo user, for a SQL Server audit called HIPPA_Audit.
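The example statement itself is missing from this copy; based on the syntax above, it would be along these lines (dbo.Table1 is an assumed securable):

```sql
-- Audit SELECTs on dbo.Table1 issued by the dbo principal,
-- then turn the specification back on.
ALTER DATABASE AUDIT SPECIFICATION HIPPA_Audit_DB_Specification
FOR SERVER AUDIT HIPPA_Audit
ADD (SELECT ON dbo.Table1 BY dbo)
WITH (STATE = ON);
```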
For a full example about how to create an audit, see SQL Server Audit (Database Engine).
See Also
CREATE SERVER AUDIT (Transact-SQL)
ALTER SERVER AUDIT (Transact-SQL)
DROP SERVER AUDIT (Transact-SQL)
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL)
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL)
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL)
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sys.fn_get_audit_file (Transact-SQL)
sys.server_audits (Transact-SQL)
sys.server_file_audits (Transact-SQL)
sys.server_audit_specifications (Transact-SQL)
sys.server_audit_specification_details (Transact-SQL)
sys.database_audit_specifications (Transact-SQL)
sys.database_audit_specification_details (Transact-SQL)
sys.dm_server_audit_status (Transact-SQL)
sys.dm_audit_actions (Transact-SQL)
Create a Server Audit and Server Audit Specification
ALTER DATABASE (Transact-SQL) Compatibility Level
5/16/2018 • 29 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets certain database behaviors to be compatible with the specified version of SQL Server. For other ALTER
DATABASE options, see ALTER DATABASE (Transact-SQL).
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 | 90 }
Arguments
database_name
Is the name of the database to be modified.
COMPATIBILITY_LEVEL { 140 | 130 | 120 | 110 | 100 | 90 | 80 }
Is the version of SQL Server with which the database is to be made compatible. The following compatibility level
values can be configured:
SQL Server 2017 (14.x): Database Engine version 14, default compatibility level 140, supported compatibility level values 140, 130, 120, 110, 100.
NOTE
As of January 2018, in Azure SQL Database, the default compatibility level is 140 for newly created databases. We do not
update the database compatibility level for existing databases; customers may do so at their own discretion. That said,
we highly recommend that customers plan on moving to the latest compatibility level in order to leverage the latest
improvements.
If you want to leverage database compatibility level 140 for your database overall, but you have reason to prefer the
cardinality estimation model of SQL Server 2012 (11.x), mapping to database compatibility level 110, see ALTER
DATABASE SCOPED CONFIGURATION (Transact-SQL), and in particular its keyword LEGACY_CARDINALITY_ESTIMATION = ON.
For details about how to assess the performance differences of your most important queries, between two compatibility
levels on Azure SQL Database, see Improved Query Performance with Compatibility Level 130 in Azure SQL Database. Note
that this article refers to compatibility level 130 and SQL Server, but the same methodology applies for moves to 140 for
SQL Server and Azure SQL Database.
Execute the following query to determine the version of the Database Engine that you are connected to.
SELECT SERVERPROPERTY('ProductVersion');
NOTE
Not all features that vary by compatibility level are supported on Azure SQL Database.
To determine the current compatibility level, query the compatibility_level column of sys.databases (Transact-SQL).
Remarks
For all installations of SQL Server, the default compatibility level is set to the version of the Database Engine.
Databases are set to this level unless the model database has a lower compatibility level. When a database is
upgraded from any earlier version of SQL Server, the database retains its existing compatibility level, if it is at least the
minimum allowed for that instance of SQL Server. Upgrading a database with a compatibility level lower than the
allowed level automatically sets the database to the lowest compatibility level allowed. This applies to both system
and user databases.
The following behaviors are expected for SQL Server 2017 (14.x) when a database is attached or restored, and after
an in-place upgrade:
If the compatibility level of a user database was 100 or higher before the upgrade, it remains the same after
upgrade.
If the compatibility level of a user database was 90 before upgrade, in the upgraded database, the compatibility
level is set to 100, which is the lowest supported compatibility level in SQL Server 2017 (14.x).
The compatibility levels of the tempdb, model, msdb and Resource databases are set to the current
compatibility level after upgrade.
The master system database retains the compatibility level it had before upgrade.
Use ALTER DATABASE to change the compatibility level of the database. The new compatibility level setting for a
database takes effect when a USE <database> command is issued, or a new login is processed with that database
as the default database context.
To view the current compatibility level of a database, query the compatibility_level column in the sys.databases
catalog view.
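Putting the two remarks above together, checking and then raising the level for a hypothetical database might look like this:

```sql
-- Check the current compatibility level.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'MyDatabase';

-- Raise it to 140; the change takes effect on the next USE
-- of the database or the next login that defaults to it.
ALTER DATABASE MyDatabase
SET COMPATIBILITY_LEVEL = 140;
```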
NOTE
A distribution database that was created in an earlier version of SQL Server and is upgraded to SQL Server 2016 (13.x) RTM
or Service Pack 1 has a compatibility level of 90, which is not supported for other databases. This does not have an impact
on the functionality of replication. Upgrading to later service packs and versions of SQL Server will result in the compatibility
level of the distribution database being increased to match that of the master database.
TIP
If an application was tested and certified on a given SQL Server version, then it was implicitly tested and certified on that
SQL Server version's native database compatibility level.
So, database compatibility level provides an easy certification path for an existing application, when using the database
compatibility level corresponding to the tested SQL Server version.
For more information about differences between compatibility levels, see the appropriate sections later in this article.
To upgrade the SQL Server Database Engine to the latest version, while maintaining the database compatibility
level that existed before the upgrade and its supportability status, it is recommended to perform static functional
surface area validation of the application code in the database, by using the Microsoft Data Migration Assistant
tool (DMA). The absence of errors in the DMA tool output about missing or incompatible functionality protects the
application from any functional regressions on the new target version. For more information on the DMA tool, see
here.
NOTE
DMA supports database compatibility level 100 and above; SQL Server 2005 is excluded as a source version.
IMPORTANT
Microsoft recommends that some minimal testing is done to validate the success of an upgrade, while maintaining the
previous database compatibility level. You should determine what minimal testing means for your own application and
scenario.
NOTE
Microsoft provides query plan shape protection when:
The new SQL Server version (target) runs on hardware that is comparable to the hardware where the previous SQL
Server version (source) was running.
The same supported database compatibility level is used both at the target SQL Server and source SQL Server.
Any query plan shape regression (as compared to the source SQL Server) that occurs in the above conditions will be
addressed. Please contact Microsoft Customer Support if this is the case.
IMPORTANT
Discontinued functionality introduced in a given SQL Server version is not protected by compatibility level. This refers to
functionality that was removed from the SQL Server Database Engine.
For example, the FASTFIRSTROW hint was discontinued in SQL Server 2012 (11.x) and replaced with the
OPTION (FAST n) hint. Setting the database compatibility level to 110 will not restore the discontinued hint. For more
information on discontinued functionality, see Discontinued Database Engine Functionality in SQL Server 2016, Discontinued
Database Engine Functionality in SQL Server 2014, Discontinued Database Engine Functionality in SQL Server 2012, and
Discontinued Database Engine Functionality in SQL Server 2008.
IMPORTANT
Breaking changes introduced in a given SQL Server version may not be protected by compatibility level. This refers to
behavior changes between versions of the SQL Server Database Engine. Transact-SQL behavior is usually protected by
compatibility level. However, changed or removed system objects are not protected by compatibility level.
An example of a breaking change protected by compatibility level is an implicit conversion from datetime to datetime2 data
types. Under database compatibility level 130, these show improved accuracy by accounting for the fractional milliseconds,
resulting in different converted values. To restore previous conversion behavior, set the database compatibility level to 120
or lower.
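A minimal sketch of the conversion in question; under level 130 the fractional milliseconds of datetime are taken into account, so the resulting datetime2 value can differ from the one produced under level 120 or lower:

```sql
-- Convert a datetime with a 3 ms fraction to datetime2(7).
-- Under compatibility level 130 the conversion is more precise,
-- so the two levels can return different values for this query.
DECLARE @dt datetime = '1900-01-01 00:00:00.003';
SELECT CAST(@dt AS datetime2(7)) AS converted_value;
```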
Examples of breaking changes not protected by compatibility level are:
Changed column names in system objects. In SQL Server 2012 (11.x) the column single_pages_kb in sys.dm_os_sys_info
was renamed to pages_kb. Regardless of the compatibility level, the query
SELECT single_pages_kb FROM sys.dm_os_sys_info will produce error 207 (Invalid column name).
Removed system objects. In SQL Server 2012 (11.x), sp_dboption was removed. Regardless of the compatibility
level, the statement EXEC sp_dboption 'AdventureWorks2016CTP3', 'autoshrink', 'FALSE'; will produce error 2812
(Could not find stored procedure 'sp_dboption').
For more information on breaking changes, see Breaking Changes to Database Engine Features in SQL Server 2017,
Breaking Changes to Database Engine Features in SQL Server 2016, Breaking Changes to Database Engine Features in SQL
Server 2014, Breaking Changes to Database Engine Features in SQL Server 2012, and Breaking Changes to Database
Engine Features in SQL Server 2008.
COMPATIBILITY-LEVEL SETTING OF 130 OR LOWER vs. COMPATIBILITY-LEVEL SETTING OF 140

130 or lower: Cardinality estimates for statements referencing multi-statement table-valued functions use a fixed row guess.
140: Cardinality estimates for eligible statements referencing multi-statement table-valued functions use the actual cardinality of the function output. This is enabled via interleaved execution for multi-statement table-valued functions.

130 or lower: Batch-mode queries that request insufficient memory grant sizes that result in spills to disk may continue to have issues on consecutive executions.
140: Batch-mode queries that request insufficient memory grant sizes that result in spills to disk may have improved performance on consecutive executions. This is enabled via batch-mode memory grant feedback, which updates the memory grant size of a cached plan if spills have occurred for batch-mode operators.

130 or lower: Batch-mode queries that request an excessive memory grant size that results in concurrency issues may continue to have issues on consecutive executions.
140: Batch-mode queries that request an excessive memory grant size that results in concurrency issues may have improved concurrency on consecutive executions. This is enabled via batch-mode memory grant feedback, which updates the memory grant size of a cached plan if an excessive amount was originally requested.

130 or lower: Batch-mode queries that contain join operators are eligible for three physical join algorithms: nested loop, hash join, and merge join. If cardinality estimates are incorrect for join inputs, an inappropriate join algorithm may be selected. If this occurs, performance suffers and the inappropriate join algorithm remains in use until the cached plan is recompiled.
140: There is an additional join operator called adaptive join. If cardinality estimates are incorrect for the outer build join input, an inappropriate join algorithm may be selected. If this occurs and the statement is eligible for an adaptive join, a nested loop is used for smaller join inputs and a hash join is used for larger join inputs, dynamically and without requiring recompilation.

130 or lower: Trivial plans referencing Columnstore indexes are not eligible for batch-mode execution.
140: A trivial plan referencing Columnstore indexes is discarded in favor of a plan that is eligible for batch-mode execution.

130 or lower: The sp_execute_external_script UDX operator can only run in row mode.
140: The sp_execute_external_script UDX operator is eligible for batch-mode execution.

130 or lower: Multi-statement table-valued functions (TVFs) do not have interleaved execution.
140: Interleaved execution for multi-statement TVFs improves plan quality.
Fixes that were under trace flag 4199 in versions of SQL Server prior to SQL Server 2017 are now enabled by
default with compatibility level 140. Trace flag 4199 will still be applicable for new query optimizer fixes that are
released after SQL Server 2017. For information about Trace Flag 4199, see Trace Flag 4199.
COMPATIBILITY-LEVEL SETTING OF 120 OR LOWER vs. COMPATIBILITY-LEVEL SETTING OF 130

120 or lower: The Insert in an Insert-select statement is single-threaded.
130: The Insert in an Insert-select statement is multi-threaded or can have a parallel plan.

120 or lower: Queries on a memory-optimized table execute single-threaded.
130: Queries on a memory-optimized table can now have parallel plans.

120 or lower: Introduced the SQL 2014 cardinality estimator, visible in a query plan as CardinalityEstimationModelVersion="120".
130: Further cardinality estimation (CE) improvements with the Cardinality Estimation Model 130, visible in a query plan as CardinalityEstimationModelVersion="130".

Batch mode versus row mode changes with Columnstore indexes:

120 or lower: Sorts on a table with a Columnstore index are in row mode.
130: Sorts on a table with a Columnstore index are now in batch mode.

120 or lower: Windowing function aggregates such as LAG or LEAD operate in row mode.
130: Windowing aggregates such as LAG or LEAD now operate in batch mode.

120 or lower: Queries on Columnstore tables with multiple distinct clauses operated in row mode.
130: Queries on Columnstore tables with multiple distinct clauses operate in batch mode.

120 or lower: Queries running under MAXDOP 1 or with a serial plan executed in row mode.
130: Queries running under MAXDOP 1 or with a serial plan execute in batch mode.

120 or lower: Statistics can be automatically updated.
130: The logic which automatically updates statistics is more aggressive on large tables. In practice, this should reduce cases where customers have seen performance issues on queries where newly inserted rows are queried frequently but where the statistics had not been updated to include those values.

120 or lower: Trace flag 2371 is OFF by default in SQL Server 2014 (12.x).
130: Trace flag 2371 is ON by default in SQL Server 2016 (13.x). Trace flag 2371 tells the auto statistics updater to sample a smaller yet wiser subset of rows in a table that has a great many rows.

120 or lower: For level 120, statistics are sampled by a single-threaded process.
130: For level 130, statistics are sampled by a multi-threaded process.

120 or lower: 253 incoming foreign keys is the limit.
130: A given table can be referenced by up to 10,000 incoming foreign keys or similar references. For restrictions, see Create Foreign Key Relationships.

120 or lower: The deprecated MD2, MD4, MD5, SHA, and SHA1 hash algorithms are permitted.
130: Only SHA2_256 and SHA2_512 hash algorithms are permitted.
Fixes that were under trace flag 4199 in versions of SQL Server prior to SQL Server 2016 (13.x) are now enabled
by default with compatibility level 130. Trace flag 4199 will still be applicable for new query optimizer fixes that
are released after SQL Server 2016 (13.x). To use the older query optimizer in SQL Database, you must select
compatibility level 110. For information about Trace Flag 4199, see Trace Flag 4199.
COMPATIBILITY-LEVEL SETTING OF 110 OR LOWER vs. COMPATIBILITY-LEVEL SETTING OF 120

110 or lower: The older query optimizer is used.
120: SQL Server 2014 (12.x) includes substantial improvements to the component that creates and optimizes query plans. This new query optimizer feature is dependent upon use of the database compatibility level 120. New database applications should be developed using database compatibility level 120 to take advantage of these improvements. Applications that are migrated from earlier versions of SQL Server should be carefully tested to confirm that good performance is maintained or improved. If performance degrades, you can set the database compatibility level to 110 or earlier to use the older query optimizer methodology.

110 or lower: The language setting is ignored when converting a date value to a string value. Note that this behavior is specific only to the date type. See example B in the Examples section below.
120: The language setting is not ignored when converting a date value to a string value.
COMPATIBILITY-LEVEL SETTING OF 100 OR LOWER vs. COMPATIBILITY-LEVEL SETTING OF 110

100 or lower: Recursive references on the right-hand side of an EXCEPT clause create an infinite loop. Example C in the Examples section below demonstrates this behavior.
110: Recursive references in an EXCEPT clause generate an error in compliance with the ANSI SQL standard.

100 or lower: A recursive common table expression (CTE) allows duplicate column names.
110: Recursive CTEs do not allow duplicate column names.

100 or lower: Disabled triggers are enabled if the triggers are altered.
110: Altering a trigger does not change the state (enabled or disabled) of the trigger.

100 or lower: The OUTPUT INTO table clause ignores the IDENTITY_INSERT SETTING = OFF and allows explicit values to be inserted.
110: You cannot insert explicit values for an identity column in a table when IDENTITY_INSERT is set to OFF.

100 or lower: When the database containment is set to partial, validating the $action field in the OUTPUT clause of a MERGE statement can return a collation error.
110: The collation of the values returned by the $action clause of a MERGE statement is the database collation instead of the server collation, and a collation conflict error is not returned.

100 or lower: A SELECT INTO statement always creates a single-threaded insert operation.
110: A SELECT INTO statement can create a parallel insert operation. When inserting a large number of rows, the parallel operation can improve performance.

100 or lower: Common language runtime (CLR) database objects are executed with version 4 of the CLR. However, some behavior changes introduced in version 4 of the CLR are avoided. For more information, see What's New in CLR Integration.
110: CLR database objects are executed with version 4 of the CLR.

100 or lower: The XQuery functions string-length and substring count each surrogate as two characters.
110: The XQuery functions string-length and substring count each surrogate as one character.

100 or lower: PIVOT is allowed in a recursive common table expression (CTE) query. However, the query returns incorrect results when there are multiple rows per grouping.
110: PIVOT is not allowed in a recursive common table expression (CTE) query. An error is returned.

100 or lower: The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128 when the database is in compatibility level 90 or 100. (Not recommended.) In SQL Server 2012 (11.x), material encrypted using RC4 or RC4_128 can be decrypted in any compatibility level.
110: New material cannot be encrypted using RC4 or RC4_128. Use a newer algorithm such as one of the AES algorithms instead. In SQL Server 2012 (11.x), material encrypted using RC4 or RC4_128 can be decrypted in any compatibility level.
The default style for CAST and CONVERT operations on Under compatibility level 110, the default style for CAST and
time and datetime2 data types is 121 except when either CONVERT operations on time and datetime2 data types is
type is used in a computed column expression. For computed always 121. If your query relies on the old behavior, use a
columns, the default style is 0. This behavior impacts compatibility level less than 110, or explicitly specify the 0
computed columns when they are created, used in queries style in the affected query.
involving auto-parameterization, or used in constraint
definitions. Upgrading the database to compatibility level 110 will not
change user data that has been stored to disk. You must
Example D in the Examples section below shows the difference manually correct this data as appropriate. For example, if you
between styles 0 and 121. It does not demonstrate the used SELECT INTO to create a table from a source that
behavior described above. For more information about date contained a computed column expression described above,
and time styles, see CAST and CONVERT (Transact-SQL). the data (using style 0) would be stored rather than the
computed column definition itself. You would need to
manually update this data to match style 121.
Any columns in remote tables of type smalldatetime that Any columns in remote tables of type smalldatetime that are
are referenced in a partitioned view are mapped as datetime. referenced in a partitioned view are mapped as
Corresponding columns in local tables (in the same ordinal smalldatetime. Corresponding columns in local tables (in the
position in the select list) must be of type datetime. same ordinal position in the select list) must be of type
smalldatetime.
SOUNDEX function implements the following rules: SOUNDEX function implements the following rules:
1) Upper-case H or upper-case W are ignored when 1) If upper-case H or upper-case W separate two consonants
separating two consonants that have the same number in the that have the same number in the SOUNDEX code, the
SOUNDEX code. consonant to the right is ignored
2) If the first 2 characters of character_expression have the 2) If a set of side-by-side consonants have same number in
same number in the SOUNDEX code, both characters are the SOUNDEX code, all of them are excluded except the first.
included. Else, if a set of side-by-side consonants have same
number in the SOUNDEX code, all of them are excluded
except the first.
The additional rules may cause the values computed by the
SOUNDEX function to be different than the values computed
under earlier compatibility levels. After upgrading to
compatibility level 110, you may need to rebuild the indexes,
heaps, or CHECK constraints that use the SOUNDEX function.
For more information, see SOUNDEX (Transact-SQL).
Compatibility-level setting of 90: When you create or alter a partition function, datetime and smalldatetime literals in the function are evaluated assuming US_English as the language setting.
Compatibility-level setting of 100: The current language setting is used to evaluate datetime and smalldatetime literals in the partition function.
Possibility of impact: Medium

Compatibility-level setting of 90: The FOR BROWSE clause is allowed (and ignored) in INSERT and SELECT INTO statements.
Compatibility-level setting of 100: The FOR BROWSE clause is not allowed in INSERT and SELECT INTO statements.
Possibility of impact: Medium

Compatibility-level setting of 90: Full-text predicates are allowed in the OUTPUT clause.
Compatibility-level setting of 100: Full-text predicates are not allowed in the OUTPUT clause.
Possibility of impact: Low

Compatibility-level setting of 90: MERGE is not enforced as a reserved keyword.
Compatibility-level setting of 100: MERGE is a fully reserved keyword. The MERGE statement is supported under both 100 and 90 compatibility levels.
Possibility of impact: Low

Compatibility-level setting of 100: If WITH EXTENDED_LOGICAL_CHECKS is specified, logical checks are performed on indexed views, XML indexes, and spatial indexes, where present. By default, physical consistency checks are performed before the logical consistency checks. If NOINDEX is also specified, only the logical checks are performed.

Compatibility-level setting of 90: When an OUTPUT clause is used with a data manipulation language (DML) statement and a run-time error occurs during statement execution, the entire transaction is terminated and rolled back.
Compatibility-level setting of 100: When an OUTPUT clause is used with a data manipulation language (DML) statement and a run-time error occurs during statement execution, the behavior depends on the SET XACT_ABORT setting. If SET XACT_ABORT is OFF, a statement abort error generated by the DML statement using the OUTPUT clause will terminate the statement, but the execution of the batch continues and the transaction is not rolled back. If SET XACT_ABORT is ON, all run-time errors generated by the DML statement using the OUTPUT clause will terminate the batch, and the transaction is rolled back.
Possibility of impact: Low

Compatibility-level setting of 90: CUBE and ROLLUP are not enforced as reserved keywords.
Compatibility-level setting of 100: CUBE and ROLLUP are reserved keywords within the GROUP BY clause.
Possibility of impact: Low

Compatibility-level setting of 90: The special attributes xsi:nil and xsi:type cannot be queried or modified by data manipulation language statements. This means that /e/@xsi:nil fails while /e/@* ignores the xsi:nil and xsi:type attributes. However, /e returns the xsi:nil and xsi:type attributes for consistency with SELECT xmlCol, even if xsi:nil = "false".
Compatibility-level setting of 100: The special attributes xsi:nil and xsi:type are stored as regular attributes and can be queried and modified. For example, executing the query SELECT x.query('a/b/@*') returns all attributes including xsi:nil and xsi:type. To exclude these types in the query, replace @* with @*[namespace-uri(.) != "insert xsi namespace uri" and not (local-name(.) = "type" or local-name(.) = "nil")].
Possibility of impact: Low

Compatibility-level setting of 90: The XML union and list types are not fully supported.
Compatibility-level setting of 100: The union and list types are fully supported, including the following functionality: union of list, union of union, and list of union.
Possibility of impact: Low

Compatibility-level setting of 90: The SET options required for an XQuery method are not validated when the method is contained in a view or inline table-valued function.
Compatibility-level setting of 100: The SET options required for an XQuery method are validated when the method is contained in a view or inline table-valued function. An error is raised if the SET options of the method are set incorrectly.
Possibility of impact: Low

Compatibility-level setting of 90: XML attribute values that contain end-of-line characters (carriage return and line feed) are not normalized according to the XML standard. That is, both characters are returned instead of a single line-feed character.
Compatibility-level setting of 100: XML attribute values that contain end-of-line characters (carriage return and line feed) are normalized according to the XML standard. That is, all line breaks in external parsed entities (including the document entity) are normalized on input by translating both the two-character sequence #xD #xA and any #xD that is not followed by #xA to a single #xA character.
Possibility of impact: Low

Compatibility-level setting of 90: See example E in the Examples section below.
Compatibility-level setting of 100: See example F in the Examples section below.
Possibility of impact: Low

Compatibility-level setting of 90: The ODBC function {fn CONVERT()} uses the default date format of the language. For some languages, the default format is YDM, which can result in conversion errors when CONVERT() is combined with other functions, such as {fn CURDATE()}, that expect a YMD format.
Compatibility-level setting of 100: The ODBC function {fn CONVERT()} uses style 121 (a language-independent YMD format) when converting to the ODBC data types SQL_TIMESTAMP, SQL_DATE, SQL_TIME, SQLDATE, SQL_TYPE_TIME, and SQL_TYPE_TIMESTAMP.
Possibility of impact: Low
Reserved Keywords
The compatibility setting also determines the keywords that are reserved by the Database Engine. The following
table shows the reserved keywords that are introduced by each of the compatibility levels.
COMPATIBILITY-LEVEL SETTING    RESERVED KEYWORDS
130                            To be determined.
120                            None.
110                            WITHIN GROUP, TRY_CONVERT, SEMANTICKEYPHRASETABLE, SEMANTICSIMILARITYDETAILSTABLE, SEMANTICSIMILARITYTABLE
100                            CUBE, MERGE, ROLLUP
90                             EXTERNAL, PIVOT, UNPIVOT, REVERT, TABLESAMPLE
At a given compatibility level, the reserved keywords include all of the keywords introduced at or below that level.
Thus, for instance, for applications at level 110, all of the keywords listed in the preceding table are reserved. At
the lower compatibility levels, level-100 keywords remain valid object names, but the level-110 language features
corresponding to those keywords are unavailable.
Once introduced, a keyword remains reserved. For example, the reserved keyword PIVOT, which was introduced
in compatibility level 90, is also reserved in levels 100, 110, and 120.
If an application uses an identifier that is reserved as a keyword for its compatibility level, the application will fail.
To work around this, enclose the identifier between either brackets ([]) or quotation marks (""); for example, to
upgrade an application that uses the identifier EXTERNAL to compatibility level 90, you could change the
identifier to either [EXTERNAL ] or "EXTERNAL".
For more information, see Reserved Keywords (Transact-SQL ).
Permissions
Requires ALTER permission on the database.
Examples
A. Changing the compatibility level
The following example changes the compatibility level of the AdventureWorks2012 database to 110, SQL
Server 2012 (11.x).
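The statement itself is missing from this extraction; a minimal sketch consistent with the surrounding text (the database name matches the example description):

```sql
ALTER DATABASE AdventureWorks2012
SET COMPATIBILITY_LEVEL = 110;
GO
```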
The following example returns the compatibility level of the current database.
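The query is also missing here; one way to read the level, using the sys.databases catalog view:

```sql
SELECT compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
```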
B. Ignoring the SET LANGUAGE statement except under compatibility level 120
Under compatibility levels lower than 120, the following query ignores the SET LANGUAGE statement when converting the date value to a string.
SET DATEFORMAT dmy;
DECLARE @t2 date = '12/5/2011' ;
SET LANGUAGE dutch;
SELECT CONVERT(varchar(11), @t2, 106);
C.
For compatibility-level setting of 110 or lower, recursive references on the right-hand side of an EXCEPT clause
create an infinite loop.
WITH
cte AS (SELECT * FROM (VALUES (1),(2),(3)) v (a)),
r AS (SELECT a FROM cte
      UNION ALL
      (SELECT a FROM cte EXCEPT SELECT a FROM r))
SELECT a
FROM r;
D.
This example shows the difference between styles 0 and 121. For more information about date and time styles,
see CAST and CONVERT (Transact-SQL ).
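The example code did not survive extraction; a sketch consistent with the description (table and column names are illustrative):

```sql
CREATE TABLE t1 (c1 time(7), c2 datetime2);
GO
INSERT t1 (c1, c2) VALUES (GETDATE(), GETDATE());
GO
-- Style 0 drops the fractional seconds; style 121 (the level-110 default)
-- preserves them.
SELECT CONVERT(nvarchar(16), c1, 0)   AS TimeStyle0,
       CONVERT(nvarchar(16), c1, 121) AS TimeStyle121,
       CONVERT(nvarchar(32), c2, 0)   AS Datetime2Style0,
       CONVERT(nvarchar(32), c2, 121) AS Datetime2Style121
FROM t1;
```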
E.
Variable assignment is allowed in a statement containing a top-level UNION operator, but returns unexpected
results. For example, in the following statements, local variable @v is assigned the value of the column
BusinessEntityID from the union of two tables. By definition, when the SELECT statement returns more than one
value, the variable is assigned the last value that is returned. In this case, the variable is correctly assigned the last value; however, the result set of the SELECT UNION statement is also returned.
ALTER DATABASE AdventureWorks2012
SET compatibility_level = 90;
GO
USE AdventureWorks2012;
GO
DECLARE @v int;
SELECT @v = BusinessEntityID FROM HumanResources.Employee
UNION ALL
SELECT @v = BusinessEntityID FROM HumanResources.EmployeeAddress;
SELECT @v;
F.
Variable assignment is not allowed in a statement containing a top-level UNION operator. Error 10734 is
returned. To resolve the error, rewrite the query as shown in the following example.
DECLARE @v int;
SELECT @v = BusinessEntityID FROM
(SELECT BusinessEntityID FROM HumanResources.Employee
UNION ALL
SELECT BusinessEntityID FROM HumanResources.EmployeeAddress) AS Test;
SELECT @v;
See Also
ALTER DATABASE (Transact-SQL )
Reserved Keywords (Transact-SQL )
CREATE DATABASE (SQL Server Transact-SQL )
DATABASEPROPERTYEX (Transact-SQL )
sys.databases (Transact-SQL )
sys.database_files (Transact-SQL )
View or Change the Compatibility Level of a Database
ALTER DATABASE (Transact-SQL) Database Mirroring
5/4/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature. Use Always On availability groups instead.
Controls database mirroring for a database. Values specified with the database mirroring options apply to both
copies of the database and to the database mirroring session as a whole. Only one <database_mirroring_option>
is permitted per ALTER DATABASE statement.
NOTE
We recommend that you configure database mirroring during off-peak hours because configuration can affect performance.
For ALTER DATABASE options, see ALTER DATABASE (Transact-SQL ). For ALTER DATABASE SET options, see
ALTER DATABASE SET Options (Transact-SQL ).
Transact-SQL Syntax Conventions
Syntax
ALTER DATABASE database_name
SET { <partner_option> | <witness_option> }
<partner_option> ::=
PARTNER { = 'partner_server'
| FAILOVER
| FORCE_SERVICE_ALLOW_DATA_LOSS
| OFF
| RESUME
| SAFETY { FULL | OFF }
| SUSPEND
| TIMEOUT integer
}
<witness_option> ::=
WITNESS { = 'witness_server'
| OFF
}
Arguments
IMPORTANT
A SET PARTNER or SET WITNESS command can complete successfully when entered, but fail later.
NOTE
ALTER DATABASE database mirroring options are not available for a contained database.
database_name
Is the name of the database to be modified.
PARTNER <partner_option>
Controls the database properties that define the failover partners of a database mirroring session and their
behavior. Some SET PARTNER options can be set on either partner; others are restricted to the principal server or
to the mirror server. For more information, see the individual PARTNER options that follow. A SET PARTNER
clause affects both copies of the database, regardless of the partner on which it is specified.
To execute a SET PARTNER statement, the STATE of the endpoints of both partners must be set to STARTED.
Note, also, that the ROLE of the database mirroring endpoint of each partner server instance must be set to either
PARTNER or ALL. For information about how to specify an endpoint, see Create a Database Mirroring Endpoint
for Windows Authentication (Transact-SQL ). To learn the role and state of the database mirroring endpoint of a
server instance, on that instance, use the following Transact-SQL statement:
<partner_option> ::=
NOTE
Only one <partner_option> is permitted per SET PARTNER clause.
IMPORTANT
If a session is set up by using the ALTER DATABASE statement instead of SQL Server Management Studio, the session is set
to full transaction safety by default (SAFETY is set to FULL) and runs in high-safety mode without automatic failover. To allow
automatic failover, configure a witness; to run in high-performance mode, turn off transaction safety (SAFETY OFF).
FAILOVER
Manually fails over the principal server to the mirror server. You can specify FAILOVER only on the principal
server. This option is valid only when the SAFETY setting is FULL (the default).
The FAILOVER option requires master as the database context.
FORCE_SERVICE_ALLOW_DATA_LOSS
Forces database service to the mirror database after the principal server fails with the database in an
unsynchronized state or in a synchronized state when automatic failover does not occur.
We strongly recommend that you force service only if the principal server is no longer running. Otherwise, some
clients might continue to access the original principal database instead of the new principal database.
FORCE_SERVICE_ALLOW_DATA_LOSS is available only on the mirror server and only under all the following
conditions:
The principal server is down.
WITNESS is set to OFF or the witness is connected to the mirror server.
Force service only if you are willing to risk losing some data in order to restore service to the database
immediately.
Forcing service suspends the session, temporarily preserving all the data in the original principal database.
Once the original principal is in service and able to communicate with the new principal server, the
database administrator can resume service. When the session resumes, any unsent log records and the
corresponding updates are lost.
OFF
Removes a database mirroring session and removes mirroring from the database. You can specify OFF on
either partner. For information about the impact of removing mirroring, see Removing Database Mirroring (SQL Server).
RESUME
Resumes a suspended database mirroring session. You can specify RESUME only on the principal server.
SAFETY { FULL | OFF }
Sets the level of transaction safety. You can specify SAFETY only on the principal server.
The default is FULL. With full safety, the database mirroring session runs synchronously (in high-safety
mode). If SAFETY is set to OFF, the database mirroring session runs asynchronously (in high-performance
mode).
The behavior of high-safety mode depends partly on the witness, as follows:
When safety is set to FULL and a witness is set for the session, the session runs in high-safety mode with
automatic failover. When the principal server is lost, the session automatically fails over if the database is
synchronized and the mirror server instance and witness are still connected to each other (that is, they have
quorum). For more information, see Quorum: How a Witness Affects Database Availability (Database
Mirroring).
If a witness is set for the session but is currently disconnected, the loss of the mirror server causes the
principal server to go down.
When safety is set to FULL and the witness is set to OFF, the session runs in high-safety mode without
automatic failover. If the mirror server instance goes down, the principal server instance is unaffected. If the
principal server instance goes down, you can force service (with possible data loss) to the mirror server
instance.
If SAFETY is set to OFF, the session runs in high-performance mode, and automatic failover and manual
failover are not supported. However, problems on the mirror do not affect the principal, and if the principal
server instance goes down, you can, if necessary, force service (with possible data loss) to the mirror server
instance—if WITNESS is set to OFF or the witness is currently connected to the mirror. For more
information on forcing service, see "FORCE_SERVICE_ALLOW_DATA_LOSS" earlier in this section.
IMPORTANT
High-performance mode is not intended to use a witness. However, whenever you set SAFETY to OFF, we strongly
recommend that you ensure that WITNESS is set to OFF.
SUSPEND
Pauses a database mirroring session.
You can specify SUSPEND on either partner.
TIMEOUT integer
Specifies the time-out period in seconds. The time-out period is the maximum time that a server instance waits to
receive a PING message from another instance in the mirroring session before considering that other instance to
be disconnected.
You can specify the TIMEOUT option only on the principal server. If you do not specify this option, by default, the
time period is 10 seconds. If you specify 5 or greater, the time-out period is set to the specified number of seconds.
If you specify a time-out value of 0 to 4 seconds, the time-out period is automatically set to 5 seconds.
IMPORTANT
We recommend that you keep the time-out period at 10 seconds or greater. Setting the value to less than 10 seconds
creates the possibility of a heavily loaded system missing PINGs and declaring a false failure.
NOTE
Database properties cannot be set on the witness.
<witness_option> ::=
NOTE
Only one <witness_option> is permitted per SET WITNESS clause.
Remarks
Examples
A. Creating a database mirroring session with a witness
Setting up database mirroring with a witness requires configuring security and preparing the mirror database, and
also using ALTER DATABASE to set the partners. For an example of the complete setup process, see Setting Up
Database Mirroring (SQL Server).
B. Manually failing over a database mirroring session
Manual failover can be initiated from either database mirroring partner. Before failing over, you should verify that
the server you believe to be the current principal server actually is the principal server. For example, for the
AdventureWorks2012 database, on that server instance that you think is the current principal server, execute the
following query:
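The query itself is absent from this extraction; a sketch using the sys.database_mirroring catalog view:

```sql
SELECT db.name, m.mirroring_role_desc
FROM sys.database_mirroring m
JOIN sys.databases db
    ON db.database_id = m.database_id
WHERE db.name = N'AdventureWorks2012';
```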
If the server instance is in fact the principal, the value of mirroring_role_desc is Principal . If this server instance
were the mirror server, the SELECT statement would return Mirror .
The following example assumes that the server is the current principal.
1. Manually fail over to the database mirroring partner:
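The failover statement is missing here; as noted under the FAILOVER option above, it must be run in the master database context:

```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012 SET PARTNER FAILOVER;
GO
```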
2. To verify the results of the failover on the new mirror, execute the following query:
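The verification query is also absent; rerunning the role query shows the swapped role (the value of mirroring_role_desc on this instance is now Mirror):

```sql
SELECT db.name, m.mirroring_role_desc
FROM sys.database_mirroring m
JOIN sys.databases db
    ON db.database_id = m.database_id
WHERE db.name = N'AdventureWorks2012';
```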
See Also
CREATE DATABASE (SQL Server Transact-SQL )
DATABASEPROPERTYEX (Transact-SQL )
sys.database_mirroring_witnesses (Transact-SQL )
ALTER DATABASE ENCRYPTION KEY (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters an encryption key and certificate that is used for transparently encrypting a database. For more information
about transparent database encryption, see Transparent Data Encryption (TDE ).
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
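The syntax block itself did not survive extraction; its shape, reconstructed from the Arguments section below:

```sql
ALTER DATABASE ENCRYPTION KEY
    REGENERATE WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
  | ENCRYPTION BY SERVER
    { CERTIFICATE Encryptor_Name | ASYMMETRIC KEY Encryptor_Name }
[ ; ]
```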
Arguments
REGENERATE WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
Specifies the encryption algorithm that is used for the encryption key.
ENCRYPTION BY SERVER CERTIFICATE Encryptor_Name
Specifies the name of the certificate used to encrypt the database encryption key.
ENCRYPTION BY SERVER ASYMMETRIC KEY Encryptor_Name
Specifies the name of the asymmetric key used to encrypt the database encryption key.
Remarks
The certificate or asymmetric key that is used to encrypt the database encryption key must be located in the
master system database.
When the database owner (dbo) is changed, the database encryption key does not have to be regenerated.
After a database encryption key has been modified twice, a log backup must be performed before the database
encryption key can be modified again.
Permissions
Requires CONTROL permission on the database and VIEW DEFINITION permission on the certificate or
asymmetric key that is used to encrypt the database encryption key.
Examples
The following example alters the database encryption key to use the AES_256 algorithm.
-- Uses AdventureWorks
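The example statement is missing here; a minimal sketch (the database name is illustrative):

```sql
USE AdventureWorks2012;
GO
ALTER DATABASE ENCRYPTION KEY
REGENERATE WITH ALGORITHM = AES_256;
GO
```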
See Also
Transparent Data Encryption (TDE )
SQL Server Encryption
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
ALTER DATABASE SET Options (Transact-SQL )
CREATE DATABASE ENCRYPTION KEY (Transact-SQL )
DROP DATABASE ENCRYPTION KEY (Transact-SQL )
sys.dm_database_encryption_keys (Transact-SQL )
ALTER DATABASE (Transact-SQL) File and Filegroup
Options
5/3/2018 • 18 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Modifies the files and filegroups associated with the database in SQL Server. Adds or removes files and filegroups
from a database, and changes the attributes of a database or its files and filegroups. For other ALTER DATABASE
options, see ALTER DATABASE (Transact-SQL ).
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
ALTER DATABASE database_name
{
<add_or_modify_files>
| <add_or_modify_filegroups>
}
[;]
<add_or_modify_files>::=
{
ADD FILE <filespec> [ ,...n ]
[ TO FILEGROUP { filegroup_name } ]
| ADD LOG FILE <filespec> [ ,...n ]
| REMOVE FILE logical_file_name
| MODIFY FILE <filespec>
}
<filespec>::=
(
NAME = logical_file_name
[ , NEWNAME = new_logical_name ]
[ , FILENAME = {'os_file_name' | 'filestream_path' | 'memory_optimized_data_path' } ]
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | GB | TB| % ] ]
[ , OFFLINE ]
)
<add_or_modify_filegroups>::=
{
| ADD FILEGROUP filegroup_name
[ CONTAINS FILESTREAM | CONTAINS MEMORY_OPTIMIZED_DATA ]
| REMOVE FILEGROUP filegroup_name
| MODIFY FILEGROUP filegroup_name
{ <filegroup_updatability_option>
| DEFAULT
| NAME = new_filegroup_name
| { AUTOGROW_SINGLE_FILE | AUTOGROW_ALL_FILES }
}
}
<filegroup_updatability_option>::=
{
{ READONLY | READWRITE }
| { READ_ONLY | READ_WRITE }
}
Arguments
<add_or_modify_files>::=
Specifies the file to be added, removed, or modified.
database_name
Is the name of the database to be modified.
ADD FILE
Adds a file to the database.
TO FILEGROUP { filegroup_name }
Specifies the filegroup to which to add the specified file. To display the current filegroups and which filegroup is
the current default, use the sys.filegroups catalog view.
ADD LOG FILE
Adds a log file to the specified database.
REMOVE FILE logical_file_name
Removes the logical file description from an instance of SQL Server and deletes the physical file. The file cannot
be removed unless it is empty.
logical_file_name
Is the logical name used in SQL Server when referencing the file.
WARNING
Removing a database file that has FILE_SNAPSHOT backups associated with it will succeed, but any associated snapshots will
not be deleted to avoid invalidating the backups referring to the database file. The file will be truncated, but will not be
physically deleted in order to keep the FILE_SNAPSHOT backups intact. For more information, see SQL Server Backup and
Restore with Microsoft Azure Blob Storage Service. Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server
2017).
MODIFY FILE
Specifies the file that should be modified. Only one <filespec> property can be changed at a time. NAME must
always be specified in the <filespec> to identify the file to be modified. If SIZE is specified, the new size must be
larger than the current file size.
To modify the logical name of a data file or log file, specify the logical file name to be renamed in the NAME clause,
and specify the new logical name for the file in the NEWNAME clause. For example:
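The example code is missing from this extraction; a sketch with illustrative database and logical file names:

```sql
ALTER DATABASE AdventureWorks2012
MODIFY FILE (NAME = Test1dat2, NEWNAME = Test1dat3);
```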
To move a data file or log file to a new location, specify the current logical file name in the NAME clause and specify
the new path and operating system file name in the FILENAME clause. For example:
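The example is absent here; a sketch with an illustrative path and logical name (the physical file must be moved to the new location before the database is next started):

```sql
ALTER DATABASE AdventureWorks2012
MODIFY FILE (NAME = Test1dat2,
             FILENAME = N'C:\NewLoc\t1dat2.ndf');
```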
When you move a full-text catalog, specify only the new path in the FILENAME clause. Do not specify the
operating-system file name.
For more information, see Move Database Files.
For a FILESTREAM filegroup, NAME can be modified online. FILENAME can be modified online; however, the change does not take effect until after the container is physically relocated and the server is shut down and then restarted.
You can set a FILESTREAM file to OFFLINE. When a FILESTREAM file is offline, its parent filegroup will be
internally marked as offline; therefore, all access to FILESTREAM data within that filegroup will fail.
NOTE
<add_or_modify_files> options are not available in a Contained Database.
<filespec>::=
Controls the file properties.
NAME logical_file_name
Specifies the logical name of the file.
logical_file_name
Is the logical name used in an instance of SQL Server when referencing the file.
NEWNAME new_logical_file_name
Specifies a new logical name for the file.
new_logical_file_name
Is the name to replace the existing logical file name. The name must be unique within the database and comply
with the rules for identifiers. The name can be a character or Unicode constant, a regular identifier, or a delimited
identifier.
FILENAME { 'os_file_name' | 'filestream_path' | 'memory_optimized_data_path'}
Specifies the operating system (physical) file name.
' os_file_name '
For a standard (ROWS ) filegroup, this is the path and file name that is used by the operating system when you
create the file. The file must reside on the server on which SQL Server is installed. The specified path must exist
before executing the ALTER DATABASE statement.
SIZE, MAXSIZE, and FILEGROWTH parameters cannot be set when a UNC path is specified for the file.
NOTE
System databases cannot reside in UNC share directories.
Data files should not be put on compressed file systems unless the files are read-only secondary files, or if the
database is read-only. Log files should never be put on compressed file systems.
If the file is on a raw partition, os_file_name must specify only the drive letter of an existing raw partition. Only one
file can be put on each raw partition.
' filestream_path '
For a FILESTREAM filegroup, FILENAME refers to a path where FILESTREAM data will be stored. The path up to
the last folder must exist, and the last folder must not exist. For example, if you specify the path
C:\MyFiles\MyFilestreamData, C:\MyFiles must exist before you run ALTER DATABASE, but the
MyFilestreamData folder must not exist.
The SIZE and FILEGROWTH properties do not apply to a FILESTREAM filegroup.
' memory_optimized_data_path '
For a memory-optimized filegroup, FILENAME refers to a path where memory-optimized data will be stored. The
path up to the last folder must exist, and the last folder must not exist. For example, if you specify the path
C:\MyFiles\MyData, C:\MyFiles must exist before you run ALTER DATABASE, but the MyData folder must not
exist.
The filegroup and file ( <filespec> ) must be created in the same statement.
The SIZE, MAXSIZE, and FILEGROWTH properties do not apply to a memory-optimized filegroup.
SIZE size
Specifies the file size. SIZE does not apply to FILESTREAM filegroups.
size
Is the size of the file.
When specified with ADD FILE, size is the initial size for the file. When specified with MODIFY FILE, size is the
new size for the file, and must be larger than the current file size.
When size is not supplied for the primary file, SQL Server uses the size of the primary file in the model database. When a secondary data file or log file is specified but size is not specified for the file, the Database Engine makes the file 1 MB.
The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes, or terabytes. The default is
MB. Specify a whole number and do not include a decimal. To specify a fraction of a megabyte, convert the value
to kilobytes by multiplying the number by 1024. For example, specify 1536 KB instead of 1.5 MB (1.5 x 1024 =
1536).
MAXSIZE { max_size| UNLIMITED }
Specifies the maximum file size to which the file can grow.
max_size
Is the maximum file size. The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes,
or terabytes. The default is MB. Specify a whole number and do not include a decimal. If max_size is not specified,
the file size will increase until the disk is full.
UNLIMITED
Specifies that the file grows until the disk is full. In SQL Server, a log file specified with unlimited growth has a
maximum size of 2 TB, and a data file has a maximum size of 16 TB. There is no maximum size when this option is
specified for a FILESTREAM container. It continues to grow until the disk is full.
FILEGROWTH growth_increment
Specifies the automatic growth increment of the file. The FILEGROWTH setting for a file cannot exceed the
MAXSIZE setting. FILEGROWTH does not apply to FILESTREAM filegroups.
growth_increment
Is the amount of space added to the file every time new space is required.
The value can be specified in MB, KB, GB, TB, or percent (%). If a number is specified without an MB, KB, or %
suffix, the default is MB. When % is specified, the growth increment size is the specified percentage of the size of
the file at the time the increment occurs. The size specified is rounded to the nearest 64 KB.
A value of 0 indicates that automatic growth is set to off and no additional space is allowed.
If FILEGROWTH is not specified, the default values are:
Starting with SQL Server 2016 (13.x): data files 64 MB; log files 64 MB.
Starting with SQL Server 2005: data files 1 MB; log files 10%.
OFFLINE
Sets the file offline and makes all objects in the filegroup inaccessible.
Caution
Use this option only when the file is corrupted and can be restored. A file set to OFFLINE can only be set online by
restoring the file from backup. For more information about restoring a single file, see RESTORE (Transact-SQL ).
NOTE
<filespec> options are not available in a Contained Database.
<add_or_modify_filegroups>::=
Add, modify, or remove a filegroup from the database.
ADD FILEGROUP filegroup_name
Adds a filegroup to the database.
CONTAINS FILESTREAM
Specifies that the filegroup stores FILESTREAM binary large objects (BLOBs) in the file system.
CONTAINS MEMORY_OPTIMIZED_DATA
Applies to: SQL Server (SQL Server 2014 (12.x) through SQL Server 2017)
Specifies that the filegroup stores memory optimized data in the file system. For more information, see In-
Memory OLTP (In-Memory Optimization). Only one MEMORY_OPTIMIZED_DATA filegroup is allowed per
database. For creating memory optimized tables, the filegroup cannot be empty. There must be at least one file.
In the file's FILENAME clause, the file name refers to a path. The path up to the last folder must exist, and the last folder must not exist.
The following example creates a filegroup that is added to a database named xtp_db, and adds a file to the
filegroup. The filegroup stores memory_optimized data.
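The example statements are not reproduced above. A minimal sketch of such a pair of statements (the filegroup name xtp_fg and the container path are assumptions, not from the source):

```sql
ALTER DATABASE xtp_db
    ADD FILEGROUP xtp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
GO
-- FILENAME is a folder path; the last folder (mod) must not already exist.
ALTER DATABASE xtp_db
    ADD FILE (NAME = 'xtp_mod', FILENAME = 'D:\data\xtp_db\mod')
    TO FILEGROUP xtp_fg;
GO
```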
NOTE
Unless the FILESTREAM Garbage Collector has removed all the files from a FILESTREAM container, the ALTER DATABASE
REMOVE FILE operation to remove a FILESTREAM container will fail and return an error. See the "Remove FILESTREAM
Container" section in Remarks later in this topic.
NOTE
The keyword READONLY will be removed in a future version of Microsoft SQL Server. Avoid using READONLY in new
development work, and plan to modify applications that currently use READONLY. Use READ_ONLY instead.
READ_WRITE | READWRITE
Specifies the filegroup is READ_WRITE. Updates are enabled for the objects in the filegroup. To change this state, you
must have exclusive access to the database. For more information, see the SINGLE_USER clause.
NOTE
The keyword READWRITE will be removed in a future version of Microsoft SQL Server. Avoid using READWRITE in new
development work, and plan to modify applications that currently use READWRITE to use READ_WRITE instead.
The status of these options can be determined by examining the is_read_only column in the sys.databases
catalog view or the Updateability property of the DATABASEPROPERTYEX function.
Remarks
To decrease the size of a database, use DBCC SHRINKDATABASE.
You cannot add or remove a file while a BACKUP statement is running.
A maximum of 32,767 files and 32,767 filegroups can be specified for each database.
Starting with SQL Server 2005, the state of a database file (for example, online or offline), is maintained
independently from the state of the database. For more information, see File States.
The state of the files within a filegroup determines the availability of the whole filegroup. For a filegroup to be
available, all files within the filegroup must be online.
If a filegroup is offline, any attempt to access the filegroup by a SQL statement fails with an error. When you
build query plans for SELECT statements, the query optimizer avoids nonclustered indexes and indexed views
that reside in offline filegroups. This enables these statements to succeed. However, if the offline filegroup
contains the heap or clustered index of the target table, the SELECT statements fail. Additionally, any INSERT ,
UPDATE , or DELETE statement that modifies a table with any index in an offline filegroup will fail.
Moving Files
You can move system or user-defined data and log files by specifying the new location in FILENAME. This may be
useful in the following scenarios:
Failure recovery. For example, the database is in suspect mode or shutdown caused by hardware failure.
Planned relocation.
Relocation for scheduled disk maintenance.
For more information, see Move Database Files.
Initializing Files
By default, data and log files are initialized by filling the files with zeros when you perform one of the following
operations:
Create a database.
Add files to an existing database.
Increase the size of an existing file.
Restore a database or filegroup.
Data files can be initialized instantaneously. This enables fast execution of these file operations. For more
information, see Database File Initialization.
Examples
A. Adding a file to a database
The following example adds a 5-MB data file to the AdventureWorks2012 database.
USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = Test1dat2,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat2.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
);
GO
B. Adding a filegroup with two files to a database
The following example adds the filegroup Test1FG1 to the AdventureWorks2012 database, and then adds two
5-MB data files to the filegroup.
USE master
GO
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = test1dat3,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat3.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1dat4,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat4.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO
D. Removing a file from a database
The following example removes the file test1dat4 added in example B.
USE master;
GO
ALTER DATABASE AdventureWorks2012
REMOVE FILE test1dat4;
GO
E. Modifying a file
The following example increases the size of one of the files added in example B.
ALTER DATABASE with MODIFY FILE can only increase the size of a file. To make a file smaller, use DBCC
SHRINKFILE.
USE master;
GO
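The MODIFY FILE statement itself is not reproduced above. A sketch, assuming it grows the test1dat3 file from example B to 20 MB (the target size is an assumption):

```sql
-- SIZE must be larger than the file's current size.
ALTER DATABASE AdventureWorks2012
MODIFY FILE (NAME = test1dat3, SIZE = 20MB);
GO
```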
The next example shrinks the size of a data file to 100 MB, and then sets the file size to that amount.
USE AdventureWorks2012;
GO
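The shrink statement itself is not reproduced above. A sketch, assuming the target is the Test1dat2 file from example A:

```sql
-- Shrink the data file to a target size of 100 MB.
DBCC SHRINKFILE (Test1dat2, 100);
GO
```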
USE master;
GO
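The MODIFY FILE statement that sets the size is not reproduced above. A sketch (file name Test1dat2 from example A; the size is assumed to match the shrink target):

```sql
ALTER DATABASE AdventureWorks2012
MODIFY FILE (NAME = Test1dat2, SIZE = 100MB);
GO
```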
F. Moving a file to a new location
The following example moves the Test1dat2 file added in example A to a new location.
NOTE
You must physically move the file to the new directory before running this example. Afterward, stop and start the instance of
SQL Server or take the AdventureWorks2012 database OFFLINE and then ONLINE to implement the change.
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILE
(
NAME = Test1dat2,
FILENAME = N'c:\t1dat2.ndf'
);
GO
G. Moving tempdb to a new location
The following example moves the tempdb data and log files to a new location. Because tempdb is re-created each
time the instance of SQL Server starts, the change takes effect when the instance is restarted.
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');
GO
After the instance has been restarted, delete the tempdb.mdf and templog.ldf files from their original location.
H. Making a filegroup the default
The following example makes the Test1FG1 filegroup created in example B the default filegroup. Then, the default
filegroup is reset to the PRIMARY filegroup. Note that PRIMARY must be delimited by brackets or quotation marks.
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP Test1FG1 DEFAULT;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP [PRIMARY] DEFAULT;
GO
J. Changing a filegroup so that all files in the filegroup grow when one file meets the autogrow threshold
The following example generates the required ALTER DATABASE statements to modify read-write filegroups with the
AUTOGROW_ALL_FILES setting.
--Generate ALTER DATABASE ... MODIFY FILEGROUP statements
--so that all read-write filegroups grow at the same time.
SET NOCOUNT ON;
DECLARE @dbid int, @dbname sysname, @fgname sysname, @query nvarchar(max);
CREATE TABLE #tmpdbs ([dbid] int, [dbname] sysname, isdone bit);
CREATE TABLE #tmpfgs ([dbid] int, [dbname] sysname, fgname sysname, isdone bit);
--Collect the online, read-write user databases.
INSERT INTO #tmpdbs
SELECT database_id, [name], 0 FROM sys.databases
WHERE [state] = 0 AND is_read_only = 0 AND database_id > 4;
--Collect the read-write filegroups of each database.
WHILE EXISTS (SELECT 1 FROM #tmpdbs WHERE isdone = 0)
BEGIN
SELECT TOP 1 @dbid = [dbid], @dbname = [dbname] FROM #tmpdbs WHERE isdone = 0;
SET @query = 'SELECT ' + CAST(@dbid AS NVARCHAR) + ', ''' + @dbname + ''', [name], 0 FROM [' + @dbname +
'].sys.filegroups WHERE [type] = ''FG'' AND is_read_only = 0;'
INSERT INTO #tmpfgs
EXEC (@query)
UPDATE #tmpdbs
SET isdone = 1
WHERE [dbid] = @dbid
END;
--Print one ALTER DATABASE ... MODIFY FILEGROUP statement per filegroup.
WHILE EXISTS (SELECT 1 FROM #tmpfgs WHERE isdone = 0)
BEGIN
SELECT TOP 1 @dbid = [dbid], @dbname = [dbname], @fgname = fgname FROM #tmpfgs WHERE isdone = 0;
SET @query = 'ALTER DATABASE [' + @dbname + '] MODIFY FILEGROUP [' + @fgname + '] AUTOGROW_ALL_FILES;'
PRINT @query
UPDATE #tmpfgs
SET isdone = 1
WHERE [dbid] = @dbid AND fgname = @fgname
END;
DROP TABLE #tmpdbs;
DROP TABLE #tmpfgs;
GO
See Also
CREATE DATABASE (SQL Server Transact-SQL)
DATABASEPROPERTYEX (Transact-SQL)
DROP DATABASE (Transact-SQL)
sp_spaceused (Transact-SQL)
sys.databases (Transact-SQL)
sys.database_files (Transact-SQL)
sys.data_spaces (Transact-SQL)
sys.filegroups (Transact-SQL)
sys.master_files (Transact-SQL)
Binary Large Object (Blob) Data (SQL Server)
DBCC SHRINKFILE (Transact-SQL)
sp_filestream_force_garbage_collection (Transact-SQL)
Database File Initialization
ALTER DATABASE (Transact-SQL) SET HADR
5/4/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This topic contains the ALTER DATABASE syntax for setting Always On availability groups options on a
secondary database. Only one SET HADR option is permitted per ALTER DATABASE statement. These options
are supported only on secondary replicas.
Transact-SQL Syntax Conventions
Syntax
ALTER DATABASE database_name
SET HADR
{
{ AVAILABILITY GROUP = group_name | OFF }
| { SUSPEND | RESUME }
}
[;]
Arguments
database_name
Is the name of the secondary database to be modified.
SET HADR
Executes the specified Always On availability groups command on the specified database.
{ AVAILABILITY GROUP = group_name | OFF }
Joins or removes the availability database from the specified availability group, as follows:
group_name
Joins the specified database on the secondary replica that is hosted by the server instance on which you execute
the command to the availability group specified by group_name.
The prerequisites for this operation are as follows:
The database must already have been added to the availability group on the primary replica.
The primary replica must be active. For information about how to troubleshoot an inactive primary replica,
see Troubleshooting Always On Availability Groups Configuration (SQL Server).
The primary replica must be online, and the secondary replica must be connected to the primary replica.
The secondary database must have been restored using WITH NORECOVERY from recent database and
log backups of the primary database, ending with a log backup that is recent enough to permit the
secondary database to catch up to the primary database.
NOTE
To add a database to the availability group, connect to the server instance that hosts the primary replica, and use
the ALTER AVAILABILITY GROUP group_name ADD DATABASE database_name statement.
For more information, see Join a Secondary Database to an Availability Group (SQL Server).
OFF
Removes the specified secondary database from the availability group.
Removing a secondary database can be useful if it has fallen far behind the primary database, and you do
not want to wait for the secondary database to catch up. After removing the secondary database, you can
update it by restoring a sequence of backups ending with a recent log backup (using RESTORE … WITH
NORECOVERY ).
IMPORTANT
To completely remove an availability database from an availability group, connect to the server instance that hosts the
primary replica, and use the ALTER AVAILABILITY GROUP group_name REMOVE DATABASE availability_database_name
statement. For more information, see Remove a Primary Database from an Availability Group (SQL Server).
SUSPEND
Suspends data movement on a secondary database. A SUSPEND command returns as soon as it has been
accepted by the replica that hosts the target database, but actually suspending the database occurs
asynchronously.
The scope of the impact depends on where you execute the ALTER DATABASE statement:
If you suspend a secondary database on a secondary replica, only the local secondary database is
suspended. Existing connections on the readable secondary remain usable. New connections to the
suspended database on the readable secondary are not allowed until data movement is resumed.
If you suspend a database on the primary replica, data movement is suspended to the corresponding
secondary databases on every secondary replica. Existing connections on a readable secondary remain
usable, and new read-intent connections will not connect to readable secondary replicas.
When data movement is suspended due to a forced manual failover, connections to the new secondary
replica are not allowed while data movement is suspended.
When a database on a secondary replica is suspended, both the database and replica become
unsynchronized and are marked as NOT SYNCHRONIZED.
IMPORTANT
While a secondary database is suspended, the send queue of the corresponding primary database will accumulate unsent
transaction log records. Connections to the secondary replica return data that was available at the time the data movement
was suspended.
NOTE
Suspending and resuming an Always On secondary database does not directly affect the availability of the primary
database, though suspending a secondary database can impact redundancy and failover capabilities for the primary
database, until the suspended secondary database is resumed. This is in contrast to database mirroring, where the mirroring
state is suspended on both the mirror database and the principal database until mirroring is resumed. Suspending an
Always On primary database suspends data movement on all the corresponding secondary databases, and redundancy and
failover capabilities cease for that database until the primary database is resumed.
Database States
When a secondary database is joined to an availability group, the local secondary replica changes the state of that
secondary database from RESTORING to ONLINE. If a secondary database is removed from the availability
group, it is set back to the RESTORING state by the local secondary replica. This allows you to apply subsequent
log backups from the primary database to that secondary database.
Restrictions
Execute ALTER DATABASE statements outside of both transactions and batches.
Security
Permissions
Requires ALTER permission on the database. Joining a database to an availability group requires membership in
the db_owner fixed database role.
Examples
The following example joins the secondary database, AccountsDb1 , to the local secondary replica of the
AccountsAG availability group.
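The statement itself is not reproduced above. A sketch, following the syntax block earlier in this topic (run on the server instance that hosts the secondary replica):

```sql
-- Join AccountsDb1 to the AccountsAG availability group.
ALTER DATABASE AccountsDb1 SET HADR AVAILABILITY GROUP = AccountsAG;
```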
See Also
ALTER DATABASE (Transact-SQL)
ALTER AVAILABILITY GROUP (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
Overview of Always On Availability Groups (SQL Server)
Troubleshoot Always On Availability Groups Configuration (SQL Server)
ALTER DATABASE SCOPED CREDENTIAL (Transact-
SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Changes the properties of a database scoped credential.
Transact-SQL Syntax Conventions
Syntax
ALTER DATABASE SCOPED CREDENTIAL credential_name WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]
Arguments
credential_name
Specifies the name of the database scoped credential that is being altered.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server. To import a file from Azure Blob
storage, the identity name must be SHARED ACCESS SIGNATURE. For more information about shared access
signatures, see Using Shared Access Signatures (SAS).
SECRET ='secret'
Specifies the secret required for outgoing authentication. secret is required to import a file from Azure Blob
storage. secret may be optional for other purposes.
WARNING
The SAS key value might begin with a '?' (question mark). When you use the SAS key, you must remove the leading '?'.
Otherwise, the credential might not work.
Remarks
When a database scoped credential is changed, the values of both identity_name and secret are reset. If the
optional SECRET argument is not specified, the value of the stored secret will be set to NULL.
The secret is encrypted by using the service master key. If the service master key is regenerated, the secret is
reencrypted by using the new service master key.
Information about database scoped credentials is visible in the sys.database_scoped_credentials catalog view.
Permissions
Requires ALTER permission on the credential.
Examples
A. Changing the password of a database scoped credential
The following example changes the secret stored in a database scoped credential called Saddles . The database
scoped credential contains the Windows login RettigB and its password. The new password is added to the
database scoped credential using the SECRET clause.
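The statement itself is not reproduced above. A sketch, with a placeholder for the new password (the secret value is not in the source):

```sql
ALTER DATABASE SCOPED CREDENTIAL Saddles
WITH IDENTITY = 'RettigB', SECRET = '<new password>';
```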
See Also
Credentials (Database Engine)
CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL)
DROP DATABASE SCOPED CREDENTIAL (Transact-SQL)
sys.database_scoped_credentials
CREATE CREDENTIAL (Transact-SQL)
sys.credentials (Transact-SQL)
ALTER DATABASE SCOPED CONFIGURATION
(Transact-SQL)
5/16/2018 • 13 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This statement enables several database configuration settings at the individual database level. This statement is
available in Azure SQL Database and in SQL Server beginning with SQL Server 2016 (13.x). Those settings are:
Clear procedure cache.
Set the MAXDOP parameter to an arbitrary value (1, 2, ...) for the primary database based on what works best
for that particular database, and set a different value (for example, 0) for all secondary databases used (such as
for reporting queries).
Set the query optimizer cardinality estimation model independent of the database compatibility level.
Enable or disable parameter sniffing at the database level.
Enable or disable query optimization hotfixes at the database level.
Enable or disable the identity cache at the database level.
Enable or disable a compiled plan stub to be stored in cache when a batch is compiled for the first time.
Enable or disable collection of execution statistics for natively compiled T-SQL modules.
Enable or disable online by default options for DDL statements that support the ONLINE= syntax.
Enable or disable resumable by default options for DDL statements that support the RESUMABLE= syntax.
Transact-SQL Syntax Conventions
Syntax
ALTER DATABASE SCOPED CONFIGURATION
{
    { [ FOR SECONDARY ] SET <set_options> }
  | CLEAR PROCEDURE_CACHE
  | SET < set_options >
}
[;]
Arguments
FOR SECONDARY
Specifies the settings for secondary databases (all secondary databases must have the identical values).
MAXDOP = {<value> | PRIMARY }
<value>
Specifies the default MAXDOP setting that should be used for statements. 0 is the default value and indicates that
the server configuration will be used instead. The MAXDOP at the database scope overrides (unless it is set to 0)
the max degree of parallelism set at the server level by sp_configure. Query hints can still override the DB
scoped MAXDOP in order to tune specific queries that need a different setting. All these settings are limited by the
MAXDOP set for the Workload Group.
You can use the max degree of parallelism option to limit the number of processors to use in parallel plan
execution. SQL Server considers parallel execution plans for queries, index data definition language (DDL )
operations, parallel insert, online alter column, parallel stats collection, and static and keyset-driven cursor
population.
To set this option at the instance level, see Configure the max degree of parallelism Server Configuration Option.
TIP
To accomplish this at the query level, add the MAXDOP query hint.
PRIMARY
Can only be set for the secondaries, while the database is on the primary, and indicates that the configuration will
be the one set for the primary. If the configuration for the primary changes, the value on the secondaries will
change accordingly without the need to set the secondaries value explicitly. PRIMARY is the default setting for the
secondaries.
LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY }
Enables you to set the query optimizer cardinality estimation model to that of SQL Server 2012 (11.x) and earlier
versions, independent of the compatibility level of the database. The default is OFF, which sets the query optimizer
cardinality estimation model based on the compatibility level of the database. Setting this to ON is equivalent to
enabling Trace Flag 9481.
TIP
To accomplish this at the query level, add the QUERYTRACEON query hint. Starting with SQL Server 2016 (13.x) SP1, to
accomplish this at the query level, add the USE HINT query hint instead of using the trace flag.
PRIMARY
This value is only valid on secondaries while the database is on the primary, and specifies that the query optimizer
cardinality estimation model setting on all secondaries will be the value set for the primary. If the configuration on
the primary for the query optimizer cardinality estimation model changes, the value on the secondaries will change
accordingly. PRIMARY is the default setting for the secondaries.
PARAMETER_SNIFFING = { ON | OFF | PRIMARY }
Enables or disables parameter sniffing. The default is ON. Setting this to OFF is equivalent to enabling Trace Flag 4136.
TIP
To accomplish this at the query level, see the OPTIMIZE FOR UNKNOWN query hint. Starting with SQL Server 2016 (13.x)
SP1, to accomplish this at the query level, the USE HINT query hint is also available.
PRIMARY
This value is only valid on secondaries while the database is on the primary, and specifies that the value for this
setting on all secondaries will be the value set for the primary. If the configuration on the primary for using
parameter sniffing changes, the value on the secondaries will change accordingly without the need to set the
secondaries value explicitly. This is the default setting for the secondaries.
QUERY_OPTIMIZER_HOTFIXES = { ON | OFF | PRIMARY }
Enables or disables query optimization hotfixes regardless of the compatibility level of the database. The default is
OFF, which disables query optimization hotfixes that were released after the highest available compatibility level
was introduced for a specific version (post-RTM ). Setting this to ON is equivalent to enabling Trace Flag 4199.
TIP
To accomplish this at the query level, add the QUERYTRACEON query hint. Starting with SQL Server 2016 (13.x) SP1, to
accomplish this at the query level, add the USE HINT query hint instead of using the trace flag.
PRIMARY
This value is only valid on secondaries while the database is on the primary, and specifies that the value for this
setting on all secondaries is the value set for the primary. If the configuration for the primary changes, the value on
the secondaries changes accordingly without the need to set the secondaries value explicitly. This is the default
setting for the secondaries.
CLEAR PROCEDURE_CACHE
Clears the procedure (plan) cache for the database. This can be executed both on the primary and the secondaries.
IDENTITY_CACHE = { ON | OFF }
Applies to: SQL Server 2017 (14.x) and Azure SQL Database
Enables or disables identity cache at the database level. The default is ON. Identity caching is used to improve
INSERT performance on tables with identity columns. To avoid gaps in the values of an identity column in cases
where the server restarts unexpectedly or fails over to a secondary server, disable the IDENTITY_CACHE option.
This option is similar to the existing Trace Flag 272, except that it can be set at the database level rather than only at
the server level.
NOTE
This option can only be set for the PRIMARY. For more information, see identity columns.
OPTIMIZE_FOR_AD_HOC_WORKLOADS = { ON | OFF }
Applies to: Azure SQL Database
Enables or disables a compiled plan stub to be stored in cache when a batch is compiled for the first time. The
default is OFF. Once the database scoped configuration OPTIMIZE_FOR_AD_HOC_WORKLOADS is enabled for
a database, a compiled plan stub will be stored in cache when a batch is compiled for the first time. Plan stubs have
a smaller memory footprint compared to the size of the full compiled plan. If a batch is compiled or executed again,
the compiled plan stub will be removed and replaced with a full compiled plan.
XTP_PROCEDURE_EXECUTION_STATISTICS = { ON | OFF }
Applies to: Azure SQL Database
Enables or disables collection of execution statistics at the module-level for natively compiled T-SQL modules in
the current database. The default is OFF. The execution statistics are reflected in sys.dm_exec_procedure_stats.
Module-level execution statistics for natively compiled T-SQL modules are collected if either this option is ON, or
if statistics collection is enabled through sp_xtp_control_proc_exec_stats.
XTP_QUERY_EXECUTION_STATISTICS = { ON | OFF }
Applies to: Azure SQL Database
Enables or disables collection of execution statistics at the statement-level for natively compiled T-SQL modules in
the current database. The default is OFF. The execution statistics are reflected in sys.dm_exec_query_stats and in
Query Store.
Statement-level execution statistics for natively compiled T-SQL modules are collected if either this option is ON,
or if statistics collection is enabled through sp_xtp_control_query_exec_stats.
For more details about performance monitoring of natively-compiled T-SQL modules see Monitoring
Performance of Natively Compiled Stored Procedures.
ELEVATE_ONLINE = { OFF | WHEN_SUPPORTED | FAIL_UNSUPPORTED }
Applies to: Azure SQL Database (feature is in public preview)
Allows you to select options to cause the engine to automatically elevate supported operations to online. The
default is OFF, which means operations will not be elevated to online unless specified in the statement.
sys.database_scoped_configurations reflects the current value of ELEVATE_ONLINE. These options will only apply
to operations that are generally supported for online.
FAIL_UNSUPPORTED
This value elevates all supported DDL operations to ONLINE. Operations that do not support online execution will
fail and throw a warning.
WHEN_SUPPORTED
This value elevates operations that support ONLINE. Operations that do not support online will be run offline.
NOTE
You can override the default setting by submitting a statement with the ONLINE option specified.
Permissions
Requires ALTER ANY DATABASE SCOPED CONFIGURATION permission on the database. This permission can
be granted by a user with CONTROL permission on a database.
General Remarks
While you can configure secondary databases to have different scoped configuration settings from their primary,
all secondary databases use the same configuration. Different settings cannot be configured for individual
secondaries.
Executing this statement clears the procedure cache in the current database, which means that all queries have to
recompile.
For queries using three-part names, the set options of the current database connection are honored, except for
SQL modules (such as procedures, functions, and triggers) that are compiled in another database context; those
use the options of the database in which they reside.
The ALTER_DATABASE_SCOPED_CONFIGURATION event is added as a DDL event that can be used to fire a
DDL trigger. This is a child of the ALTER_DATABASE_EVENTS trigger group.
Database scoped configuration settings will be carried over with the database. This means that when a given
database is restored or attached, the existing configuration settings remain.
Metadata
The sys.database_scoped_configurations catalog view returns the current values of these settings.
Examples
These examples demonstrate the use of ALTER DATABASE SCOPED CONFIGURATION.
A. Grant Permission
This example grants the permission required to execute ALTER DATABASE SCOPED CONFIGURATION to the
user [Joe].
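The statement itself is not reproduced above. A sketch of the GRANT statement (run in the target database):

```sql
GRANT ALTER ANY DATABASE SCOPED CONFIGURATION TO [Joe];
```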
B. Set MAXDOP
This example sets MAXDOP = 1 for a primary database and MAXDOP = 4 for a secondary database in a geo-
replication scenario.
This example sets MAXDOP for a secondary database to be the same as it is set for its primary database in a geo-
replication scenario.
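Sketches of these statements, following the syntax block earlier in this topic (each runs in the context of the primary database):

```sql
-- Primary database: MAXDOP = 1; all secondary databases: MAXDOP = 4.
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 1;
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = 4;

-- Secondaries follow whatever value the primary uses (the default).
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = PRIMARY;
```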
C. Set LEGACY_CARDINALITY_ESTIMATION
This example sets LEGACY_CARDINALITY_ESTIMATION to ON for a secondary database in a geo-replication
scenario.
This example sets LEGACY_CARDINALITY_ESTIMATION for a secondary database as it is for its primary
database in a geo-replication scenario.
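Sketches of these statements, following the syntax block earlier in this topic:

```sql
-- All secondary databases use the legacy cardinality estimator.
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Secondaries follow the primary's setting (the default).
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET LEGACY_CARDINALITY_ESTIMATION = PRIMARY;
```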
D. Set PARAMETER_SNIFFING
This example sets PARAMETER_SNIFFING to OFF for a primary database in a geo-replication scenario.
This example sets PARAMETER_SNIFFING to OFF for a secondary database in a geo-replication scenario.
This example sets PARAMETER_SNIFFING for a secondary database as it is on the primary database in a geo-
replication scenario.
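Sketches of these statements, following the syntax block earlier in this topic:

```sql
-- Disable parameter sniffing on the primary database.
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;

-- Disable parameter sniffing on all secondary databases.
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET PARAMETER_SNIFFING = OFF;

-- Secondaries follow the primary's setting (the default).
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET PARAMETER_SNIFFING = PRIMARY;
```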
E. Set QUERY_OPTIMIZER_HOTFIXES
Set QUERY_OPTIMIZER_HOTFIXES to ON for a primary database in a geo-replication scenario.
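A sketch of the statement, following the syntax block earlier in this topic:

```sql
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;
```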
G. Set IDENTITY_CACHE
Applies to: SQL Server 2017 (14.x) and SQL Database (feature is in public preview)
This example disables the identity cache.
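A sketch of the statement, following the syntax described earlier for this option:

```sql
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;
```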
H. Set OPTIMIZE_FOR_AD_HOC_WORKLOADS
Applies to: SQL Database
This example enables a compiled plan stub to be stored in cache when a batch is compiled for the first time.
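A sketch of the statement, following the syntax described earlier for this option:

```sql
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZE_FOR_AD_HOC_WORKLOADS = ON;
```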
I. Set ELEVATE_ONLINE
Applies to: Azure SQL Database (feature is in public preview)
This example sets ELEVATE_ONLINE to FAIL_UNSUPPORTED.
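A sketch of the statement, following the syntax described earlier for this option:

```sql
ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_ONLINE = FAIL_UNSUPPORTED;
```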
J. Set ELEVATE_RESUMABLE
Applies to: Azure SQL Database (feature is in public preview)
This example sets ELEVATE_RESUMABLE to WHEN_SUPPORTED.
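A sketch of the statement, assuming ELEVATE_RESUMABLE follows the same pattern as ELEVATE_ONLINE (this option is not shown in the syntax block above):

```sql
ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_RESUMABLE = WHEN_SUPPORTED;
```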
Additional Resources
MAXDOP Resources
Degree of Parallelism
Recommendations and guidelines for the "max degree of parallelism" configuration option in SQL Server
LEGACY_CARDINALITY_ESTIMATION Resources
Cardinality Estimation (SQL Server)
Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator
PARAMETER_SNIFFING Resources
Parameter Sniffing
"I smell a parameter!"
QUERY_OPTIMIZER_HOTFIXES Resources
Trace Flags
SQL Server query optimizer hotfix trace flag 4199 servicing model
ELEVATE_ONLINE Resources
Guidelines for Online Index Operations
ELEVATE_RESUMABLE Resources
Guidelines for Online Index Operations
More information
sys.database_scoped_configurations
sys.configurations
Databases and Files Catalog Views
Server Configuration Options
ALTER DATABASE SET Options (Transact-SQL)
5/3/2018 • 50 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This topic contains the ALTER DATABASE syntax that is related to setting database options in SQL Server. For
other ALTER DATABASE syntax, see the following topics.
ALTER DATABASE (Transact-SQL)
ALTER DATABASE (Azure SQL Database)
ALTER DATABASE (Azure SQL Data Warehouse)
ALTER DATABASE (Parallel Data Warehouse)
Database mirroring, Always On availability groups, and compatibility levels are SET options but are described in
separate topics because of their length. For more information, see ALTER DATABASE Database Mirroring
(Transact-SQL), ALTER DATABASE SET HADR (Transact-SQL), and ALTER DATABASE Compatibility Level
(Transact-SQL).
NOTE
Many database set options can be configured for the current session by using SET Statements (Transact-SQL) and are often
configured by applications when they connect. Session level set options override the ALTER DATABASE SET values. The
database options described below are values that can be set for sessions that do not explicitly provide other set option
values.
Syntax
ALTER DATABASE { database_name | CURRENT }
SET
{
<optionspec> [ ,...n ] [ WITH <termination> ]
}
<optionspec> ::=
{
<auto_option>
| <automatic_tuning_option>
| <change_tracking_option>
| <containment_option>
| <cursor_option>
| <database_mirroring_option>
| <date_correlation_optimization_option>
| <db_encryption_option>
| <db_state_option>
| <db_update_option>
| <db_user_access_option>
| <delayed_durability_option>
| <external_access_option>
| FILESTREAM ( <FILESTREAM_option> )
| <HADR_options>
| <mixed_page_allocation_option>
| <parameterization_option>
| <query_store_options>
| <recovery_option>
| <remote_data_archive_option>
| <service_broker_option>
| <snapshot_option>
| <sql_option>
| <target_recovery_time_option>
| <termination>
}
<auto_option> ::=
{
AUTO_CLOSE { ON | OFF }
| AUTO_CREATE_STATISTICS { OFF | ON [ ( INCREMENTAL = { ON | OFF } ) ] }
| AUTO_SHRINK { ON | OFF }
| AUTO_UPDATE_STATISTICS { ON | OFF }
| AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}
<automatic_tuning_option> ::=
{
AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = { ON | OFF } )
}
<change_tracking_option> ::=
{
CHANGE_TRACKING
{
= OFF
| = ON [ ( <change_tracking_option_list> [,...n] ) ]
| ( <change_tracking_option_list> [,...n] )
}
}
<change_tracking_option_list> ::=
{
AUTO_CLEANUP = { ON | OFF }
| CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
}
<containment_option> ::=
CONTAINMENT = { NONE | PARTIAL }
<cursor_option> ::=
{
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
| CURSOR_DEFAULT { LOCAL | GLOBAL }
}
<database_mirroring_option> ::=
ALTER DATABASE Database Mirroring
<date_correlation_optimization_option> ::=
DATE_CORRELATION_OPTIMIZATION { ON | OFF }
<db_encryption_option> ::=
ENCRYPTION { ON | OFF }
<db_state_option> ::=
{ ONLINE | OFFLINE | EMERGENCY }
<db_update_option> ::=
{ READ_ONLY | READ_WRITE }
<db_user_access_option> ::=
{ SINGLE_USER | RESTRICTED_USER | MULTI_USER }
<delayed_durability_option> ::=
DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }
<external_access_option> ::=
{
DB_CHAINING { ON | OFF }
| TRUSTWORTHY { ON | OFF }
| DEFAULT_FULLTEXT_LANGUAGE = { <lcid> | <language name> | <language alias> }
| DEFAULT_LANGUAGE = { <lcid> | <language name> | <language alias> }
| NESTED_TRIGGERS = { OFF | ON }
| TRANSFORM_NOISE_WORDS = { OFF | ON }
| TWO_DIGIT_YEAR_CUTOFF = { 1753, ..., 2049, ..., 9999 }
}
<FILESTREAM_option> ::=
{
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
| DIRECTORY_NAME = <directory_name>
}
<HADR_options> ::=
ALTER DATABASE SET HADR
<mixed_page_allocation_option> ::=
MIXED_PAGE_ALLOCATION { OFF | ON }
<parameterization_option> ::=
PARAMETERIZATION { SIMPLE | FORCED }
<query_store_options> ::=
{
QUERY_STORE
{
= OFF
| = ON [ ( <query_store_option_list> [,...n] ) ]
| ( <query_store_option_list> [,...n] )
| CLEAR [ ALL ]
}
}
<query_store_option_list> ::=
{
OPERATION_MODE = { READ_WRITE | READ_ONLY }
| CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = number )
| DATA_FLUSH_INTERVAL_SECONDS = number
| MAX_STORAGE_SIZE_MB = number
| INTERVAL_LENGTH_MINUTES = number
| SIZE_BASED_CLEANUP_MODE = [ AUTO | OFF ]
| QUERY_CAPTURE_MODE = [ ALL | AUTO | NONE ]
| MAX_PLANS_PER_QUERY = number
| WAIT_STATS_CAPTURE_MODE = [ ON | OFF ]
}
<recovery_option> ::=
{
RECOVERY { FULL | BULK_LOGGED | SIMPLE }
| TORN_PAGE_DETECTION { ON | OFF }
| PAGE_VERIFY { CHECKSUM | TORN_PAGE_DETECTION | NONE }
}
<remote_data_archive_option> ::=
{
REMOTE_DATA_ARCHIVE =
{
ON ( SERVER = <server_name> ,
{ CREDENTIAL = <db_scoped_credential_name>
| FEDERATED_SERVICE_ACCOUNT = ON | OFF
}
)
| OFF
}
}
<service_broker_option> ::=
{
ENABLE_BROKER
| DISABLE_BROKER
| NEW_BROKER
| ERROR_BROKER_CONVERSATIONS
| HONOR_BROKER_PRIORITY { ON | OFF }
}
<snapshot_option> ::=
{
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
| READ_COMMITTED_SNAPSHOT { ON | OFF }
| MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = { ON | OFF }
}
<sql_option> ::=
{
ANSI_NULL_DEFAULT { ON | OFF }
| ANSI_NULLS { ON | OFF }
| ANSI_PADDING { ON | OFF }
| ANSI_WARNINGS { ON | OFF }
| ARITHABORT { ON | OFF }
| COMPATIBILITY_LEVEL = { 90 | 100 | 110 | 120 | 130 | 140 }
| CONCAT_NULL_YIELDS_NULL { ON | OFF }
| NUMERIC_ROUNDABORT { ON | OFF }
| QUOTED_IDENTIFIER { ON | OFF }
| RECURSIVE_TRIGGERS { ON | OFF }
}
<target_recovery_time_option> ::=
TARGET_RECOVERY_TIME = target_recovery_time { SECONDS | MINUTES }
<termination> ::=
{
ROLLBACK AFTER integer [ SECONDS ]
| ROLLBACK IMMEDIATE
| NO_WAIT
}
Arguments
database_name
Is the name of the database to be modified.
CURRENT
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
CURRENT performs the action in the current database. CURRENT is not supported for all options in all contexts. If
CURRENT fails, provide the database name.
<auto_option> ::=
Controls automatic options.
AUTO_CLOSE { ON | OFF }
ON
The database is shut down cleanly and its resources are freed after the last user exits.
The database automatically reopens when a user tries to use the database again. For example, by issuing a
USE database_name statement. If the database is shut down cleanly while AUTO_CLOSE is set to ON, the
database is not reopened until a user tries to use the database the next time the Database Engine is restarted.
OFF
The database remains open after the last user exits.
The AUTO_CLOSE option is useful for desktop databases because it allows for database files to be managed as
regular files. They can be moved, copied to make backups, or even e-mailed to other users. The AUTO_CLOSE
process is asynchronous; repeatedly opening and closing the database does not reduce performance.
NOTE
The AUTO_CLOSE option is not available in a Contained Database or on SQL Database.
The status of this option can be determined by examining the is_auto_close_on column in the sys.databases
catalog view or the IsAutoClose property of the DATABASEPROPERTYEX function.
NOTE
When AUTO_CLOSE is ON, some columns in the sys.databases catalog view and DATABASEPROPERTYEX function will
return NULL because the database is unavailable to retrieve the data. To resolve this, execute a USE statement to open the
database.
NOTE
Database mirroring requires AUTO_CLOSE OFF.
When the database is set to AUTO_CLOSE ON, an operation that initiates an automatic database shutdown
clears the plan cache for the instance of SQL Server. Clearing the plan cache causes a recompilation of all
subsequent execution plans and can cause a sudden, temporary decrease in query performance. In SQL Server
2005 Service Pack 2 and higher, for each cleared cachestore in the plan cache, the SQL Server error log contains
the following informational message: " SQL Server has encountered %d occurrence(s) of cachestore flush for the
'%s' cachestore (part of plan cache) due to some database maintenance or reconfigure operations". This
message is logged every five minutes as long as the cache is flushed within that time interval.
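As a minimal sketch of applying and inspecting this option (the database name MyDatabase is a placeholder):

```sql
-- Keep the database open after the last user exits (typical for server workloads).
ALTER DATABASE MyDatabase SET AUTO_CLOSE OFF;

-- Verify the current setting from the catalog view.
SELECT name, is_auto_close_on
FROM sys.databases
WHERE name = N'MyDatabase';
```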
AUTO_CREATE_STATISTICS { ON | OFF }
ON
The query optimizer creates statistics on single columns in query predicates, as necessary, to improve query
plans and query performance. These single-column statistics are created when the query optimizer compiles
queries. The single-column statistics are created only on columns that are not already the first column of an
existing statistics object.
The default is ON. We recommend that you use the default setting for most databases.
OFF
The query optimizer does not create statistics on single columns in query predicates when it is compiling
queries. Setting this option to OFF can cause suboptimal query plans and degraded query performance.
The status of this option can be determined by examining the is_auto_create_stats_on column in the
sys.databases catalog view or the IsAutoCreateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
INCREMENTAL = ON | OFF
When AUTO_CREATE_STATISTICS is ON and INCREMENTAL is set to ON, automatically created statistics are
created as incremental statistics whenever incremental statistics are supported. The default value is OFF. For more
information, see CREATE STATISTICS (Transact-SQL).
Applies to: SQL Server 2014 (12.x) through SQL Server 2017, SQL Database.
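For example, the following sketch enables automatic statistics creation with the incremental option and checks the result (MyDatabase is a placeholder name):

```sql
-- Enable automatic single-column statistics, created incrementally where supported.
ALTER DATABASE MyDatabase
SET AUTO_CREATE_STATISTICS ON ( INCREMENTAL = ON );

SELECT name, is_auto_create_stats_on, is_auto_create_stats_incremental_on
FROM sys.databases
WHERE name = N'MyDatabase';
```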
AUTO_SHRINK { ON | OFF }
ON
The database files are candidates for periodic shrinking.
Both data file and log files can be automatically shrunk. AUTO_SHRINK reduces the size of the transaction log
only if the database is set to SIMPLE recovery model or if the log is backed up. When set to OFF, the database
files are not automatically shrunk during periodic checks for unused space.
The AUTO_SHRINK option causes files to be shrunk when more than 25 percent of the file contains unused
space. The file is shrunk to a size where 25 percent of the file is unused space, or to the size of the file when it
was created, whichever is larger.
You cannot shrink a read-only database.
OFF
The database files are not automatically shrunk during periodic checks for unused space.
The status of this option can be determined by examining the is_auto_shrink_on column in the sys.databases
catalog view or the IsAutoShrink property of the DATABASEPROPERTYEX function.
NOTE
The AUTO_SHRINK option is not available in a Contained Database.
AUTO_UPDATE_STATISTICS { ON | OFF }
ON
Specifies that the query optimizer updates statistics when they are used by a query and when they might be out-
of-date. Statistics become out-of-date after insert, update, delete, or merge operations change the data
distribution in the table or indexed view. The query optimizer determines when statistics might be out-of-date by
counting the number of data modifications since the last statistics update and comparing the number of
modifications to a threshold. The threshold is based on the number of rows in the table or indexed view.
The query optimizer checks for out-of-date statistics before compiling a query and before executing a cached
query plan. Before compiling a query, the query optimizer uses the columns, tables, and indexed views in the
query predicate to determine which statistics might be out-of-date. Before executing a cached query plan, the
Database Engine verifies that the query plan references up-to-date statistics.
The AUTO_UPDATE_STATISTICS option applies to statistics created for indexes, single-columns in query
predicates, and statistics that are created by using the CREATE STATISTICS statement. This option also applies
to filtered statistics.
The default is ON. We recommend that you use the default setting for most databases.
Use the AUTO_UPDATE_STATISTICS_ASYNC option to specify whether the statistics are updated
synchronously or asynchronously.
OFF
Specifies that the query optimizer does not update statistics when they are used by a query and when they
might be out-of-date. Setting this option to OFF can cause suboptimal query plans and degraded query
performance.
The status of this option can be determined by examining the is_auto_update_stats_on column in the
sys.databases catalog view or the IsAutoUpdateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
ON
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are asynchronous. The query
optimizer does not wait for statistics updates to complete before it compiles queries.
Setting this option to ON has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
By default, the AUTO_UPDATE_STATISTICS_ASYNC option is set to OFF, and the query optimizer updates
statistics synchronously.
OFF
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are synchronous. The query
optimizer waits for statistics updates to complete before it compiles queries.
Setting this option to OFF has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
The status of this option can be determined by examining the is_auto_update_stats_async_on column in the
sys.databases catalog view.
For more information that describes when to use synchronous or asynchronous statistics updates, see the
section "Using the Database-Wide Statistics Options" in Statistics.
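A sketch combining the two options, since the asynchronous setting only has an effect when automatic updates are enabled (MyDatabase is a placeholder name):

```sql
-- Update statistics automatically, and do it asynchronously so queries
-- compile against existing statistics instead of waiting for the update.
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS_ASYNC ON;

SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'MyDatabase';
```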
<automatic_tuning_option> ::=
Applies to: SQL Server 2017 (14.x).
Enables or disables the FORCE_LAST_GOOD_PLAN automatic tuning option.
FORCE_LAST_GOOD_PLAN = { ON | OFF }
ON
The Database Engine automatically forces the last known good plan on Transact-SQL queries where a new
SQL plan causes performance regressions. The Database Engine continuously monitors the query performance of
the Transact-SQL query with the forced plan. If there are performance gains, the Database Engine keeps using the
last known good plan. If performance gains are not detected, the Database Engine produces a new SQL plan.
The statement will fail if Query Store is not enabled or if it is not in read-write mode.
OFF
The Database Engine reports potential query performance regressions caused by SQL plan changes in the
sys.dm_db_tuning_recommendations view. However, these recommendations are not automatically applied. Users
can monitor active recommendations and fix identified problems by applying the Transact-SQL scripts that are
shown in the view. This is the default value.
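A sketch of enabling the option and reviewing the recommendations the engine has produced; this assumes Query Store is already enabled and in read-write mode:

```sql
-- SQL Server 2017: force the last known good plan automatically.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON );

-- Review recommendations that the Database Engine has produced.
SELECT name, reason, score, state
FROM sys.dm_db_tuning_recommendations;
```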
<change_tracking_option> ::=
Applies to: SQL Server and SQL Database.
Controls change tracking options. You can enable change tracking, set options, change options, and disable
change tracking. For examples, see the Examples section later in this topic.
ON
Enables change tracking for the database. When you enable change tracking, you can also set the
AUTO_CLEANUP and CHANGE_RETENTION options.
AUTO_CLEANUP = { ON | OFF }
ON
Change tracking information is automatically removed after the specified retention period.
OFF
Change tracking data is not removed from the database.
CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
Specifies the minimum period for keeping change tracking information in the database. Data is removed only
when the AUTO_CLEANUP value is ON.
retention_period is an integer that specifies the numerical component of the retention period.
The default retention period is 2 days. The minimum retention period is 1 minute. The default retention type is
DAYS.
OFF
Disables change tracking for the database. You must disable change tracking on all tables before you can disable
change tracking for the database.
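A sketch of enabling and later disabling change tracking with explicit retention settings (MyDatabase is a placeholder name):

```sql
-- Enable change tracking with a 2-day retention window and automatic cleanup.
ALTER DATABASE MyDatabase
SET CHANGE_TRACKING = ON ( CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON );

-- Later, disable it again. Change tracking must first be disabled
-- on all tables in the database.
ALTER DATABASE MyDatabase
SET CHANGE_TRACKING = OFF;
```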
<containment_option> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017. Not available in SQL Database.
Controls database containment options.
CONTAINMENT = { NONE | PARTIAL }
NONE
The database is not a contained database.
PARTIAL
The database is a contained database. Setting database containment to partial will fail if the database has
replication, change data capture, or change tracking enabled. Error checking stops after one failure. For more
information about contained databases, see Contained Databases.
NOTE
Containment cannot be configured in SQL Database. Containment is not explicitly designated, but SQL Database can use
contained features such as contained database users.
<cursor_option> ::=
Controls cursor options.
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
ON
Any cursors open when a transaction is committed or rolled back are closed.
OFF
Cursors remain open when a transaction is committed; rolling back a transaction closes any cursors except those
defined as INSENSITIVE or STATIC.
Connection-level settings that are set by using the SET statement override the default database setting for
CURSOR_CLOSE_ON_COMMIT. By default, ODBC and OLE DB clients issue a connection-level SET statement
setting CURSOR_CLOSE_ON_COMMIT to OFF for the session when connecting to an instance of SQL Server.
For more information, see SET CURSOR_CLOSE_ON_COMMIT (Transact-SQL ).
The status of this option can be determined by examining the is_cursor_close_on_commit_on column in the
sys.databases catalog view or the IsCloseCursorsOnCommitEnabled property of the DATABASEPROPERTYEX
function.
CURSOR_DEFAULT { LOCAL | GLOBAL }
Applies to: SQL Server. Not available in SQL Database.
Controls whether cursor scope uses LOCAL or GLOBAL.
LOCAL
When LOCAL is specified and a cursor is not defined as GLOBAL when created, the scope of the cursor is local
to the batch, stored procedure, or trigger in which the cursor was created. The cursor name is valid only within
this scope. The cursor can be referenced by local cursor variables in the batch, stored procedure, or trigger, or a
stored procedure OUTPUT parameter. The cursor is implicitly deallocated when the batch, stored procedure, or
trigger ends, unless it was passed back in an OUTPUT parameter. If the cursor is passed back in an OUTPUT
parameter, the cursor is deallocated when the last variable that references it is deallocated or goes out of scope.
GLOBAL
When GLOBAL is specified, and a cursor is not defined as LOCAL when created, the scope of the cursor is global
to the connection. The cursor name can be referenced in any stored procedure or batch executed by the
connection.
The cursor is implicitly deallocated only at disconnect. For more information, see DECLARE CURSOR
(Transact-SQL).
The status of this option can be determined by examining the is_local_cursor_default column in the sys.databases
catalog view or the IsLocalCursorsDefault property of the DATABASEPROPERTYEX function.
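A sketch of setting both cursor options and inspecting them (MyDatabase is a placeholder name):

```sql
-- Close open cursors on COMMIT, and make new cursors local by default.
ALTER DATABASE MyDatabase SET CURSOR_CLOSE_ON_COMMIT ON;
ALTER DATABASE MyDatabase SET CURSOR_DEFAULT LOCAL;

SELECT name, is_cursor_close_on_commit_on, is_local_cursor_default
FROM sys.databases
WHERE name = N'MyDatabase';
```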
<database_mirroring>
Applies to: SQL Server. Not available in SQL Database.
For the argument descriptions, see ALTER DATABASE Database Mirroring (Transact-SQL ).
<date_correlation_optimization_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls the date_correlation_optimization option.
DATE_CORRELATION_OPTIMIZATION { ON | OFF }
ON
SQL Server maintains correlation statistics between any two tables in the database that are linked by a
FOREIGN KEY constraint and have datetime columns.
OFF
Correlation statistics are not maintained.
To set DATE_CORRELATION_OPTIMIZATION to ON, there must be no active connections to the database
except for the connection that is executing the ALTER DATABASE statement. Afterwards, multiple connections
are supported.
The current setting of this option can be determined by examining the is_date_correlation_on column in the
sys.databases catalog view.
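A sketch, keeping in mind the requirement that no other connections be active while the statement runs (MyDatabase is a placeholder name):

```sql
-- Maintain correlation statistics between datetime columns of
-- FOREIGN KEY-linked tables.
ALTER DATABASE MyDatabase SET DATE_CORRELATION_OPTIMIZATION ON;

SELECT name, is_date_correlation_on
FROM sys.databases
WHERE name = N'MyDatabase';
```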
<db_encryption_option> ::=
Controls the database encryption state.
ENCRYPTION { ON | OFF }
Sets the database to be encrypted (ON ) or not encrypted (OFF ). For more information about database
encryption, see Transparent Data Encryption (TDE ), and Transparent Data Encryption with Azure SQL Database.
When encryption is enabled at the database level all filegroups will be encrypted. Any new filegroups will inherit
the encrypted property. If any filegroups in the database are set to READ ONLY, the database encryption
operation will fail.
You can see the encryption state of the database by using the sys.dm_database_encryption_keys dynamic
management view.
<db_state_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls the state of the database.
OFFLINE
The database is closed, shut down cleanly, and marked offline. The database cannot be modified while it is
offline.
ONLINE
The database is open and available for use.
EMERGENCY
The database is marked READ_ONLY, logging is disabled, and access is limited to members of the sysadmin
fixed server role. EMERGENCY is primarily used for troubleshooting purposes. For example, a database marked
as suspect due to a corrupted log file can be set to the EMERGENCY state. This could enable the system
administrator read-only access to the database. Only members of the sysadmin fixed server role can set a
database to the EMERGENCY state.
NOTE
Permissions: ALTER DATABASE permission for the subject database is required to change a database to the offline or
emergency state. The server level ALTER ANY DATABASE permission is required to move a database from offline to online.
The status of this option can be determined by examining the state and state_desc columns in the sys.databases
catalog view or the Status property of the DATABASEPROPERTYEX function. For more information, see
Database States.
A database marked as RESTORING cannot be set to OFFLINE, ONLINE, or EMERGENCY. A database may be
in the RESTORING state during an active restore operation or when a restore operation of a database or log file
fails because of a corrupted backup file.
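A sketch of moving a database between states; the WITH ROLLBACK IMMEDIATE clause terminates other sessions so the state change is not blocked (MyDatabase is a placeholder name):

```sql
-- Take the database offline, rolling back open transactions immediately.
ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Bring it back online when maintenance is complete.
ALTER DATABASE MyDatabase SET ONLINE;
```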
<db_update_option> ::=
Controls whether updates are allowed on the database.
READ_ONLY
Users can read data from the database but not modify it.
NOTE
To improve query performance, update statistics before setting a database to READ_ONLY. If additional statistics are
needed after a database is set to READ_ONLY, the Database Engine will create statistics in tempdb. For more information
about statistics for a read-only database, see Statistics.
READ_WRITE
The database is available for read and write operations.
To change this state, you must have exclusive access to the database. For more information, see the
SINGLE_USER clause.
NOTE
On SQL Database federated databases, SET { READ_ONLY | READ_WRITE } is disabled.
<db_user_access_option> ::=
Controls user access to the database.
SINGLE_USER
Applies to: SQL Server. Not available in SQL Database.
Specifies that only one user at a time can access the database. If SINGLE_USER is specified and there are other
users connected to the database the ALTER DATABASE statement will be blocked until all users disconnect from
the specified database. To override this behavior, see the WITH <termination> clause.
The database remains in SINGLE_USER mode even if the user that set the option logs off. At that point, a
different user, but only one, can connect to the database.
Before you set the database to SINGLE_USER, verify the AUTO_UPDATE_STATISTICS_ASYNC option is set to
OFF. When set to ON, the background thread used to update statistics takes a connection against the database,
and you will be unable to access the database in single-user mode. To view the status of this option, query the
is_auto_update_stats_async_on column in the sys.databases catalog view. If the option is set to ON, perform the
following tasks:
1. Set AUTO_UPDATE_STATISTICS_ASYNC to OFF.
2. Check for active asynchronous statistics jobs by querying the sys.dm_exec_background_job_queue
dynamic management view.
If there are active jobs, either allow the jobs to complete or manually terminate them by using KILL
STATS JOB.
RESTRICTED_USER
RESTRICTED_USER allows for only members of the db_owner fixed database role and dbcreator and sysadmin
fixed server roles to connect to the database, but does not limit their number. All connections to the database are
disconnected in the timeframe specified by the termination clause of the ALTER DATABASE statement. After the
database has transitioned to the RESTRICTED_USER state, connection attempts by unqualified users are
refused.
MULTI_USER
All users that have the appropriate permissions to connect to the database are allowed.
The status of this option can be determined by examining the user_access column in the sys.databases catalog
view or the UserAccess property of the DATABASEPROPERTYEX function.
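A common maintenance pattern combining the user-access option with a termination clause (MyDatabase is a placeholder name):

```sql
-- Take exclusive access, rolling back other users' transactions immediately,
-- then return the database to normal multi-user access.
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... perform maintenance here ...
ALTER DATABASE MyDatabase SET MULTI_USER;
```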
<delayed_durability_option> ::=
Applies to: SQL Server 2014 (12.x) through SQL Server 2017, SQL Database.
Controls whether transactions commit fully durable or delayed durable.
DISABLED
All transactions following SET DISABLED are fully durable. Any durability options set in an atomic block or
commit statement are ignored.
ALLOWED
All transactions following SET ALLOWED are either fully durable or delayed durable, depending upon the
durability option set in the atomic block or commit statement.
FORCED
All transactions following SET FORCED are delayed durable. Any durability options set in an atomic block or
commit statement are ignored.
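A sketch of the ALLOWED setting, under which each transaction chooses its own durability at commit time:

```sql
-- Let individual transactions opt in to delayed durability
-- (MyDatabase is a placeholder name).
ALTER DATABASE MyDatabase SET DELAYED_DURABILITY = ALLOWED;

-- A transaction can then request a delayed durable commit.
BEGIN TRANSACTION;
-- ... data modifications ...
COMMIT TRANSACTION WITH ( DELAYED_DURABILITY = ON );
```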
<external_access_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls whether the database can be accessed by external resources, such as objects from another database.
DB_CHAINING { ON | OFF }
ON
Database can be the source or target of a cross-database ownership chain.
OFF
Database cannot participate in cross-database ownership chaining.
IMPORTANT
The instance of SQL Server will recognize this setting when the cross db ownership chaining server option is 0 (OFF). When
cross db ownership chaining is 1 (ON), all user databases can participate in cross-database ownership chains, regardless of
the value of this option. This option is set by using sp_configure.
IMPORTANT
This option is allowable only when CONTAINMENT has been set to PARTIAL. If CONTAINMENT is set to NONE, errors will
occur.
DEFAULT_LANGUAGE
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the default language for all newly created logins. Language can be specified by providing the locale id
(lcid), the language name, or the language alias. For a list of acceptable language names and aliases, see
sys.syslanguages (Transact-SQL ). This option is allowable only when CONTAINMENT has been set to PARTIAL.
If CONTAINMENT is set to NONE, errors will occur.
NESTED_TRIGGERS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies whether an AFTER trigger can cascade; that is, perform an action that initiates another trigger, which
initiates another trigger, and so on. This option is allowable only when CONTAINMENT has been set to
PARTIAL. If CONTAINMENT is set to NONE, errors will occur.
TRANSFORM_NOISE_WORDS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Used to suppress an error message if noise words, or stopwords, cause a Boolean operation on a full-text query
to fail. This option is allowable only when CONTAINMENT has been set to PARTIAL. If CONTAINMENT is set to
NONE, errors will occur.
TWO_DIGIT_YEAR_CUTOFF
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies an integer from 1753 to 9999 that represents the cutoff year for interpreting two-digit years as four-
digit years. This option is allowable only when CONTAINMENT has been set to PARTIAL. If CONTAINMENT is
set to NONE, errors will occur.
<FILESTREAM_option> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Controls the settings for FileTables.
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
OFF
Non-transactional access to FileTable data is disabled.
READ_ONLY
FILESTREAM data in FileTables in this database can be read by non-transactional processes.
FULL
Full non-transactional access to FILESTREAM data in FileTables is enabled.
DIRECTORY_NAME = <directory_name>
A windows-compatible directory name. This name should be unique among all the database-level directory
names in the SQL Server instance. Uniqueness comparison is case-insensitive, regardless of collation settings.
This option must be set before creating a FileTable in this database.
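A sketch of enabling non-transactional FileTable access; the directory name shown is a placeholder, and this must run before any FileTable is created in the database:

```sql
-- Allow full non-transactional access to FILESTREAM data in FileTables.
ALTER DATABASE MyDatabase
SET FILESTREAM ( NON_TRANSACTED_ACCESS = FULL,
                 DIRECTORY_NAME = N'MyDatabaseFiles' );
```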
<HADR_options> ::=
Applies to: SQL Server. Not available in SQL Database.
See ALTER DATABASE SET HADR (Transact-SQL ).
<mixed_page_allocation_option> ::=
Applies to: SQL Server (SQL Server 2016 (13.x) through current version). Not available in SQL Database.
MIXED_PAGE_ALLOCATION { OFF | ON } controls whether the database can create initial pages using a mixed
extent for the first eight pages of a table or index.
OFF
The database always creates initial pages using uniform extents. This is the default value.
ON
The database can create initial pages using mixed extents.
This setting is ON for all system databases. tempdb is the only system database that supports OFF.
<PARAMETERIZATION_option> ::=
Controls the parameterization option.
PARAMETERIZATION { SIMPLE | FORCED }
SIMPLE
Queries are parameterized based on the default behavior of the database.
FORCED
SQL Server parameterizes all queries in the database.
The current setting of this option can be determined by examining the is_parameterization_forced column in the
sys.databases catalog view.
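A sketch of forcing parameterization and checking the result (MyDatabase is a placeholder name):

```sql
-- Parameterize all queries in the database.
ALTER DATABASE MyDatabase SET PARAMETERIZATION FORCED;

SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = N'MyDatabase';
```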
<query_store_options> ::=
Applies to: SQL Server (SQL Server 2016 (13.x) through SQL Server 2017), SQL Database.
ON | OFF | CLEAR [ ALL ]
Controls if the query store is enabled in this database, and also controls removing the contents of the query
store.
ON
Enables the query store.
OFF
Disables the query store. This is the default value.
CLEAR
Remove the contents of the query store.
OPERATION_MODE
Describes the operation mode of the query store. Valid values are READ_ONLY and READ_WRITE. In
READ_WRITE mode, the query store collects and persists query plan and runtime execution statistics
information. In READ_ONLY mode, information can be read from the query store, but new information is not
added. If the maximum allocated space of the query store has been exhausted, the query store will change its
operation mode to READ_ONLY.
CLEANUP_POLICY
Describes the data retention policy of the query store. STALE_QUERY_THRESHOLD_DAYS determines the
number of days for which the information for a query is retained in the query store.
STALE_QUERY_THRESHOLD_DAYS is type bigint.
DATA_FLUSH_INTERVAL_SECONDS
Determines the frequency at which data written to the query store is persisted to disk. To optimize for
performance, data collected by the query store is asynchronously written to the disk. The frequency at which this
asynchronous transfer occurs is configured by using the DATA_FLUSH_INTERVAL_SECONDS argument.
DATA_FLUSH_INTERVAL_SECONDS is type bigint.
MAX_STORAGE_SIZE_MB
Determines the space allocated to the query store. MAX_STORAGE_SIZE_MB is type bigint.
INTERVAL_LENGTH_MINUTES
Determines the time interval at which runtime execution statistics data is aggregated into the query store. To
optimize for space usage, the runtime execution statistics in the runtime stats store are aggregated over a fixed
time window. This fixed time window is configured by using the INTERVAL_LENGTH_MINUTES argument.
INTERVAL_LENGTH_MINUTES is type bigint.
SIZE_BASED_CLEANUP_MODE
Controls whether cleanup will be automatically activated when total amount of data gets close to maximum size:
OFF
Size based cleanup won’t be automatically activated.
AUTO
Size based cleanup will be automatically activated when size on disk reaches 90% of max_storage_size_mb.
Size based cleanup removes the least expensive and oldest queries first. It stops at approximately 80% of
max_storage_size_mb. This is the default configuration value.
SIZE_BASED_CLEANUP_MODE is type nvarchar.
QUERY_CAPTURE_MODE
Designates the currently active query capture mode:
ALL All queries are captured. This is the default configuration value for SQL Server 2016 (13.x).
AUTO Capture relevant queries based on execution count and resource consumption. This is the default
configuration value for SQL Database
NONE Stop capturing new queries. Query Store will continue to collect compile and runtime statistics for
queries that were captured already. Use this configuration with caution, since you may fail to capture important
queries.
QUERY_CAPTURE_MODE is type nvarchar.
MAX_PLANS_PER_QUERY
An integer representing the maximum number of plans maintained for each query. Default is 200.
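The options above combine into a single statement; a sketch with explicit retention, sizing, and capture settings (MyDatabase and the values shown are placeholders):

```sql
-- Enable the query store with explicit configuration.
ALTER DATABASE MyDatabase
SET QUERY_STORE = ON
    (
      OPERATION_MODE = READ_WRITE,
      CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 30 ),
      DATA_FLUSH_INTERVAL_SECONDS = 900,
      MAX_STORAGE_SIZE_MB = 1024,
      INTERVAL_LENGTH_MINUTES = 60,
      SIZE_BASED_CLEANUP_MODE = AUTO,
      QUERY_CAPTURE_MODE = AUTO
    );

-- Remove all query store contents without disabling it.
ALTER DATABASE MyDatabase SET QUERY_STORE CLEAR;
```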
<recovery_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls database recovery options and disk I/O error checking.
FULL
Provides full recovery after media failure by using transaction log backups. If a data file is damaged, media
recovery can restore all committed transactions. For more information, see Recovery Models (SQL Server).
BULK_LOGGED
Provides recovery after media failure by combining the best performance and least amount of log-space use for
certain large-scale or bulk operations. For information about what operations can be minimally logged, see The
Transaction Log (SQL Server). Under the BULK_LOGGED recovery model, logging for these operations is
minimal. For more information, see Recovery Models (SQL Server).
SIMPLE
A simple backup strategy that uses minimal log space is provided. Log space can be automatically reused when
it is no longer required for server failure recovery. For more information, see Recovery Models (SQL Server).
IMPORTANT
The simple recovery model is easier to manage than the other two models but at the expense of greater data loss
exposure if a data file is damaged. All changes since the most recent database or differential database backup are lost and
must be manually reentered.
The default recovery model is determined by the recovery model of the model database. For more information
about selecting the appropriate recovery model, see Recovery Models (SQL Server).
The status of this option can be determined by examining the recovery_model and recovery_model_desc
columns in the sys.databases catalog view or the Recovery property of the DATABASEPROPERTYEX function.
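A sketch of setting the recovery model and checking it (MyDatabase is a placeholder name):

```sql
-- Use the full recovery model so the log supports point-in-time restore.
ALTER DATABASE MyDatabase SET RECOVERY FULL;

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'MyDatabase';
```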
TORN_PAGE_DETECTION { ON | OFF }
ON
Incomplete pages can be detected by the Database Engine.
OFF
Incomplete pages cannot be detected by the Database Engine.
IMPORTANT
The syntax structure TORN_PAGE_DETECTION ON | OFF will be removed in a future version of SQL Server. Avoid using this
syntax structure in new development work, and plan to modify applications that currently use the syntax structure. Use
the PAGE_VERIFY option instead.
NOTE
In earlier versions of SQL Server, the PAGE_VERIFY database option is set to NONE for the tempdb database and
cannot be modified. In SQL Server 2008 and later versions, the default value for the tempdb database is
CHECKSUM for new installations of SQL Server. When upgrading an installation of SQL Server, the default value
remains NONE. The option can be modified. We recommend that you use CHECKSUM for the tempdb database.
TORN_PAGE_DETECTION may use fewer resources but provides a minimal subset of the CHECKSUM
protection.
PAGE_VERIFY can be set without taking the database offline, locking the database, or otherwise impeding
concurrency on that database.
CHECKSUM is mutually exclusive to TORN_PAGE_DETECTION. Both options cannot be enabled at the
same time.
When a torn page or checksum failure is detected, you can recover by restoring the data or potentially
rebuilding the index if the failure is limited only to index pages. If you encounter a checksum failure, to
determine the type of database page or pages affected, run DBCC CHECKDB. For more information
about restore options, see RESTORE Arguments (Transact-SQL ). Although restoring the data will resolve
the data corruption problem, the root cause, for example, disk hardware failure, should be diagnosed and
corrected as soon as possible to prevent continuing errors.
SQL Server will retry any read that fails with a checksum, torn page, or other I/O error four times. If the
read is successful in any one of the retry attempts, a message will be written to the error log and the
command that triggered the read will continue. If the retry attempts fail, the command will fail with error
message 824.
For more information about error messages 823, 824 and 825, see How to troubleshoot a Msg 823 error
in SQL Server, How to troubleshoot Msg 824 in SQL Server and How to troubleshoot Msg 825 (read
retry) in SQL Server.
The current setting of this option can be determined by examining the page_verify_option column in the
sys.databases catalog view or the IsTornPageDetectionEnabled property of the DATABASEPROPERTYEX
function.
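Following the recommendation above to prefer PAGE_VERIFY over the deprecated TORN_PAGE_DETECTION syntax, a minimal sketch (again assuming the AdventureWorks2012 sample database) sets CHECKSUM and confirms the result through the page_verify_option_desc column:

```sql
USE master;
GO
-- Replace the deprecated TORN_PAGE_DETECTION setting with CHECKSUM.
ALTER DATABASE AdventureWorks2012
SET PAGE_VERIFY CHECKSUM;
GO
-- Confirm the current page-verification setting.
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
```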
<remote_data_archive_option> ::=
Applies to: SQL Server 2016 (13.x) through SQL Server 2017. Not available in SQL Database.
Enables or disables Stretch Database for the database. For more info, see Stretch Database.
REMOTE_DATA_ARCHIVE = { ON ( SERVER = <server_name> , { CREDENTIAL = <db_scoped_credential_name> | FEDERATED_SERVICE_ACCOUNT = ON | OFF } ) | OFF }
ON
Enables Stretch Database for the database. For more info, including additional prerequisites, see Enable Stretch
Database for a database.
Permissions. Enabling Stretch Database for a database or a table requires db_owner permissions. Enabling
Stretch Database for a database also requires CONTROL DATABASE permissions.
SERVER = <server_name>
Specifies the address of the Azure server. Include the .database.windows.net portion of the name. For example,
MyStretchDatabaseServer.database.windows.net .
CREDENTIAL = <db_scoped_credential_name>
Specifies the database scoped credential that the instance of SQL Server uses to connect to the Azure server.
Make sure the credential exists before you run this command. For more info, see CREATE DATABASE SCOPED
CREDENTIAL (Transact-SQL ).
FEDERATED_SERVICE_ACCOUNT = ON | OFF
You can use a federated service account for the on-premises SQL Server to communicate with the remote Azure
server when the following conditions are all true.
The service account under which the instance of SQL Server is running is a domain account.
The domain account belongs to a domain whose Active Directory is federated with Azure Active
Directory.
The remote Azure server is configured to support Azure Active Directory authentication.
The service account under which the instance of SQL Server is running must be configured as a
dbmanager or sysadmin account on the remote Azure server.
If you specify ON, you can't also specify the CREDENTIAL argument. If you specify OFF, you have to
provide the CREDENTIAL argument.
OFF
Disables Stretch Database for the database. For more info, see Disable Stretch Database and bring back
remote data.
You can only disable Stretch Database for a database after the database no longer contains any tables that
are enabled for Stretch Database. After you disable Stretch Database, data migration stops and query
results no longer include results from remote tables.
Disabling Stretch does not remove the remote database. If you want to delete the remote database, you
have to drop it by using the Azure management portal.
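Putting the SERVER and CREDENTIAL arguments together, a sketch of enabling Stretch Database might look like the following. The credential name MyStretchCredential and the identity and secret values are placeholders, not names from this topic; the server address is the example given above.

```sql
-- Hypothetical credential for connecting to the Azure server; create it first.
CREATE DATABASE SCOPED CREDENTIAL MyStretchCredential
    WITH IDENTITY = 'azure_login', SECRET = '<strong_password_here>';
GO
-- Enable Stretch Database using that credential.
ALTER DATABASE AdventureWorks2012
SET REMOTE_DATA_ARCHIVE = ON
    ( SERVER = N'MyStretchDatabaseServer.database.windows.net' ,
      CREDENTIAL = [MyStretchCredential] );
GO
```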
<service_broker_option> ::=
Applies to: SQL Server. Not available in SQL Database.
Controls the following Service Broker options: enables or disables message delivery, sets a new Service Broker
identifier, or sets conversation priorities to ON or OFF.
ENABLE_BROKER
Specifies that Service Broker is enabled for the specified database. Message delivery is started, and the
is_broker_enabled flag is set to true in the sys.databases catalog view. The database retains the existing Service
Broker identifier. Service Broker cannot be enabled while the database is the principal in a database mirroring
configuration.
NOTE
ENABLE_BROKER requires an exclusive database lock. If other sessions have locked resources in the database,
ENABLE_BROKER will wait until the other sessions release their locks. To enable Service Broker in a user database, ensure
that no other sessions are using the database before you run the ALTER DATABASE SET ENABLE_BROKER statement, such
as by putting the database in single user mode. To enable Service Broker in the msdb database, first stop SQL Server
Agent so that Service Broker can obtain the necessary lock.
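As the note suggests, one way to guarantee the exclusive database lock is to take the database to single-user mode around the change. A sketch against the AdventureWorks2012 sample database:

```sql
USE master;
GO
-- Single-user mode lets ENABLE_BROKER take its exclusive lock immediately.
ALTER DATABASE AdventureWorks2012 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE AdventureWorks2012 SET ENABLE_BROKER;
ALTER DATABASE AdventureWorks2012 SET MULTI_USER;
GO
-- Verify the flag described above.
SELECT name, is_broker_enabled
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
```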
DISABLE_BROKER
Specifies that Service Broker is disabled for the specified database. Message delivery is stopped, and the
is_broker_enabled flag is set to false in the sys.databases catalog view. The database retains the existing Service
Broker identifier.
NEW_BROKER
Specifies that the database should receive a new broker identifier. Because the database is considered to be a
new service broker, all existing conversations in the database are immediately removed without producing end
dialog messages. Any route that references the old Service Broker identifier must be re-created with the new
identifier.
ERROR_BROKER_CONVERSATIONS
Specifies that Service Broker message delivery is enabled. This preserves the existing Service Broker identifier
for the database. Service Broker ends all conversations in the database with an error. This enables applications to
perform regular cleanup for existing conversations.
HONOR_BROKER_PRIORITY {ON | OFF }
ON
Send operations take into consideration the priority levels that are assigned to conversations. Messages from
conversations that have high priority levels are sent before messages from conversations that are assigned low
priority levels.
OFF
Send operations run as if all conversations have the default priority level.
Changes to the HONOR_BROKER_PRIORITY option take effect immediately for new dialogs or dialogs that
have no messages waiting to be sent. Dialogs that have messages waiting to be sent when ALTER DATABASE is
run will not pick up the new setting until some of the messages for the dialog have been sent. The amount of
time before all dialogs start using the new setting can vary considerably.
The current setting of this property is reported in the is_broker_priority_honored column in the sys.databases
catalog view.
<snapshot_option> ::=
Determines the transaction isolation level.
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
ON
Enables Snapshot option at the database level. When it is enabled, DML statements start generating row
versions even when no transaction uses Snapshot Isolation. Once this option is enabled, transactions can specify
the SNAPSHOT transaction isolation level. When a transaction runs at the SNAPSHOT isolation level, all
statements see a snapshot of data as it exists at the start of the transaction. If a transaction running at the
SNAPSHOT isolation level accesses data in multiple databases, either ALLOW_SNAPSHOT_ISOLATION must
be set to ON in all the databases, or each statement in the transaction must use locking hints on any reference in
a FROM clause to a table in a database where ALLOW_SNAPSHOT_ISOLATION is OFF.
OFF
Turns off the Snapshot option at the database level. Transactions cannot specify the SNAPSHOT transaction
isolation level.
When you set ALLOW_SNAPSHOT_ISOLATION to a new state (from ON to OFF, or from OFF to ON), ALTER
DATABASE does not return control to the caller until all existing transactions in the database are committed. If
the database is already in the state specified in the ALTER DATABASE statement, control is returned to the caller
immediately. If the ALTER DATABASE statement does not return quickly, use
sys.dm_tran_active_snapshot_database_transactions to determine whether there are long-running transactions.
If the ALTER DATABASE statement is canceled, the database remains in the state it was in when ALTER
DATABASE was started. The sys.databases catalog view indicates the state of snapshot-isolation transactions in
the database. If snapshot_isolation_state_desc = IN_TRANSITION_TO_ON, ALTER DATABASE
ALLOW_SNAPSHOT_ISOLATION OFF will pause six seconds and retry the operation.
You cannot change the state of ALLOW_SNAPSHOT_ISOLATION if the database is OFFLINE.
If you set ALLOW_SNAPSHOT_ISOLATION in a READ_ONLY database, the setting will be retained if the
database is later set to READ_WRITE.
You can change the ALLOW_SNAPSHOT_ISOLATION settings for the master, model, msdb, and tempdb
databases. If you change the setting for tempdb, the setting is retained every time the instance of the Database
Engine is stopped and restarted. If you change the setting for model, that setting becomes the default for any
new databases that are created, except for tempdb.
The option is ON, by default, for the master and msdb databases.
The current setting of this option can be determined by examining the snapshot_isolation_state column in the
sys.databases catalog view.
READ_COMMITTED_SNAPSHOT { ON | OFF }
ON
Enables Read-Committed Snapshot option at the database level. When it is enabled, DML statements start
generating row versions even when no transaction uses Snapshot Isolation. Once this option is enabled, the
transactions specifying the read committed isolation level use row versioning instead of locking. When a
transaction runs at the read committed isolation level, all statements see a snapshot of data as it exists at the
start of the statement.
OFF
Turns off Read-Committed Snapshot option at the database level. Transactions specifying the READ
COMMITTED isolation level use locking.
To set READ_COMMITTED_SNAPSHOT ON or OFF, there must be no active connections to the database
except for the connection executing the ALTER DATABASE command. However, the database does not have to
be in single-user mode. You cannot change the state of this option when the database is OFFLINE.
If you set READ_COMMITTED_SNAPSHOT in a READ_ONLY database, the setting will be retained when the
database is later set to READ_WRITE.
READ_COMMITTED_SNAPSHOT cannot be turned ON for the master, tempdb, or msdb system databases. If
you change the setting for model, that setting becomes the default for any new databases created, except for
tempdb.
The current setting of this option can be determined by examining the is_read_committed_snapshot_on column
in the sys.databases catalog view.
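Because the statement waits for other connections to close, READ_COMMITTED_SNAPSHOT is commonly enabled with a termination clause. A sketch using the AdventureWorks2012 sample database:

```sql
USE master;
GO
-- ROLLBACK IMMEDIATE disconnects other sessions so the statement
-- does not wait indefinitely for active connections to close.
ALTER DATABASE AdventureWorks2012
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
GO
-- Verify via the column described above.
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
```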
WARNING
When a table is created with DURABILITY = SCHEMA_ONLY, and READ_COMMITTED_SNAPSHOT is subsequently
changed using ALTER DATABASE, data in the table will be lost.
MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT { ON | OFF }
Applies to: SQL Server 2014 (12.x) through SQL Server 2017, SQL Database.
ON
When the transaction isolation level is set to any isolation level lower than SNAPSHOT (for example, READ
COMMITTED or READ UNCOMMITTED ), all interpreted Transact-SQL operations on memory-optimized
tables are performed under SNAPSHOT isolation. This is done regardless of whether the transaction isolation
level is set explicitly at the session level, or the default is used implicitly.
OFF
Does not elevate the transaction isolation level for interpreted Transact-SQL operations on memory-optimized
tables.
You cannot change the state of MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT if the database is OFFLINE.
The option is OFF, by default.
The current setting of this option can be determined by examining the
is_memory_optimized_elevate_to_snapshot_on column in the sys.databases (Transact-SQL ) catalog view.
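A minimal sketch of enabling the option and checking the catalog column named above, again assuming the AdventureWorks2012 sample database:

```sql
-- Elevate interpreted T-SQL on memory-optimized tables to SNAPSHOT isolation.
ALTER DATABASE AdventureWorks2012
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT ON;
GO
SELECT name, is_memory_optimized_elevate_to_snapshot_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
```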
<sql_option> ::=
Controls the ANSI compliance options at the database level.
ANSI_NULL_DEFAULT { ON | OFF }
Determines the default value, NULL or NOT NULL, of a column or CLR user-defined type for which the
nullability is not explicitly defined in CREATE TABLE or ALTER TABLE statements. Columns that are defined with
constraints follow constraint rules regardless of this setting.
ON
The default value is NULL.
OFF
The default value is NOT NULL.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_NULL_DEFAULT. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULL_DEFAULT to ON for the session when connecting to an instance of SQL Server. For more
information, see SET ANSI_NULL_DFLT_ON (Transact-SQL ).
For ANSI compatibility, setting the database option ANSI_NULL_DEFAULT to ON changes the database default
to NULL.
The status of this option can be determined by examining the is_ansi_null_default_on column in the
sys.databases catalog view or the IsAnsiNullDefault property of the DATABASEPROPERTYEX function.
ANSI_NULLS { ON | OFF }
ON
All comparisons to a null value evaluate to UNKNOWN.
OFF
Comparisons of non-UNICODE values to a null value evaluate to TRUE if both values are NULL.
IMPORTANT
In a future version of SQL Server, ANSI_NULLS will always be ON and any applications that explicitly set the option to OFF
will produce an error. Avoid using this feature in new development work, and plan to modify applications that currently use
this feature.
Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_NULLS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULLS to ON for the session when connecting to an instance of SQL Server. For more information, see
SET ANSI_NULLS (Transact-SQL ).
SET ANSI_NULLS also must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
The status of this option can be determined by examining the is_ansi_nulls_on column in the sys.databases
catalog view or the IsAnsiNullsEnabled property of the DATABASEPROPERTYEX function.
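Because connection-level SET statements override the database option, the semantics are easiest to see with the session-level setting. A sketch of the comparison behavior the option controls:

```sql
-- With ANSI_NULLS ON, any comparison to NULL evaluates to UNKNOWN,
-- so equality tests against NULL return no rows; use IS NULL instead.
SET ANSI_NULLS ON;
SELECT 1 WHERE NULL = NULL;   -- no rows
SELECT 1 WHERE NULL IS NULL;  -- one row

-- With ANSI_NULLS OFF, NULL = NULL evaluates to TRUE.
SET ANSI_NULLS OFF;
SELECT 1 WHERE NULL = NULL;   -- one row
```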
ANSI_PADDING { ON | OFF }
ON
Strings are padded to the same length before conversion or inserting to a varchar or nvarchar data type.
Trailing blanks in character values inserted into varchar or nvarchar columns and trailing zeros in binary values
inserted into varbinary columns are not trimmed. Values are not padded to the length of the column.
OFF
Trailing blanks for varchar or nvarchar and zeros for varbinary are trimmed.
When OFF is specified, this setting affects only the definition of new columns.
IMPORTANT
In a future version of SQL Server, ANSI_PADDING will always be ON and any applications that explicitly set the option to
OFF will produce an error. Avoid using this feature in new development work, and plan to modify applications that
currently use this feature. We recommend that you always set ANSI_PADDING to ON. ANSI_PADDING must be ON when
you create or manipulate indexes on computed columns or indexed views.
char(n) and binary(n) columns that allow for nulls are padded to the length of the column when
ANSI_PADDING is set to ON, but trailing blanks and zeros are trimmed when ANSI_PADDING is OFF. char(n)
and binary(n) columns that do not allow nulls are always padded to the length of the column.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_PADDING. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_PADDING to ON for the session when connecting to an instance of SQL Server. For more information,
see SET ANSI_PADDING (Transact-SQL ).
The status of this option can be determined by examining the is_ansi_padding_on column in the sys.databases
catalog view or the IsAnsiPaddingEnabled property of the DATABASEPROPERTYEX function.
ANSI_WARNINGS { ON | OFF }
ON
Errors or warnings are issued when conditions such as divide-by-zero occur or null values appear in aggregate
functions.
OFF
No warnings are raised and null values are returned when conditions such as divide-by-zero occur.
SET ANSI_WARNINGS must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_WARNINGS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_WARNINGS to ON for the session when connecting to an instance of SQL Server. For more information,
see SET ANSI_WARNINGS (Transact-SQL ).
The status of this option can be determined by examining the is_ansi_warnings_on column in the sys.databases
catalog view or the IsAnsiWarningsEnabled property of the DATABASEPROPERTYEX function.
ARITHABORT { ON | OFF }
ON
A query is ended when an overflow or divide-by-zero error occurs during query execution.
OFF
A warning message is displayed when one of these errors occurs, but the query, batch, or transaction continues
to process as if no error occurred.
SET ARITHABORT must be set to ON when you create or make changes to indexes on computed columns or
indexed views.
The status of this option can be determined by examining the is_arithabort_on column in the sys.databases
catalog view or the IsArithmeticAbortEnabled property of the DATABASEPROPERTYEX function.
COMPATIBILITY_LEVEL = { 90 | 100 | 110 | 120 | 130 | 140 }
For more information, see ALTER DATABASE Compatibility Level (Transact-SQL ).
CONCAT_NULL_YIELDS_NULL { ON | OFF }
ON
The result of a concatenation operation is NULL when either operand is NULL. For example, concatenating the
character string "This is" and NULL causes the value NULL, instead of the value "This is".
OFF
The null value is treated as an empty character string.
CONCAT_NULL_YIELDS_NULL must be set to ON when you create or make changes to indexes on computed
columns or indexed views.
IMPORTANT
In a future version of SQL Server, CONCAT_NULL_YIELDS_NULL will always be ON and any applications that explicitly set
the option to OFF will produce an error. Avoid using this feature in new development work, and plan to modify
applications that currently use this feature.
Connection-level settings that are set by using the SET statement override the default database setting for
CONCAT_NULL_YIELDS_NULL. By default, ODBC and OLE DB clients issue a connection-level SET statement
setting CONCAT_NULL_YIELDS_NULL to ON for the session when connecting to an instance of SQL Server.
For more information, see SET CONCAT_NULL_YIELDS_NULL (Transact-SQL ).
The status of this option can be determined by examining the is_concat_null_yields_null_on column in the
sys.databases catalog view or the IsNullConcat property of the DATABASEPROPERTYEX function.
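Since the connection-level SET overrides the database option, the behavior is easiest to demonstrate at the session level, using the "This is" example from above:

```sql
-- ON: NULL propagates through concatenation.
SET CONCAT_NULL_YIELDS_NULL ON;
SELECT 'This is ' + NULL AS result;   -- NULL

-- OFF: NULL is treated as an empty string.
SET CONCAT_NULL_YIELDS_NULL OFF;
SELECT 'This is ' + NULL AS result;   -- 'This is '
```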
QUOTED_IDENTIFIER { ON | OFF }
ON
Double quotation marks can be used to enclose delimited identifiers.
All strings delimited by double quotation marks are interpreted as object identifiers. Quoted identifiers do not
have to follow the Transact-SQL rules for identifiers. They can be keywords and can include characters not
generally allowed in Transact-SQL identifiers. If a single quotation mark (') is part of the literal string, it can be
represented by two single quotation marks ('').
OFF
Identifiers cannot be in quotation marks and must follow all Transact-SQL rules for identifiers. Literals can be
delimited by either single or double quotation marks.
SQL Server also allows for identifiers to be delimited by square brackets ([ ]). Bracketed identifiers can always be
used, regardless of the setting of QUOTED_IDENTIFIER. For more information, see Database Identifiers.
When a table is created, the QUOTED_IDENTIFIER option is always stored as ON in the metadata of the table,
even if the option is set to OFF when the table is created.
Connection-level settings that are set by using the SET statement override the default database setting for
QUOTED_IDENTIFIER. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
QUOTED_IDENTIFIER to ON when connecting to an instance of SQL Server. For more information, see SET
QUOTED_IDENTIFIER (Transact-SQL ).
The status of this option can be determined by examining the is_quoted_identifier_on column in the
sys.databases catalog view or the IsQuotedIdentifiersEnabled property of the DATABASEPROPERTYEX
function.
NUMERIC_ROUNDABORT { ON | OFF }
ON
An error is generated when loss of precision occurs in an expression.
OFF
Losses of precision do not generate error messages and the result is rounded to the precision of the column or
variable storing the result.
NUMERIC_ROUNDABORT must be set to OFF when you create or make changes to indexes on computed
columns or indexed views.
The status of this option can be determined by examining the is_numeric_roundabort_on column in the
sys.databases catalog view or the IsNumericRoundAbortEnabled property of the DATABASEPROPERTYEX
function.
RECURSIVE_TRIGGERS { ON | OFF }
ON
Recursive firing of AFTER triggers is allowed.
OFF
Direct recursive firing of AFTER triggers is not allowed. To also disable indirect recursion of AFTER triggers,
set the nested triggers server option to 0 by using sp_configure.
NOTE
Only direct recursion is prevented when RECURSIVE_TRIGGERS is set to OFF. To disable indirect recursion, you must also
set the nested triggers server option to 0.
The status of this option can be determined by examining the is_recursive_triggers_on column in the
sys.databases catalog view or the IsRecursiveTriggersEnabled property of the DATABASEPROPERTYEX
function.
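Combining the database option with the server-level setting mentioned in the note, a sketch that permits direct recursion and controls indirect recursion (assuming the AdventureWorks2012 sample database):

```sql
-- Allow direct recursive firing of AFTER triggers in the database.
ALTER DATABASE AdventureWorks2012
SET RECURSIVE_TRIGGERS ON;
GO
-- The server-level "nested triggers" option governs indirect recursion;
-- set it to 0 to disable, 1 to enable.
EXEC sp_configure 'nested triggers', 1;
RECONFIGURE;
GO
```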
<target_recovery_time_option> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017. Not available in SQL Database.
Specifies the frequency of indirect checkpoints on a per-database basis. Beginning with SQL Server 2016 (13.x),
the default value for new databases is 1 minute, which indicates that the database will use indirect checkpoints. For older
versions the default is 0, which indicates that the database will use automatic checkpoints, whose frequency
depends on the recovery interval setting of the server instance. Microsoft recommends 1 minute for most
systems.
TARGET_RECOVERY_TIME =target_recovery_time { SECONDS | MINUTES }
target_recovery_time
Specifies the maximum bound on the time to recover the specified database in the event of a crash.
SECONDS
Indicates that target_recovery_time is expressed as the number of seconds.
MINUTES
Indicates that target_recovery_time is expressed as the number of minutes.
For more information about indirect checkpoints, see Database Checkpoints (SQL Server).
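A sketch of setting a 60-second recovery target and confirming it through sys.databases, assuming the AdventureWorks2012 sample database:

```sql
-- Bound crash-recovery time for this database to roughly 60 seconds.
ALTER DATABASE AdventureWorks2012
SET TARGET_RECOVERY_TIME = 60 SECONDS;
GO
SELECT name, target_recovery_time_in_seconds
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
```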
WITH <termination> ::=
Specifies when to roll back incomplete transactions when the database is transitioned from one state to another.
If the termination clause is omitted, the ALTER DATABASE statement waits indefinitely if there is any lock on the
database. Only one termination clause can be specified, and it follows the SET clauses.
NOTE
Not all database options use the WITH <termination> clause. For more information, see the table under "Setting Options"
in the "Remarks" section of this topic.
Setting Options
To retrieve current settings for database options, use the sys.databases catalog view or the DATABASEPROPERTYEX function.
After you set a database option, the modification takes effect immediately.
To change the default values for any one of the database options for all newly created databases, change the
appropriate database option in the model database.
Not all database options use the WITH <termination> clause or can be specified in combination with other
options. The following table lists these options and their option and termination status.
OPTIONS CATEGORY                  CAN BE SPECIFIED WITH OTHER OPTIONS   CAN USE THE WITH <TERMINATION> CLAUSE
<external_access_option>          Yes                                   No
<cursor_option>                   Yes                                   No
<auto_option>                     Yes                                   No
<sql_option>                      Yes                                   No
<recovery_option>                 Yes                                   No
<target_recovery_time_option>     No                                    Yes
<database_mirroring_option>       No                                    No
ALLOW_SNAPSHOT_ISOLATION          No                                    No
READ_COMMITTED_SNAPSHOT           No                                    Yes
<service_broker_option>           Yes                                   No
<db_encryption>                   Yes                                   No
The plan cache for the instance of SQL Server is cleared by setting one of the following options:
OFFLINE
READ_WRITE
READ_ONLY
Examples
A. Setting options on a database
The following example sets the recovery model and data page verification options for the
AdventureWorks2012 sample database.
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL, PAGE_VERIFY CHECKSUM;
GO
NOTE
This example uses the termination option WITH ROLLBACK IMMEDIATE in the first ALTER DATABASE statement. All
incomplete transactions will be rolled back and any other connections to the AdventureWorks2012 database will be
immediately disconnected.
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE AdventureWorks2012
SET READ_ONLY;
GO
ALTER DATABASE AdventureWorks2012
SET MULTI_USER;
GO
USE AdventureWorks2012;
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Check the state of the snapshot_isolation_framework
-- in the database.
SELECT name, snapshot_isolation_state,
snapshot_isolation_state_desc AS description
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
The result set shows that the snapshot isolation framework is enabled.
NAME                 SNAPSHOT_ISOLATION_STATE   DESCRIPTION
AdventureWorks2012   1                          ON
The following example shows how to change the retention period to 3 days.
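The example code did not survive extraction; a sketch consistent with the CHANGE_TRACKING option syntax would be:

```sql
ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING (CHANGE_RETENTION = 3 DAYS);
```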
The following example shows how to disable change tracking for the AdventureWorks2012 database.
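This example code is also missing from the extracted text; a sketch consistent with the CHANGE_TRACKING option syntax would be:

```sql
ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = OFF;
```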
See Also
ALTER DATABASE Compatibility Level (Transact-SQL )
ALTER DATABASE Database Mirroring (Transact-SQL )
ALTER DATABASE SET HADR (Transact-SQL )
Statistics
CREATE DATABASE (SQL Server Transact-SQL )
Enable and Disable Change Tracking (SQL Server)
DATABASEPROPERTYEX (Transact-SQL )
DROP DATABASE (Transact-SQL )
SET TRANSACTION ISOLATION LEVEL (Transact-SQL)
sp_configure (Transact-SQL )
sys.databases (Transact-SQL )
sys.data_spaces (Transact-SQL )
Best Practice with the Query Store
ALTER ENDPOINT (Transact-SQL)
5/4/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Enables modifying an existing endpoint in the following ways:
By adding a new method to an existing endpoint.
By modifying or dropping an existing method from the endpoint.
By changing the properties of an endpoint.
NOTE
This topic describes the syntax and arguments that are specific to ALTER ENDPOINT. For descriptions of the arguments that
are common to both CREATE ENDPOINT and ALTER ENDPOINT, see CREATE ENDPOINT (Transact-SQL).
Native XML Web Services (SOAP/HTTP endpoints) is removed beginning in SQL Server 2012 (11.x).
Transact-SQL Syntax Conventions
Syntax
ALTER ENDPOINT endPointName [ AUTHORIZATION login ]
[ STATE = { STARTED | STOPPED | DISABLED } ]
[ AS { TCP } ( <protocol_specific_items> ) ]
[ FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING } (
<language_specific_items>
) ]
Arguments
NOTE
The following arguments are specific to ALTER ENDPOINT. For descriptions of the remaining arguments, see CREATE
ENDPOINT (Transact-SQL).
AS { TCP }
You cannot change the transport protocol with ALTER ENDPOINT.
AUTHORIZATION login
The AUTHORIZATION option is not available in ALTER ENDPOINT. Ownership can only be assigned when
the endpoint is created.
FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING }
You cannot change the payload type with ALTER ENDPOINT.
Remarks
When you use ALTER ENDPOINT, specify only those parameters that you want to update. All properties of an
existing endpoint remain the same unless you explicitly change them.
The ENDPOINT DDL statements cannot be executed inside a user transaction.
For information on choosing an encryption algorithm for use with an endpoint, see Choose an Encryption
Algorithm.
NOTE
The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128
when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of the AES
algorithms instead. In SQL Server 2012 (11.x) and later versions, material encrypted using RC4 or RC4_128 can be decrypted
in any compatibility level.
RC4 is a relatively weak algorithm, and AES is a relatively strong algorithm. But AES is considerably slower than RC4. If
security is a higher priority for you than speed, we recommend you use AES.
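To tie the arguments together: a sketch of updating an existing endpoint's state and its encryption algorithm, choosing AES as recommended above. The endpoint name Mirroring_Endpoint is a placeholder for an endpoint created earlier with CREATE ENDPOINT.

```sql
-- Restart a hypothetical mirroring endpoint and require AES encryption;
-- the transport protocol and payload type cannot be changed here.
ALTER ENDPOINT Mirroring_Endpoint
    STATE = STARTED
    FOR DATABASE_MIRRORING ( ENCRYPTION = REQUIRED ALGORITHM AES );
```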
Permissions
User must be a member of the sysadmin fixed server role, the owner of the endpoint, or have been granted
ALTER ANY ENDPOINT permission.
To change ownership of an existing endpoint, you must use the ALTER AUTHORIZATION statement. For more
information, see ALTER AUTHORIZATION (Transact-SQL ).
For more information, see GRANT Endpoint Permissions (Transact-SQL ).
See Also
DROP ENDPOINT (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER EVENT SESSION (Transact-SQL)
5/3/2018 • 6 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Starts or stops an event session or changes an event session configuration.
Transact-SQL Syntax Conventions
Syntax
ALTER EVENT SESSION event_session_name
ON SERVER
{
[ [ { <add_drop_event> [ ,...n] }
| { <add_drop_event_target> [ ,...n ] } ]
[ WITH ( <event_session_options> [ ,...n ] ) ]
]
| [ STATE = { START | STOP } ]
}
<add_drop_event>::=
{
[ ADD EVENT <event_specifier>
[ ( {
[ SET { event_customizable_attribute = <value> [ ,...n ] } ]
[ ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n ] } ) ]
[ WHERE <predicate_expression> ]
} ) ]
]
| DROP EVENT <event_specifier> }
<event_specifier> ::=
{
[event_module_guid].event_package_name.event_name
}
<predicate_expression> ::=
{
[ NOT ] <predicate_factor> | {( <predicate_expression> ) }
[ { AND | OR } [ NOT ] { <predicate_factor> | ( <predicate_expression> ) } ]
[ ,...n ]
}
<predicate_factor>::=
{
<predicate_leaf> | ( <predicate_expression> )
}
<predicate_leaf>::=
{
<predicate_source_declaration> { = | <> | != | > | >= | < | <= } <value>
| [event_module_guid].event_package_name.predicate_compare_name ( <predicate_source_declaration>, <value>
)
}
<predicate_source_declaration>::=
{
event_field_name | ( [event_module_guid].event_package_name.predicate_source_name )
}
<value>::=
{
number | 'string'
}
<add_drop_event_target>::=
{
ADD TARGET <event_target_specifier>
[ ( SET { target_parameter_name = <value> [ ,...n] } ) ]
| DROP TARGET <event_target_specifier>
}
<event_target_specifier>::=
{
[event_module_guid].event_package_name.target_name
}
<event_session_options>::=
{
[ MAX_MEMORY = size [ KB | MB] ]
[ [,] EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS | ALLOW_MULTIPLE_EVENT_LOSS | NO_EVENT_LOSS } ]
[ [,] MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE } ]
[ [,] MAX_EVENT_SIZE = size [ KB | MB ] ]
[ [,] MEMORY_PARTITION_MODE = { NONE | PER_NODE | PER_CPU } ]
[ [,] TRACK_CAUSALITY = { ON | OFF } ]
[ [,] STARTUP_STATE = { ON | OFF } ]
}
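The syntax above allows either a session modification or a state change per statement, but not both at once. A sketch that adds a filtered event to an existing session and then starts it; the session name test_session is a placeholder.

```sql
-- Add an event with a predicate to an existing session.
ALTER EVENT SESSION test_session
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    ( WHERE duration > 1000 );
GO
-- Starting the session must be a separate statement.
ALTER EVENT SESSION test_session
ON SERVER
STATE = START;
GO
```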
Arguments
STATE = START | STOP
Starts or stops the event session. This argument is only valid when ALTER EVENT SESSION is applied to an
event session object.
SET { event_customizable_attribute = <value> [ ,...n] }
Specifies customizable attributes for the event. Customizable attributes appear in the
sys.dm_xe_object_columns view as column_type 'customizable' and object_name = event_name.
ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n] } )
Is the action to associate with the event session, where:
- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the action object.
- action_name is the action object.
event_field_name
Is the name of the event field that identifies the predicate source.
EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS | ALLOW_MULTIPLE_EVENT_LOSS | NO_EVENT_LOSS }
Specifies the event retention mode to use for handling event loss.
ALLOW_SINGLE_EVENT_LOSS
An event can be lost from the session. A single event is only
dropped when all the event buffers are full. Losing a single
event when event buffers are full allows for acceptable SQL
Server performance characteristics, while minimizing the loss
of data in the processed event stream.
ALLOW_MULTIPLE_EVENT_LOSS
Full event buffers containing multiple events can be lost from
the session. The number of events lost is dependent upon the
memory size allocated to the session, the partitioning of the
memory, and the size of the events in the buffer. This option
minimizes performance impact on the server when event
buffers are quickly filled, but large numbers of events can be
lost from the session.
NO_EVENT_LOSS
No event loss is allowed. This option ensures that all events
raised will be retained. Using this option forces all tasks that
fire events to wait until space is available in an event buffer.
This may cause detectable performance issues while the event
session is active. User connections may stall while waiting for
events to be flushed from the buffer.
MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE } Specifies the amount of time that events are buffered in
memory before being dispatched to event session targets. The
minimum latency value is 1 second. However, 0 can be used to
specify INFINITE latency. By default, this value is set to 30
seconds.
seconds SECONDS
The time, in seconds, to wait before starting to flush buffers to
targets. seconds is a whole number.
INFINITE
Flush buffers to targets only when the buffers are full, or when
the event session closes.
MEMORY_PARTITION_MODE = { NONE | PER_NODE | Specifies the location where event buffers are created.
PER_CPU }
NONE
A single set of buffers is created within the SQL Server
instance.
Remarks
The ADD and DROP arguments cannot be used in the same statement.
Permissions
Requires the ALTER ANY EVENT SESSION permission.
Examples
The following example starts an event session, obtains some live session statistics, and then adds two events to the
existing session.
-- Start the event session
ALTER EVENT SESSION test_session
ON SERVER
STATE = start;
GO
-- Obtain live session statistics
SELECT * FROM sys.dm_xe_sessions;
SELECT * FROM sys.dm_xe_session_events;
GO
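The example description above also mentions adding two events to the existing session, but that statement is missing from this extraction. A sketch of that step, assuming transaction-related events (the event names are illustrative):

```sql
-- Add two events to the existing session (event names are illustrative)
ALTER EVENT SESSION test_session
ON SERVER
    ADD EVENT sqlserver.database_transaction_begin,
    ADD EVENT sqlserver.database_transaction_end;
GO
```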
See Also
CREATE EVENT SESSION (Transact-SQL )
DROP EVENT SESSION (Transact-SQL )
SQL Server Extended Events Targets
sys.server_event_sessions (Transact-SQL )
sys.dm_xe_objects (Transact-SQL )
sys.dm_xe_object_columns (Transact-SQL )
ALTER EXTERNAL DATA SOURCE (Transact-SQL)
5/4/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies an external data source used to create an external table. The external data source can be Hadoop or Azure
blob storage (WASB ).
Syntax
-- Modify an external data source
-- Applies to: SQL Server (2016 or later)
ALTER EXTERNAL DATA SOURCE data_source_name SET
{
LOCATION = 'server_name_or_IP' [,] |
RESOURCE_MANAGER_LOCATION = <'IP address;Port'> [,] |
CREDENTIAL = credential_name
}
[;]
Arguments
data_source_name Specifies the user-defined name for the data source. The name must be unique.
LOCATION = ‘server_name_or_IP’ Specifies the name of the server or an IP address.
RESOURCE_MANAGER_LOCATION = ‘<IP address;Port>’ Specifies the Hadoop Resource Manager location.
When specified, the query optimizer might choose to pre-process data for a PolyBase query by using Hadoop’s
computation capabilities. This is a cost-based decision. Called predicate pushdown, this can significantly reduce the
volume of data transferred between Hadoop and SQL, and therefore improve query performance.
CREDENTIAL = Credential_Name Specifies the named credential. See CREATE DATABASE SCOPED
CREDENTIAL (Transact-SQL ).
TYPE = BLOB_STORAGE
Applies to: SQL Server 2017 (14.x). For bulk operations only, LOCATION must be a valid URL to Azure Blob
storage. Do not put /, a file name, or shared access signature parameters at the end of the LOCATION URL. The
credential used must be created using SHARED ACCESS SIGNATURE as the identity. For more information on
shared access signatures, see Using Shared Access Signatures (SAS).
Remarks
Only a single source can be modified at a time. Concurrent requests to modify the same source cause one statement
to wait. However, different sources can be modified at the same time. This statement can run concurrently with
other statements.
Permissions
Requires ALTER ANY EXTERNAL DATA SOURCE permission.
IMPORTANT
The ALTER ANY EXTERNAL DATA SOURCE permission grants any principal the ability to create and modify any external data
source object, and therefore, it also grants the ability to access all database scoped credentials on the database. This
permission must be considered as highly privileged, and therefore must be granted only to trusted principals in the system.
Examples
The following example alters the location and resource manager location of an existing data source.
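The example statement itself is missing from this extraction; a minimal sketch, assuming a Hadoop data source (the data source name, IP address, and ports are illustrative):

```sql
-- Data source name, IP address, and ports are illustrative
ALTER EXTERNAL DATA SOURCE hadoop1 SET
    LOCATION = 'hdfs://10.10.10.10:8020',
    RESOURCE_MANAGER_LOCATION = '10.10.10.10:8032';
```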
The following example alters the credential to connect to an existing data source.
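The statement for this example is likewise missing from the extraction; a sketch (the data source and credential names are illustrative):

```sql
-- Data source and credential names are illustrative
ALTER EXTERNAL DATA SOURCE hadoop1 SET
    CREDENTIAL = new_hadoop_user;
```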
ALTER EXTERNAL LIBRARY (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the content of an existing external package library.
Syntax
ALTER EXTERNAL LIBRARY library_name
[ AUTHORIZATION owner_name ]
SET <file_spec>
WITH ( LANGUAGE = 'R' )
[ ; ]
<file_spec> ::=
{
(CONTENT = { <client_library_specifier> | <library_bits> | NONE}
[, PLATFORM = WINDOWS ] )
}
<client_library_specifier> :: =
'[\\computer_name\]share_name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'
| '<relative_path_in_external_data_source>'
<library_bits> :: =
{ varbinary_literal | varbinary_expression }
Arguments
library_name
Specifies the name of an existing package library. Libraries are scoped to the user. Library names must be
unique within the context of a specific user or owner.
The library name cannot be arbitrarily assigned. That is, you must use the name that the calling runtime expects
when it loads the package.
owner_name
Specifies the name of the user or role that owns the external library.
file_spec
Specifies the content of the package for a specific platform. Only one file artifact per platform is supported.
The file can be specified in the form of a local path or network path. If the data source option is specified, the file
name can be a relative path with respect to the container referenced in the EXTERNAL DATA SOURCE .
Optionally, an OS platform for the file can be specified. Only one file artifact or content is permitted for each OS
platform for a specific language or runtime.
library_bits
Specifies the content of the package as a hex literal, similar to assemblies.
This option is useful if you have the required permission to alter a library, but file access on the server is restricted
and you cannot save the contents to a path the server can access.
Instead, you can pass the package contents as a variable in binary format.
PLATFORM = WINDOWS
Specifies the platform for the content of the library. This value is required when modifying an existing library to
add a different platform. Windows is the only supported platform.
Remarks
For the R language, packages must be prepared in the form of zipped archive files with the .ZIP extension for
Windows. Currently, only the Windows platform is supported.
The ALTER EXTERNAL LIBRARY statement only uploads the library bits to the database. The modified library is
installed when a user runs code in sp_execute_external_script (Transact-SQL ) that calls the library.
Permissions
By default, the dbo user or any member of the role db_owner has permission to run ALTER EXTERNAL
LIBRARY. Additionally, the user who created the external library can alter that external library.
Examples
The following examples change an external library called customPackage .
A. Replace the contents of a library using a file
The following example modifies an external library called customPackage , using a zipped file containing the
updated bits.
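The ALTER EXTERNAL LIBRARY statement itself is missing from this extraction; a sketch, assuming a local path to the zipped package (the path is illustrative):

```sql
-- Path is illustrative
ALTER EXTERNAL LIBRARY customPackage
SET (CONTENT = 'C:\temp\customPackage.zip')
WITH (LANGUAGE = 'R');
GO
```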
EXEC sp_execute_external_script
@language =N'R',
@script=N'library(customPackage)'
;
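The NOTE below refers to a variant that passes the package contents inline as a hexadecimal literal rather than a file path; a sketch (the binary value is truncated and illustrative, so this statement as shown would not run as-is):

```sql
-- Binary value truncated for readability; does not create a working library
ALTER EXTERNAL LIBRARY customPackage
SET (CONTENT = 0x504B030414000600080000002100...)
WITH (LANGUAGE = 'R');
GO
```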
NOTE
This code sample only demonstrates the syntax; the binary value in CONTENT = has been truncated for readability and does
not create a working library. The actual contents of the binary variable would be much longer.
See also
CREATE EXTERNAL LIBRARY (Transact-SQL ) DROP EXTERNAL LIBRARY (Transact-SQL )
sys.external_library_files
sys.external_libraries
ALTER EXTERNAL RESOURCE POOL (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Applies to: SQL Server 2016 (13.x) R Services (In-Database) and SQL Server 2017 (14.x) Machine Learning
Services (In-Database)
Changes a Resource Governor external pool that specifies resources that can be used by external processes.
For R Services (In-Database) in SQL Server 2016 (13.x), the external pool governs rterm.exe ,
BxlServer.exe , and other processes spawned by them.
For Machine Learning Services (In-Database) in SQL Server 2017, the external pool governs the R
processes listed for the previous version, as well as python.exe , BxlServer.exe , and other processes
spawned by them.
Transact-SQL Syntax Conventions.
Syntax
ALTER EXTERNAL RESOURCE POOL { pool_name | "default" }
[ WITH (
[ MAX_CPU_PERCENT = value ]
[ [ , ] AFFINITY CPU =
{
AUTO
| ( <cpu_range_spec> )
| NUMANODE = ( <NUMA_node_id> )
} ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MAX_PROCESSES = value ]
)
]
[ ; ]
<CPU_range_spec> ::=
{ CPU_ID | CPU_ID TO CPU_ID } [ ,...n ]
Arguments
{ pool_name | "default" }
Is the name of an existing user-defined external resource pool or the default external resource pool that is created
when SQL Server is installed. "default" must be enclosed by quotation marks ("") or brackets ([]) when used with
ALTER EXTERNAL RESOURCE POOL to avoid conflict with DEFAULT , which is a system reserved word.
MAX_CPU_PERCENT =value
Specifies the maximum average CPU bandwidth that all requests in the external resource pool can receive when
there is CPU contention. value is an integer with a default setting of 100. The allowed range for value is from 1
through 100.
AFFINITY {CPU = AUTO | ( <CPU_range_spec> ) | NUMANODE = (<NUMA_node_range_spec>)}
Attach the external resource pool to specific CPUs. The default value is AUTO.
AFFINITY CPU = ( <CPU_range_spec> ) maps the external resource pool to the SQL Server CPUs identified by
the given CPU_IDs. When you use AFFINITY NUMANODE = ( <NUMA_node_range_spec> ), the external
resource pool is affinitized to the SQL Server physical CPUs that correspond to the given NUMA node or range of
nodes.
MAX_MEMORY_PERCENT =value
Specifies the total server memory that can be used by requests in this external resource pool. value is an integer
with a default setting of 100. The allowed range for value is from 1 through 100.
MAX_PROCESSES =value
Specifies the maximum number of processes allowed for the external resource pool. Specify 0 to set an unlimited
threshold for the pool, which is thereafter bound only by computer resources. The default is 0.
Remarks
The Database Engine implements the resource pool when you execute the ALTER RESOURCE GOVERNOR
RECONFIGURE statement.
For general information about resource pools, see Resource Governor Resource Pool,
sys.resource_governor_external_resource_pools (Transact-SQL ), and
sys.dm_resource_governor_external_resource_pool_affinity (Transact-SQL ).
For information specific to the use of external resource pools to govern machine learning jobs, see Resource
governance for machine learning in SQL Server.
Permissions
Requires CONTROL SERVER permission.
Examples
The following statement changes an external pool, restricting the CPU usage to 50 percent and the maximum
memory to 25 percent of the available memory on the computer.
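The example statement is missing from this extraction; a sketch (the pool name is illustrative; ALTER RESOURCE GOVERNOR RECONFIGURE is included because, per the Remarks above, the change only takes effect after reconfiguration):

```sql
-- Pool name is illustrative
ALTER EXTERNAL RESOURCE POOL ep_training
WITH (
    MAX_CPU_PERCENT = 50,
    MAX_MEMORY_PERCENT = 25
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```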
See also
Resource governance for machine learning in SQL Server
external scripts enabled Server Configuration Option
CREATE EXTERNAL RESOURCE POOL (Transact-SQL )
DROP EXTERNAL RESOURCE POOL (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL )
CREATE WORKLOAD GROUP (Transact-SQL )
Resource Governor Resource Pool
ALTER RESOURCE GOVERNOR (Transact-SQL )
ALTER FULLTEXT CATALOG (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a full-text catalog.
Transact-SQL Syntax Conventions
Syntax
ALTER FULLTEXT CATALOG catalog_name
{ REBUILD [ WITH ACCENT_SENSITIVITY = { ON | OFF } ]
| REORGANIZE
| AS DEFAULT
}
Arguments
catalog_name
Specifies the name of the catalog to be modified. If a catalog with the specified name does not exist, Microsoft SQL
Server returns an error and does not perform the ALTER operation.
REBUILD
Tells SQL Server to rebuild the entire catalog. When a catalog is rebuilt, the existing catalog is deleted and a new
catalog is created in its place. All the tables that have full-text indexing references are associated with the new
catalog. Rebuilding resets the full-text metadata in the database system tables.
WITH ACCENT_SENSITIVITY = {ON|OFF }
Specifies if the catalog to be altered is accent-sensitive or accent-insensitive for full-text indexing and querying.
To determine the current accent-sensitivity property setting of a full-text catalog, use the
FULLTEXTCATALOGPROPERTY function with the accentsensitivity property value against catalog_name. If the
function returns '1', the full-text catalog is accent sensitive; if the function returns '0', the catalog is not accent
sensitive.
The catalog and database default accent sensitivity are the same.
REORGANIZE
Tells SQL Server to perform a master merge, which involves merging the smaller indexes created in the process of
indexing into one large index. Merging the full-text index fragments can improve performance and free up disk and
memory resources. If there are frequent changes to the full-text catalog, use this command periodically to
reorganize the full-text catalog.
REORGANIZE also optimizes internal index and catalog structures.
Keep in mind that, depending on the amount of indexed data, a master merge may take some time to complete.
Master merging a large amount of data can create a long running transaction, delaying truncation of the
transaction log during checkpoint. In this case, the transaction log might grow significantly under the full recovery
model. As a best practice, ensure that your transaction log contains sufficient space for a long-running transaction
before reorganizing a large full-text index in a database that uses the full recovery model. For more information,
see Manage the Size of the Transaction Log File.
AS DEFAULT
Specifies that this catalog is the default catalog. When full-text indexes are created with no specified catalogs, the
default catalog is used. If there is an existing default full-text catalog, setting this catalog AS DEFAULT will override
the existing default.
Permissions
User must have ALTER permission on the full-text catalog, or be a member of the db_owner, db_ddladmin fixed
database roles, or sysadmin fixed server role.
NOTE
To use ALTER FULLTEXT CATALOG AS DEFAULT, the user must have ALTER permission on the full-text catalog and CREATE
FULLTEXT CATALOG permission on the database.
Examples
The following example changes the accentsensitivity property of the default full-text catalog ftCatalog , which is
accent sensitive.
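The example code is missing from this extraction; a sketch that rebuilds the catalog as accent-insensitive and then checks the property with the FULLTEXTCATALOGPROPERTY function described above (the catalog name comes from the prose):

```sql
-- Rebuild the default catalog as accent-insensitive
ALTER FULLTEXT CATALOG ftCatalog
REBUILD WITH ACCENT_SENSITIVITY = OFF;
GO
-- Verify: returns 0 when the catalog is not accent sensitive
SELECT FULLTEXTCATALOGPROPERTY('ftCatalog', 'accentsensitivity');
GO
```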
See Also
sys.fulltext_catalogs (Transact-SQL )
CREATE FULLTEXT CATALOG (Transact-SQL )
DROP FULLTEXT CATALOG (Transact-SQL )
Full-Text Search
ALTER FULLTEXT INDEX (Transact-SQL)
5/3/2018 • 13 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a full-text index in SQL Server.
Transact-SQL Syntax Conventions
Syntax
ALTER FULLTEXT INDEX ON table_name
{ ENABLE
| DISABLE
| SET CHANGE_TRACKING [ = ] { MANUAL | AUTO | OFF }
| ADD ( column_name
[ TYPE COLUMN type_column_name ]
[ LANGUAGE language_term ]
[ STATISTICAL_SEMANTICS ]
[,...n]
)
[ WITH NO POPULATION ]
| ALTER COLUMN column_name
{ ADD | DROP } STATISTICAL_SEMANTICS
[ WITH NO POPULATION ]
| DROP ( column_name [,...n] )
[ WITH NO POPULATION ]
| START { FULL | INCREMENTAL | UPDATE } POPULATION
| {STOP | PAUSE | RESUME } POPULATION
| SET STOPLIST [ = ] { OFF| SYSTEM | stoplist_name }
[ WITH NO POPULATION ]
| SET SEARCH PROPERTY LIST [ = ] { OFF | property_list_name }
[ WITH NO POPULATION ]
}
[;]
Arguments
table_name
Is the name of the table or indexed view that contains the column or columns included in the full-text index.
Specifying database and table owner names is optional.
ENABLE | DISABLE
Tells SQL Server whether to gather full-text index data for table_name. ENABLE activates the full-text index;
DISABLE turns off the full-text index. The table will not support full-text queries while the index is disabled.
Disabling a full-text index allows you to turn off change tracking but keep the full-text index, which you can
reactivate at any time using ENABLE. When the full-text index is disabled, the full-text index metadata remains in
the system tables. If CHANGE_TRACKING is in the enabled state (automatic or manual update) when the full-text
index is disabled, the state of the index freezes, any ongoing crawl stops, and new changes to the table data are
not tracked or propagated to the index.
SET CHANGE_TRACKING {MANUAL | AUTO | OFF }
Specifies whether changes (updates, deletes, or inserts) made to table columns that are covered by the full-text
index will be propagated by SQL Server to the full-text index. Data changes through WRITETEXT and
UPDATETEXT are not reflected in the full-text index, and are not picked up with change tracking.
NOTE
For information about the interaction of change tracking and WITH NO POPULATION, see "Remarks," later in this topic.
MANUAL
Specifies that the tracked changes will be propagated manually by calling the ALTER FULLTEXT INDEX … START
UPDATE POPULATION Transact-SQL statement (manual population). You can use SQL Server Agent to call this
Transact-SQL statement periodically.
AUTO
Specifies that the tracked changes will be propagated automatically as data is modified in the base table
(automatic population). Although changes are propagated automatically, these changes might not be reflected
immediately in the full-text index. AUTO is the default.
OFF
Specifies that SQL Server will not keep a list of changes to the indexed data.
ADD | DROP column_name
Specifies the columns to be added or deleted from a full-text index. The column or columns must be of type char,
varchar, nchar, nvarchar, text, ntext, image, xml, varbinary, or varbinary(max).
Use the DROP clause only on columns that have been enabled previously for full-text indexing.
Use TYPE COLUMN and LANGUAGE with the ADD clause to set these properties on the column_name. When a
column is added, the full-text index on the table must be repopulated in order for full-text queries against this
column to work.
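A sketch of an ADD operation using these clauses (the table, column, and type-column names are illustrative):

```sql
-- Table, column, and type-column names are illustrative
ALTER FULLTEXT INDEX ON Production.Document
    ADD (Document TYPE COLUMN FileExtension LANGUAGE 1033);
GO
```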
NOTE
Whether the full-text index is populated after a column is added or dropped from a full-text index depends on whether
change-tracking is enabled and whether WITH NO POPULATION is specified. For more information, see "Remarks," later in
this topic.
NOTE
At indexing time, the Full-Text Engine uses the abbreviation in the type column of each table row to identify which full-text
search filter to use for the document in column_name. The filter loads the document as a binary stream, removes the
formatting information, and sends the text from the document to the word-breaker component. For more information, see
Configure and Manage Filters for Search.
LANGUAGE language_term
Is the language of the data stored in column_name.
language_term is optional and can be specified as a string, integer, or hexadecimal value corresponding to the
locale identifier (LCID ) of a language. If language_term is specified, the language it represents will be applied to
all elements of the search condition. If no value is specified, the default full-text language of the SQL Server
instance is used.
Use the sp_configure stored procedure to access information about the default full-text language of the SQL
Server instance.
When specified as a string, language_term corresponds to the alias column value in the syslanguages system
table. The string must be enclosed in single quotation marks, as in 'language_term'. When specified as an integer,
language_term is the actual LCID that identifies the language. When specified as a hexadecimal value,
language_term is 0x followed by the hex value of the LCID. The hex value must not exceed eight digits, including
leading zeros.
If the value is in double-byte character set (DBCS ) format, SQL Server will convert it to Unicode.
Resources, such as word breakers and stemmers, must be enabled for the language specified as language_term. If
such resources do not support the specified language, SQL Server returns an error.
For non-BLOB and non-XML columns containing text data in multiple languages, or for cases when the language
of the text stored in the column is unknown, use the neutral (0x0) language resource. For documents stored in
XML - or BLOB -type columns, the language encoding within the document will be used at indexing time. For
example, in XML columns, the xml:lang attribute in XML documents will identify the language. At query time, the
value previously specified in language_term becomes the default language used for full-text queries unless
language_term is specified as part of a full-text query.
STATISTICAL_SEMANTICS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Creates the additional key phrase and document similarity indexes that are part of statistical semantic indexing.
For more information, see Semantic Search (SQL Server).
[ ,...n]
Indicates that multiple columns may be specified for the ADD, ALTER, or DROP clauses. When multiple columns
are specified, separate these columns with commas.
WITH NO POPULATION
Specifies that the full-text index will not be populated after an ADD or DROP column operation or a SET
STOPLIST operation. The index will only be populated if the user executes a START...POPULATION command.
When NO POPULATION is specified, SQL Server does not populate an index. The index is populated only after
the user gives an ALTER FULLTEXT INDEX...START POPULATION command. When NO POPULATION is not
specified, SQL Server populates the index.
If CHANGE_TRACKING is enabled and WITH NO POPULATION is specified, SQL Server returns an error. If
CHANGE_TRACKING is enabled and WITH NO POPULATION is not specified, SQL Server performs a full
population on the index.
NOTE
For more information about the interaction of change tracking and WITH NO POPULATION, see "Remarks," later in this
topic.
IMPORTANT
If the full-text index was previously associated with a different search property list, the index must be rebuilt in
order to bring the index into a consistent state. The index is truncated immediately and is empty until the full
population runs. For more information about when changing the search property list causes rebuilding, see
"Remarks," later in this topic.
NOTE
You can associate a given search property list with more than one full-text index in the same database.
For more information about populating full-text indexes, see Populate Full-Text Indexes.
NOTE
For more information about how full-text search works with search property lists, see Search Document Properties with
Search Property Lists. For information about full populations, see Populate Full-Text Indexes.
3. The full-text index is later associated with a different search property list, spl_2 , using the following statement:
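The statement itself is missing from this extraction; a sketch using the names from the scenario:

```sql
ALTER FULLTEXT INDEX ON table_1 SET SEARCH PROPERTY LIST spl_2;
```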
This statement causes a full population, the default behavior. However, before beginning this population,
the Full-Text Engine automatically truncates the index.
Scenario B: Turning Off the Search Property List and Later Associating the Index with Any Search Property List
1. A full-text index is created on table_1 with a search property list spl_1 , followed by an automatic full
population (the default behavior):
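The creation statement is missing from this extraction; a sketch, assuming a single indexed column and a unique key index (the column and key-index names are illustrative):

```sql
-- Column and key-index names are illustrative
CREATE FULLTEXT INDEX ON table_1 (column_1)
    KEY INDEX unique_key_index
    WITH SEARCH PROPERTY LIST = spl_1;
```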
3. The full-text index is once more associated with either the same search property list or a different one.
For example the following statement re-associates the full-text index with the original search property list,
spl_1 :
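The statement is missing from this extraction; a sketch using the names from the scenario:

```sql
ALTER FULLTEXT INDEX ON table_1 SET SEARCH PROPERTY LIST spl_1;
```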
Permissions
The user must have ALTER permission on the table or indexed view, or be a member of the sysadmin fixed
server role, or the db_ddladmin or db_owner fixed database roles.
If SET STOPLIST is specified, the user must have REFERENCES permission on the stoplist. If SET SEARCH
PROPERTY LIST is specified, the user must have REFERENCES permission on the search property list. The
owner of the specified stoplist or search property list can grant REFERENCES permission, if the owner has
ALTER FULLTEXT CATALOG permissions.
NOTE
The public is granted REFERENCES permission to the default stoplist that is shipped with SQL Server.
Examples
A. Setting manual change tracking
The following example sets manual change tracking on the full-text index on the JobCandidate table.
USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
SET CHANGE_TRACKING MANUAL;
GO
NOTE
For an example that creates the DocumentPropertyList property list, see CREATE SEARCH PROPERTY LIST (Transact-SQL).
USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON Production.Document
SET SEARCH PROPERTY LIST DocumentPropertyList;
GO
USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON Production.Document
SET SEARCH PROPERTY LIST OFF WITH NO POPULATION;
GO
USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
START FULL POPULATION;
GO
See Also
sys.fulltext_indexes (Transact-SQL )
CREATE FULLTEXT INDEX (Transact-SQL )
DROP FULLTEXT INDEX (Transact-SQL )
Full-Text Search
Populate Full-Text Indexes
ALTER FULLTEXT STOPLIST (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Inserts or deletes a stop word in the default full-text stoplist of the current database.
Transact-SQL Syntax Conventions
Syntax
ALTER FULLTEXT STOPLIST stoplist_name
{
ADD [N] 'stopword' LANGUAGE language_term
| DROP
{
'stopword' LANGUAGE language_term
| ALL LANGUAGE language_term
| ALL
}
}
;
Arguments
stoplist_name
Is the name of the stoplist being altered. stoplist_name can be a maximum of 128 characters.
' stopword '
Is a string that could be a word with linguistic meaning in the specified language or a token that does not have a
linguistic meaning. stopword is limited to the maximum token length (64 characters). A stopword can be specified
as a Unicode string.
LANGUAGE language_term
Specifies the language to associate with the stopword being added or dropped.
language_term can be specified as a string, integer, or hexadecimal value corresponding to the locale identifier
(LCID ) of the language, as follows:
FORMAT — DESCRIPTION
String — language_term corresponds to the alias column value in the syslanguages system table. The string must be enclosed in single quotation marks.
Integer — language_term is the actual LCID that identifies the language.
Hexadecimal — language_term is 0x followed by the hex value of the LCID. The hex value must not exceed eight digits, including leading zeros.
Remarks
CREATE FULLTEXT STOPLIST is supported only for compatibility level 100 and higher. For compatibility levels 80
and 90, the system stoplist is always assigned to the database.
Permissions
To designate a stoplist as the default stoplist of the database requires ALTER DATABASE permission. To otherwise
alter a stoplist requires being the stoplist owner or membership in the db_owner or db_ddladmin fixed database
roles.
Examples
The following example alters a stoplist named CombinedFunctionWordList , adding the word 'en', first for Spanish
and then for French.
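The example code is missing from this extraction; a sketch using the names from the prose:

```sql
ALTER FULLTEXT STOPLIST CombinedFunctionWordList ADD 'en' LANGUAGE 'Spanish';
ALTER FULLTEXT STOPLIST CombinedFunctionWordList ADD 'en' LANGUAGE 'French';
```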
See Also
CREATE FULLTEXT STOPLIST (Transact-SQL )
DROP FULLTEXT STOPLIST (Transact-SQL )
Configure and Manage Stopwords and Stoplists for Full-Text Search
sys.fulltext_stoplists (Transact-SQL )
sys.fulltext_stopwords (Transact-SQL )
ALTER FUNCTION (Transact-SQL)
5/3/2018 • 14 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters an existing Transact-SQL or CLR function that was previously created by executing the CREATE FUNCTION
statement, without changing permissions and without affecting any dependent functions, stored procedures, or
triggers.
Transact-SQL Syntax Conventions
Syntax
-- Transact-SQL Scalar Function Syntax
ALTER FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
]
)
RETURNS return_data_type
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN scalar_expression
END
[ ; ]
<table_type_definition>:: =
( { <column_definition> <column_constraint>
| <computed_column_definition> }
[ <table_constraint> ] [ ,...n ]
)
<column_definition>::=
{
{ column_name data_type }
[ [ DEFAULT constant_expression ]
[ COLLATE collation_name ] | [ ROWGUIDCOL ]
]
| [ IDENTITY [ (seed , increment ) ] ]
[ <column_constraint> [ ...n ] ]
}
<column_constraint>::=
{
[ NULL | NOT NULL ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[ WITH FILLFACTOR = fillfactor
| WITH ( < index_option > [ , ...n ] )
[ ON { filegroup | "default" } ]
| [ CHECK ( logical_expression ) ] [ ,...n ]
}
<computed_column_definition>::=
column_name AS computed_column_expression
<table_constraint>::=
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
( column_name [ ASC | DESC ] [ ,...n ] )
[ WITH FILLFACTOR = fillfactor
| WITH ( <index_option> [ , ...n ] )
| [ CHECK ( logical_expression ) ] [ ,...n ]
}
<index_option>::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS ={ ON | OFF }
}
-- CLR Scalar and Table-Valued Function Syntax
ALTER FUNCTION [ schema_name. ] function_name
( { @parameter_name [AS] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
)
RETURNS { return_data_type | TABLE <clr_table_type_definition> }
[ WITH <clr_function_option> [ ,...n ] ]
[ AS ] EXTERNAL NAME <method_specifier>
[ ; ]
<clr_function_option>::=
{
[ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
| [ EXECUTE_AS_Clause ]
}
<clr_table_type_definition>::=
( { column_name data_type } [ ,...n ] )
<function_option>::=
{ NATIVE_COMPILATION
| SCHEMABINDING
| [ EXECUTE_AS_Clause ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
}
Arguments
schema_name
Is the name of the schema to which the user-defined function belongs.
function_name
Is the user-defined function to be changed.
NOTE
Parentheses are required after the function name even if a parameter is not specified.
@parameter_name
Is a parameter in the user-defined function. One or more parameters can be declared.
A function can have a maximum of 2,100 parameters. The value of each declared parameter must be supplied by
the user when the function is executed, unless a default for the parameter is defined.
Specify a parameter name by using an at sign (@) as the first character. The parameter name must comply with the
rules for identifiers. Parameters are local to the function; the same parameter names can be used in other
functions. Parameters can take the place only of constants; they cannot be used instead of table names, column
names, or the names of other database objects.
NOTE
ANSI_WARNINGS is not honored when passing parameters in a stored procedure, user-defined function, or when declaring
and setting variables in a batch statement. For example, if a variable is defined as char(3), and then set to a value larger than
three characters, the data is truncated to the defined size and the INSERT or UPDATE statement succeeds.
[ type_schema_name. ] parameter_data_type
Is the parameter data type and optionally, the schema to which it belongs. For Transact-SQL functions, all data
types, including CLR user-defined types, are allowed except the timestamp data type. For CLR functions, all data
types, including CLR user-defined types, are allowed except text, ntext, image, and timestamp data types. The
nonscalar types cursor and table cannot be specified as a parameter data type in either Transact-SQL or CLR
functions.
If type_schema_name is not specified, the Database Engine looks for the parameter_data_type in the
following order:
The schema that contains the names of SQL Server system data types.
The default schema of the current user in the current database.
The dbo schema in the current database.
[ =default ]
Is a default value for the parameter. If a default value is defined, the function can be executed without
specifying a value for that parameter.
NOTE
Default parameter values can be specified for CLR functions except for varchar(max) and varbinary(max) data types.
When a parameter of the function has a default value, the keyword DEFAULT must be specified when calling the
function to retrieve the default value. This behavior is different from using parameters with default values in stored
procedures in which omitting the parameter also implies the default value.
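For illustration, the difference can be sketched as follows (the function and procedure names are hypothetical):

```sql
-- Function call: the DEFAULT keyword must be written explicitly to use the default value
SELECT dbo.ufn_GetDiscount(DEFAULT);

-- Stored procedure call: omitting the parameter implies its default value
EXEC dbo.usp_GetDiscount;
```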
return_data_type
Is the return value of a scalar user-defined function. For Transact-SQL functions, all data types, including CLR user-
defined types, are allowed except the timestamp data type. For CLR functions, all data types, including CLR user-
defined types, are allowed except text, ntext, image, and timestamp data types. The nonscalar types cursor and
table cannot be specified as a return data type in either Transact-SQL or CLR functions.
function_body
Specifies that a series of Transact-SQL statements, which together do not produce a side effect such as modifying a
table, define the value of the function. function_body is used only in scalar functions and multistatement table-
valued functions.
In scalar functions, function_body is a series of Transact-SQL statements that together evaluate to a scalar value.
In multistatement table-valued functions, function_body is a series of Transact-SQL statements that populate a
TABLE return variable.
scalar_expression
Specifies that the scalar function returns a scalar value.
TABLE
Specifies that the return value of the table-valued function is a table. Only constants and @local_variables can be
passed to table-valued functions.
In inline table-valued functions, the TABLE return value is defined through a single SELECT statement. Inline
functions do not have associated return variables.
In multistatement table-valued functions, @return_variable is a TABLE variable used to store and accumulate the
rows that should be returned as the value of the function. @return_variable can be specified only for Transact-SQL
functions and not for CLR functions.
select-stmt
Is the single SELECT statement that defines the return value of an inline table-valued function.
EXTERNAL NAME <method_specifier> assembly_name.class_name.method_name
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the method of an assembly to bind with the function. assembly_name must match an existing assembly
in SQL Server in the current database with visibility on. class_name must be a valid SQL Server identifier and
must exist as a class in the assembly. If the class has a namespace-qualified name that uses a period (.) to separate
namespace parts, the class name must be delimited by using brackets ([]) or quotation marks (""). method_name
must be a valid SQL Server identifier and must exist as a static method in the specified class.
NOTE
By default, SQL Server cannot execute CLR code. You can create, modify, and drop database objects that reference common
language runtime modules; however, you cannot execute these references in SQL Server until you enable the clr enabled
option. To enable the option, use sp_configure.
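A minimal sketch of enabling the option with sp_configure:

```sql
-- Enable execution of CLR code on the server
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```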
NOTE
This option is not available in a contained database.
NOTE
EXECUTE AS cannot be specified for inline user-defined functions.
Remarks
ALTER FUNCTION cannot be used to change a scalar-valued function to a table-valued function, or vice versa.
Also, ALTER FUNCTION cannot be used to change an inline function to a multistatement function, or vice versa.
ALTER FUNCTION cannot be used to change a Transact-SQL function to a CLR function or vice-versa.
The following Service Broker statements cannot be included in the definition of a Transact-SQL user-defined
function:
BEGIN DIALOG CONVERSATION
END CONVERSATION
GET CONVERSATION GROUP
MOVE CONVERSATION
RECEIVE
SEND
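As an illustration of the statement itself, the following sketch redefines a hypothetical scalar function (the table and function names are assumed); note that ALTER FUNCTION restates the complete function definition:

```sql
-- Hypothetical schema: dbo.OrderLines(OrderID int, Quantity int, UnitPrice money)
ALTER FUNCTION dbo.ufn_OrderTotal (@OrderID int)
RETURNS money
WITH SCHEMABINDING
AS
BEGIN
    -- Recompute the order total; no side effects are permitted in the body
    RETURN (SELECT SUM(Quantity * UnitPrice)
            FROM dbo.OrderLines
            WHERE OrderID = @OrderID);
END;
```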
Permissions
Requires ALTER permission on the function or on the schema. If the function specifies a user-defined type,
requires EXECUTE permission on the type.
See Also
CREATE FUNCTION (Transact-SQL )
DROP FUNCTION (Transact-SQL )
Make Schema Changes on Publication Databases
EVENTDATA (Transact-SQL )
ALTER INDEX (Transact-SQL)
5/16/2018 • 46 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies an existing table or view index (relational or XML ) by disabling, rebuilding, or reorganizing the index; or
by setting options on the index.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}
<single_partition_rebuild_index_option> ::=
{
SORT_IN_TEMPDB = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| RESUMABLE = { ON | OFF }
| MAX_DURATION = <time> [ MINUTES ]
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
| ONLINE = { ON [ ( <low_priority_lock_wait> ) ] | OFF }
}
<reorganize_option>::=
{
LOB_COMPACTION = { ON | OFF }
| COMPRESS_ALL_ROW_GROUPS = { ON | OFF}
}
<set_index_option>::=
{
ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| COMPRESSION_DELAY = { 0 | delay [ MINUTES ] }
}
<resumable_index_option> ::=
{
MAXDOP = max_degree_of_parallelism
| MAX_DURATION =<time> [MINUTES]
| <low_priority_lock_wait>
}
<low_priority_lock_wait>::=
{
WAIT_AT_LOW_PRIORITY ( MAX_DURATION = <time> [ MINUTES ] ,
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS } )
}
Arguments
index_name
Is the name of the index. Index names must be unique within a table or view but do not have to be unique within
a database. Index names must follow the rules of identifiers.
ALL
Specifies all indexes associated with the table or view regardless of the index type. Specifying ALL causes the
statement to fail if one or more indexes are in an offline or read-only filegroup or the specified operation is not
allowed on one or more index types. The following table lists the index operations and disallowed index types.
USING THE KEYWORD ALL WITH THIS OPERATION | FAILS IF THE TABLE HAS ONE OR MORE
REBUILD WITH ( ONLINE = ON ) | Spatial index
REBUILD PARTITION = partition_number | Nonpartitioned index, XML index, spatial index, or disabled index
REORGANIZE PARTITION = partition_number | Nonpartitioned index, XML index, spatial index, or disabled index
REORGANIZE WITH ( LOB_COMPACTION = ON ) | Spatial index
SET with one or more options | Spatial index
WARNING
For more detailed information about index operations that can be performed online, see Guidelines for Online Index
Operations.
If ALL is specified with PARTITION = partition_number, all indexes must be aligned. This means that they are
partitioned based on equivalent partition functions. Using ALL with PARTITION causes all index partitions with
the same partition_number to be rebuilt or reorganized. For more information about partitioned indexes, see
Partitioned Tables and Indexes.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table or view belongs.
table_or_view_name
Is the name of the table or view associated with the index. To display a report of the indexes on an object, use the
sys.indexes catalog view.
SQL Database supports the three-part name format database_name.[schema_name].table_or_view_name when
the database_name is the current database or the database_name is tempdb and the table_or_view_name starts
with #.
REBUILD [ WITH (<rebuild_index_option> [ ,... n]) ]
Specifies the index will be rebuilt using the same columns, index type, uniqueness attribute, and sort order. This
clause is equivalent to DBCC DBREINDEX. REBUILD enables a disabled index. Rebuilding a clustered index does
not rebuild associated nonclustered indexes unless the keyword ALL is specified. If index options are not
specified, the existing index option values stored in sys.indexes are applied. For any index option whose value is
not stored in sys.indexes, the default indicated in the argument definition of the option applies.
If ALL is specified and the underlying table is a heap, the rebuild operation has no effect on the table. Any
nonclustered indexes associated with the table are rebuilt.
The rebuild operation can be minimally logged if the database recovery model is set to either bulk-logged or
simple.
NOTE
When you rebuild a primary XML index, the underlying user table is unavailable for the duration of the index operation.
Applies to: SQL Server (Starting with SQL Server 2012 (11.x)) and SQL Database.
For columnstore indexes, the rebuild operation:
1. Does not use the sort order.
2. Acquires an exclusive lock on the table or partition while the rebuild occurs. The data is “offline” and
unavailable during the rebuild, even when using NOLOCK, RCSI, or SI.
3. Re-compresses all data into the columnstore. Two copies of the columnstore index exist while the rebuild
is taking place. When the rebuild is finished, SQL Server deletes the original columnstore index.
For more information about rebuilding columnstore indexes, see Columnstore indexes - defragmentation
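For example (the index and table names are hypothetical), a clustered columnstore index can be rebuilt in full or one partition at a time:

```sql
ALTER INDEX cci_FactSales ON dbo.FactSales REBUILD;                -- whole index
ALTER INDEX cci_FactSales ON dbo.FactSales REBUILD PARTITION = 2;  -- single partition
```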
PARTITION
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies that only one partition of an index will be rebuilt or reorganized. PARTITION cannot be specified if
index_name is not a partitioned index.
PARTITION = ALL rebuilds all partitions.
WARNING
Creating and rebuilding nonaligned indexes on a table with more than 1,000 partitions is possible, but is not supported.
Doing so may cause degraded performance or excessive memory consumption during these operations. We recommend
using only aligned indexes when the number of partitions exceed 1,000.
partition_number
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Is the partition number of a partitioned index that is to be rebuilt or reorganized. partition_number is a constant
expression that can reference variables. These include user-defined type variables or functions and user-defined
functions, but cannot reference a Transact-SQL statement. partition_number must exist or the statement fails.
WITH (<single_partition_rebuild_index_option>)
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
SORT_IN_TEMPDB, MAXDOP, and DATA_COMPRESSION are the options that can be specified when you
rebuild a single partition (PARTITION = n). XML indexes cannot be specified in a single partition rebuild
operation.
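A sketch of a single-partition rebuild using these options (all object names are hypothetical):

```sql
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 5
WITH (SORT_IN_TEMPDB = ON, MAXDOP = 4, DATA_COMPRESSION = PAGE);
```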
DISABLE
Marks the index as disabled and unavailable for use by the Database Engine. Any index can be disabled. The
index definition of a disabled index remains in the system catalog with no underlying index data. Disabling a
clustered index prevents user access to the underlying table data. To enable an index, use ALTER INDEX
REBUILD or CREATE INDEX WITH DROP_EXISTING. For more information, see Disable Indexes and
Constraints and Enable Indexes and Constraints.
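For example (hypothetical names):

```sql
ALTER INDEX IX_Customers_Name ON dbo.Customers DISABLE;
-- The index definition stays in the catalog, but no index data is maintained
ALTER INDEX IX_Customers_Name ON dbo.Customers REBUILD;  -- re-enables the index
```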
REORGANIZE a rowstore index
For rowstore indexes, REORGANIZE specifies to reorganize the index leaf level. The REORGANIZE operation is:
Always performed online. This means long-term blocking table locks are not held and queries or updates to
the underlying table can continue during the ALTER INDEX REORGANIZE transaction.
Not allowed for a disabled index
Not allowed when ALLOW_PAGE_LOCKS is set to OFF
Not rolled back when it is performed within a transaction and the transaction is rolled back.
REORGANIZE WITH ( LOB_COMPACTION = { ON | OFF } )
Applies to rowstore indexes.
LOB_COMPACTION = ON
Specifies to compact all pages that contain data of these large object (LOB ) data types: image, text, ntext,
varchar(max), nvarchar(max), varbinary(max), and xml. Compacting this data can reduce the data size on
disk.
For a clustered index, this compacts all LOB columns that are contained in the table.
For a nonclustered index, this compacts all LOB columns that are nonkey (included) columns in the index.
REORGANIZE ALL performs LOB_COMPACTION on all indexes. For each index, this compacts all LOB
columns in the clustered index, underlying table, or included columns in a nonclustered index.
LOB_COMPACTION = OFF
Pages that contain large object data are not compacted.
OFF has no effect on a heap.
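For example, all indexes on a hypothetical table can be reorganized with LOB compaction in one statement:

```sql
ALTER INDEX ALL ON dbo.Documents REORGANIZE WITH (LOB_COMPACTION = ON);
```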
REORGANIZE a columnstore index
For columnstore indexes, REORGANIZE compresses each CLOSED delta rowgroup into the columnstore
as a compressed rowgroup. The REORGANIZE operation is always performed online. This means long-
term blocking table locks are not held and queries or updates to the underlying table can continue during
the ALTER INDEX REORGANIZE transaction.
REORGANIZE is not required in order to move CLOSED delta rowgroups into compressed rowgroups.
The background tuple-mover (TM ) process wakes up periodically to compress CLOSED delta rowgroups.
We recommend using REORGANIZE when tuple-mover is falling behind. REORGANIZE can compress
rowgroups more aggressively.
To compress all OPEN and CLOSED rowgroups, see the REORGANIZE WITH
(COMPRESS_ALL_ROW_GROUPS ) option in this section.
For columnstore indexes in SQL Server (Starting with 2016) and SQL Database, REORGANIZE performs the
following additional defragmentation optimizations online:
Physically removes rows from a rowgroup when 10% or more of the rows have been logically deleted. The
deleted bytes are reclaimed on the physical media. For example, if a compressed row group of 1 million
rows has 100K rows deleted, SQL Server will remove the deleted rows and recompress the rowgroup
with 900k rows. It saves on the storage by removing deleted rows.
Combines one or more compressed rowgroups to increase rows per rowgroup up to the maximum of
1,048,576 rows. For example, if you bulk import 5 batches of 102,400 rows you will get 5 compressed
rowgroups. If you run REORGANIZE, these rowgroups will get merged into 1 compressed rowgroup of
size 512,000 rows. This assumes there were no dictionary size or memory limitations.
For rowgroups in which 10% or more of the rows have been logically deleted, SQL Server will try to
combine this rowgroup with one or more rowgroups. For example, rowgroup 1 is compressed with
500,000 rows and rowgroup 21 is compressed with the maximum of 1,048,576 rows. Rowgroup 21 has
60% of the rows deleted which leaves 409,830 rows. SQL Server favors combining these two rowgroups
to compress a new rowgroup that has 909,830 rows.
REORGANIZE WITH ( COMPRESS_ALL_ROW_GROUPS = { ON | OFF } )
Applies to: SQL Server (Starting with SQL Server 2016 (13.x)) and SQL Database
COMPRESS_ALL_ROW_GROUPS provides a way to force OPEN or CLOSED delta rowgroups into the
columnstore. With this option, it is not necessary to rebuild the columnstore index to empty the delta rowgroups.
This, combined with the other remove and merge defragmentation features makes it no longer necessary to
rebuild the index in most situations.
ON forces all rowgroups into the columnstore, regardless of size and state (CLOSED or OPEN ).
OFF forces all CLOSED rowgroups into the columnstore.
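For example (hypothetical columnstore index and table names):

```sql
ALTER INDEX cci_FactSales ON dbo.FactSales
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);
```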
SET ( <set_index option> [ ,... n] )
Specifies index options without rebuilding or reorganizing the index. SET cannot be specified for a disabled index.
PAD_INDEX = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies index padding. The default is OFF.
ON
The percentage of free space that is specified by FILLFACTOR is applied to the intermediate-level pages of the
index. If FILLFACTOR is not specified at the same time PAD_INDEX is set to ON, the fill factor value stored in
sys.indexes is used.
OFF or fillfactor is not specified
The intermediate-level pages are filled to near capacity. This leaves sufficient space for at least one row of the
maximum size that the index can have, based on the set of keys on the intermediate pages.
For more information, see CREATE INDEX (Transact-SQL ).
FILLFACTOR = fillfactor
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0. Fill factor
values 0 and 100 are the same in all respects.
An explicit FILLFACTOR setting applies only when the index is first created or rebuilt. The Database Engine does
not dynamically keep the specified percentage of empty space in the pages. For more information, see CREATE
INDEX (Transact-SQL ).
To view the fill factor setting, use sys.indexes.
IMPORTANT
Creating or altering a clustered index with a FILLFACTOR value affects the amount of storage space the data occupies,
because the Database Engine redistributes the data when it creates the clustered index.
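A sketch of setting a fill factor with padding during a rebuild (names are hypothetical):

```sql
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80, PAD_INDEX = ON);
```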
SORT_IN_TEMPDB = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies whether to store the sort results in tempdb. The default is OFF.
ON
The intermediate sort results that are used to build the index are stored in tempdb. If tempdb is on a different
set of disks than the user database, this may reduce the time needed to create an index. However, this increases
the amount of disk space that is used during the index build.
OFF
The intermediate sort results are stored in the same database as the index.
If a sort operation is not required, or if the sort can be performed in memory, the SORT_IN_TEMPDB option is
ignored.
For more information, see SORT_IN_TEMPDB Option For Indexes.
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique index.
The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The default
is OFF.
ON
A warning message will occur when duplicate key values are inserted into a unique index. Only the rows violating
the uniqueness constraint will fail.
OFF
An error message will occur when duplicate key values are inserted into a unique index. The entire INSERT
operation will be rolled back.
IGNORE_DUP_KEY cannot be set to ON for indexes created on a view, non-unique indexes, XML indexes, spatial
indexes, and filtered indexes.
To view IGNORE_DUP_KEY, use sys.indexes.
In backward compatible syntax, WITH IGNORE_DUP_KEY is equivalent to WITH IGNORE_DUP_KEY = ON.
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether distribution statistics are recomputed. The default is OFF.
ON
Out-of-date statistics are not automatically recomputed.
OFF
Automatic statistics updating is enabled.
To restore automatic statistics updating, set the STATISTICS_NORECOMPUTE to OFF, or execute UPDATE
STATISTICS without the NORECOMPUTE clause.
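Either form can be sketched as follows (names are hypothetical):

```sql
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
SET (STATISTICS_NORECOMPUTE = OFF);
-- or restore automatic updating by recomputing statistics without NORECOMPUTE:
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate;
```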
IMPORTANT
Disabling automatic recomputation of distribution statistics may prevent the query optimizer from picking optimal
execution plans for queries that involve the table.
STATISTICS_INCREMENTAL = { ON | OFF }
When ON, the statistics created are per partition statistics. When OFF, the statistics tree is dropped and SQL
Server re-computes the statistics. The default is OFF.
If per partition statistics are not supported the option is ignored and a warning is generated. Incremental stats
are not supported for following statistics types:
Statistics created with indexes that are not partition-aligned with the base table.
Statistics created on Always On readable secondary databases.
Statistics created on read-only databases.
Statistics created on filtered indexes.
Statistics created on views.
Statistics created on internal tables.
Statistics created with spatial indexes or XML indexes.
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
ONLINE = { ON | OFF } <as applies to rebuild_index_option>
Specifies whether underlying tables and associated indexes are available for queries and data modification during
the index operation. The default is OFF.
For an XML index or spatial index, only ONLINE = OFF is supported, and if ONLINE is set to ON an error is
raised.
NOTE
Online index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016 (13.x) and Editions and Supported
Features for SQL Server 2017.
ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS ) lock is held on the source table. This allows queries or updates to the
underlying table and indexes to continue. At the start of the operation, a Shared (S ) lock is very briefly held on
the source object. At the end of the operation, an S lock is very briefly held on the source if a nonclustered index
is being created, or an SCH-M (Schema Modification) lock is acquired when a clustered index is created or
dropped online, or when a clustered or nonclustered index is being rebuilt. ONLINE cannot be set to ON when
an index is being created on a local temporary table.
OFF
Table locks are applied for the duration of the index operation. An offline index operation that creates, rebuilds, or
drops a clustered, spatial, or XML index, or rebuilds or drops a nonclustered index, acquires a Schema
modification (Sch-M ) lock on the table. This prevents all user access to the underlying table for the duration of
the operation. An offline index operation that creates a nonclustered index acquires a Shared (S ) lock on the table.
This prevents updates to the underlying table but allows read operations, such as SELECT statements.
For more information, see How Online Index Operations Work.
Indexes, including indexes on global temp tables, can be rebuilt online with the following exceptions:
XML indexes
Indexes on local temp tables
A subset of a partitioned index (An entire partitioned index can be rebuilt online.)
SQL Database prior to V12, and SQL Server prior to SQL Server 2012 (11.x), do not permit the ONLINE
option for clustered index build or rebuild operations when the base table contains varchar(max) or
varbinary(max) columns.
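A minimal online rebuild sketch (hypothetical names):

```sql
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON);
```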
RESUMABLE = { ON | OFF}
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Specifies whether an online index operation is resumable.
ON Index operation is resumable.
OFF Index operation is not resumable.
MAX_DURATION = time [ MINUTES ] used with RESUMABLE = ON (requires ONLINE = ON).
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Indicates time (an integer value specified in minutes) that a resumable online index operation is executed before
being paused.
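For example, a resumable online rebuild that pauses automatically after an hour can be sketched as (hypothetical names):

```sql
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);
```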
ALLOW_ROW_LOCKS = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when you access the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
NOTE
An index cannot be reorganized when ALLOW_PAGE_LOCKS is set to OFF.
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Overrides the max degree of parallelism configuration option for the duration of the index operation. For
more information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to
limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
IMPORTANT
Although the MAXDOP option is syntactically supported for all XML indexes, for a spatial index or a primary XML index,
ALTER INDEX currently uses only a single processor.
NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016 (13.x).
For example, different data compression settings can be applied to individual partitions of a partitioned index in a single statement:
REBUILD WITH
(
DATA_COMPRESSION = NONE ON PARTITIONS (1),
DATA_COMPRESSION = ROW ON PARTITIONS (2, 4, 6 TO 8),
DATA_COMPRESSION = PAGE ON PARTITIONS (3, 5)
);
NOTE
Online index rebuild can set the low_priority_lock_wait options described later in this section.
OFF
Table locks are applied for the duration of the index operation. This prevents all user access to the underlying
table for the duration of the operation.
WAIT_AT_LOW_PRIORITY used with ONLINE=ON only.
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
An online index rebuild has to wait for blocking operations on this table. WAIT_AT_LOW_PRIORITY indicates
that the online index rebuild operation will wait for low priority locks, allowing other operations to proceed while
the online index build operation is waiting. Omitting the WAIT AT LOW PRIORITY option is equivalent to
WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 minutes, ABORT_AFTER_WAIT = NONE) . For more information, see
WAIT_AT_LOW_PRIORITY.
MAX_DURATION = time [MINUTES ]
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
The wait time (an integer value specified in minutes) that the online index rebuild locks will wait with low priority
when executing the DDL command. If the operation is blocked for the MAX_DURATION time, one of the
ABORT_AFTER_WAIT actions will be executed. MAX_DURATION time is always in minutes, and the word
MINUTES can be omitted.
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS }
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
NONE
Continue waiting for the lock with normal (regular) priority.
SELF
Exit the online index rebuild DDL operation currently being executed without taking any action.
BLOCKERS
Kill all user transactions that block the online index rebuild DDL operation so that the operation can continue. The
BLOCKERS option requires the login to have ALTER ANY CONNECTION permission.
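A sketch combining these options in an online rebuild (hypothetical names):

```sql
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY
    (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)));
```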
RESUME
Applies to: Starting with SQL Server 2017 (14.x)
Resume an index operation that is paused manually or due to a failure.
MAX_DURATION used with RESUMABLE=ON
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
The time (an integer value specified in minutes) the resumable online index operation is executed after being
resumed. Once the time expires, the resumable operation is paused if it is still running.
WAIT_AT_LOW_PRIORITY used with RESUMABLE=ON and ONLINE = ON.
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Resuming an online index rebuild after a pause has to wait for blocking operations on this table.
WAIT_AT_LOW_PRIORITY indicates that the online index rebuild operation will wait for low priority locks,
allowing other operations to proceed while the online index build operation is waiting. Omitting the WAIT AT
LOW PRIORITY option is equivalent to
WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 minutes, ABORT_AFTER_WAIT = NONE) . For more information, see
WAIT_AT_LOW_PRIORITY.
PAUSE
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Pause a resumable online index rebuild operation.
ABORT
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Abort a running or paused index operation that was declared as resumable. You have to explicitly execute an
ABORT command to terminate a resumable index rebuild operation. Failure or pausing a resumable index
operation does not terminate its execution; rather, it leaves the operation in an indefinite pause state.
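The three commands for a resumable operation can be sketched as (hypothetical names):

```sql
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders PAUSE;
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders RESUME;
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders ABORT;
```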
Remarks
ALTER INDEX cannot be used to repartition an index or move it to a different filegroup. This statement cannot be
used to modify the index definition, such as adding or deleting columns or changing the column order. Use
CREATE INDEX with the DROP_EXISTING clause to perform these operations.
When an option is not explicitly specified, the current setting is applied. For example, if a FILLFACTOR setting is
not specified in the REBUILD clause, the fill factor value stored in the system catalog will be used during the
rebuild process. To view the current index option settings, use sys.indexes.
The values for ONLINE , MAXDOP , and SORT_IN_TEMPDB are not stored in the system catalog. Unless specified in the
index statement, the default value for the option is used.
On multiprocessor computers, just like other queries do, ALTER INDEX ... REBUILD automatically uses more
processors to perform the scan and sort operations that are associated with modifying the index. When you run
ALTER INDEX ... REORGANIZE , with or without LOB_COMPACTION , the max degree of parallelism value is a single
threaded operation. For more information, see Configure Parallel Index Operations.
IMPORTANT
An index cannot be reorganized or rebuilt if the filegroup in which it is located is offline or set to read-only. When the
keyword ALL is specified and one or more indexes are in an offline or read-only filegroup, the statement fails.
Rebuilding Indexes
Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by
compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in
contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction.
Foreign key constraints do not have to be dropped in advance. When indexes with 128 extents or more are
rebuilt, the Database Engine defers the actual page deallocations, and their associated locks, until after the
transaction commits.
For more information, see Reorganize and Rebuild Indexes.
NOTE
Rebuilding or reorganizing small indexes often does not reduce fragmentation. The pages of small indexes are sometimes
stored on mixed extents. Mixed extents are shared by up to eight objects, so the fragmentation in a small index might not
be reduced after reorganizing or rebuilding it.
IMPORTANT
When an index is created or rebuilt in SQL Server, statistics are created or updated by scanning all the rows in the table.
However, starting with SQL Server 2012 (11.x), statistics are not created by scanning all the rows in the table when a
partitioned index is created or rebuilt. Instead, the query optimizer uses the default sampling algorithm to generate these
statistics. To obtain statistics on partitioned indexes by scanning all the rows in the table, use CREATE STATISTICS or
UPDATE STATISTICS with the FULLSCAN clause.
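For example (hypothetical names):

```sql
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate WITH FULLSCAN;
```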
In earlier versions of SQL Server, you could sometimes rebuild a nonclustered index to correct inconsistencies
caused by hardware failures.
In SQL Server 2008 and later, you may still be able to repair such inconsistencies between the index and the
clustered index by rebuilding a nonclustered index offline. However, you cannot repair nonclustered index
inconsistencies by rebuilding the index online, because the online rebuild mechanism will use the existing
nonclustered index as the basis for the rebuild and thus persist the inconsistency. Rebuilding the index offline can
sometimes force a scan of the clustered index (or heap) and so remove the inconsistency. To ensure a rebuild
from the clustered index, drop and re-create the nonclustered index. As with earlier versions, we recommend
recovering from inconsistencies by restoring the affected data from a backup; however, you may be able to repair
the index inconsistencies by rebuilding the nonclustered index offline. For more information, see DBCC
CHECKDB (Transact-SQL ).
To rebuild a clustered columnstore index, SQL Server:
1. Acquires an exclusive lock on the table or partition while the rebuild occurs. The data is “offline” and
unavailable during the rebuild.
2. Defragments the columnstore by physically deleting rows that have been logically deleted from the table;
the deleted bytes are reclaimed on the physical media.
3. Reads all data from the original columnstore index, including the deltastore. It combines the data into new
rowgroups, and compresses the rowgroups into the columnstore.
4. Requires space on the physical media to store two copies of the columnstore index while the rebuild is
taking place. When the rebuild is finished, SQL Server deletes the original clustered columnstore index.
Reorganizing Indexes
Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and nonclustered
indexes on tables and views by physically reordering the leaf-level pages to match the logical, left to right, order
of the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the existing fill factor
value. To view the fill factor setting, use sys.indexes.
When ALL is specified, relational indexes, both clustered and nonclustered, and XML indexes on the table are
reorganized. Some restrictions apply when specifying ALL; refer to the definition of ALL in the Arguments
section of this article.
For more information, see Reorganize and Rebuild Indexes.
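As a sketch, reorganizing one index and then all indexes on a table looks like this (illustrative names):

```sql
-- Defragment the leaf level of a single index using minimal system resources.
ALTER INDEX IX_Employee_LastName ON dbo.Employee REORGANIZE;

-- Reorganize all relational and XML indexes on the table.
ALTER INDEX ALL ON dbo.Employee REORGANIZE;
```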
IMPORTANT
When an index is reorganized in SQL Server, statistics are not updated.
Disabling Indexes
Disabling an index prevents user access to the index, and for clustered indexes, to the underlying table data. The
index definition remains in the system catalog. Disabling a nonclustered index or clustered index on a view
physically deletes the index data. Disabling a clustered index prevents access to the data, but the data remains
unmaintained in the B-tree until the index is dropped or rebuilt. To view the status of an enabled or disabled
index, query the is_disabled column in the sys.indexes catalog view.
If a table is in a transactional replication publication, you cannot disable any indexes that are associated with
primary key columns. These indexes are required by replication. To disable an index, you must first drop the table
from the publication. For more information, see Publish Data and Database Objects.
Use the ALTER INDEX REBUILD statement or the CREATE INDEX WITH DROP_EXISTING statement to enable
the index. Rebuilding a disabled clustered index cannot be performed with the ONLINE option set to ON. For
more information, see Disable Indexes and Constraints.
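A minimal sketch of disabling and then re-enabling an index, using illustrative names:

```sql
-- Disable the index; its definition stays in the catalog but the index data is not maintained.
ALTER INDEX IX_Employee_LastName ON dbo.Employee DISABLE;

-- Re-enable the index by rebuilding it.
ALTER INDEX IX_Employee_LastName ON dbo.Employee REBUILD;
```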
Setting Options
You can set the options ALLOW_ROW_LOCKS , ALLOW_PAGE_LOCKS , IGNORE_DUP_KEY and STATISTICS_NORECOMPUTE for a
specified index without rebuilding or reorganizing that index. The modified values are immediately applied to the
index. To view these settings, use sys.indexes. For more information, see Set Index Options.
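For example, options can be changed in place without a rebuild or reorganize (illustrative names):

```sql
-- The modified option values take effect immediately on the index.
ALTER INDEX IX_Employee_LastName ON dbo.Employee
SET (ALLOW_PAGE_LOCKS = OFF, STATISTICS_NORECOMPUTE = ON);
```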
Row and Page Locks Options
When ALLOW_ROW_LOCKS = ON and ALLOW_PAGE_LOCKS = ON , row-level, page-level, and table-level locks are allowed
when you access the index. The Database Engine chooses the appropriate lock and can escalate the lock from a
row or page lock to a table lock.
When ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF , only a table-level lock is allowed when you access the
index.
If ALL is specified when the row or page lock options are set, the settings are applied to all indexes. When the
underlying table is a heap, the settings are applied in the following ways:
ALLOW_PAGE_LOCKS = OFF applies fully to the nonclustered indexes: no page locks are allowed on the
nonclustered indexes. On the heap itself, only the shared (S), update (U), and exclusive (X) page locks are
disallowed; the Database Engine can still acquire an intent page lock (IS, IU, or IX) for internal purposes.
NOTE
The DDL command runs until it completes, pauses, or fails. If the command pauses, an error is issued indicating that
the operation was paused and that the index creation did not complete. More information about the current index
status can be obtained from sys.index_resumable_operations. As before, if the operation fails, an error is issued as well.
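For example, the status of a paused or running resumable operation can be checked like this:

```sql
-- Inspect resumable index operations that are in progress or paused.
SELECT name, state_desc, percent_complete, start_time
FROM sys.index_resumable_operations;
```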
Data Compression
For more information about data compression, see Data Compression.
To evaluate how changing PAGE and ROW compression will affect a table, an index, or a partition, use the
sp_estimate_data_compression_savings stored procedure.
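As a sketch, estimating the effect of PAGE compression on a clustered index (index_id = 1) of a hypothetical table might look like this:

```sql
-- Table name is illustrative; index_id 1 is the clustered index,
-- NULL for @partition_number evaluates all partitions.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'FactSales',
    @index_id = 1,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```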
The following restrictions apply to partitioned indexes:
When you use ALTER INDEX ALL ... , you cannot change the compression setting of a single partition if the
table has nonaligned indexes.
The ALTER INDEX <index> ... REBUILD PARTITION ... syntax rebuilds the specified partition of the index.
The ALTER INDEX <index> ... REBUILD WITH ... syntax rebuilds all partitions of the index.
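For example, a single partition can be rebuilt with a different compression setting (illustrative names):

```sql
-- Rebuild only partition 3 of the index, applying page compression to it.
ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales
REBUILD PARTITION = 3 WITH (DATA_COMPRESSION = PAGE);
```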
Statistics
When you execute ALTER INDEX ALL … on a table, only the statistics associated with indexes are updated.
Automatic or manual statistics created on the table (instead of an index) are not updated.
Permissions
To execute ALTER INDEX, at a minimum, ALTER permission on the table or view is required.
Version Notes
SQL Database does not use filegroup and filestream options.
Columnstore indexes are not available prior to SQL Server 2012 (11.x).
Resumable index operations are available starting with SQL Server 2017 (14.x) and Azure SQL Database.
-- The DECLARE, INSERT, loop increment, and COMMIT below complete this fragment;
-- the cci_target table and its four columns are assumed from the surrounding example.
DECLARE @loop int, @AccountKey int, @AccountDescription varchar(50), @AccountType varchar(50), @AccountCode int;
SELECT @loop = 0
BEGIN TRAN
WHILE (@loop < 300000)
BEGIN
SELECT @AccountKey = CAST (RAND()*10000000 as int);
SELECT @AccountDescription = 'accountdesc ' + CONVERT(varchar(20), @AccountKey);
SELECT @AccountType = 'AccountType ' + CONVERT(varchar(20), @AccountKey);
SELECT @AccountCode = CAST (RAND()*10000000 as int);
INSERT INTO cci_target VALUES (@AccountKey, @AccountDescription, @AccountType, @AccountCode);
SELECT @loop = @loop + 1;
END;
COMMIT;
Use the TABLOCK option to insert rows in parallel. Starting with SQL Server 2016 (13.x), the INSERT INTO
operation can run in parallel when TABLOCK is used.
Run this command to see the OPEN delta rowgroups. The number of rowgroups depends on the degree of
parallelism.
SELECT *
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = object_id('cci_target');
Run this command to force all CLOSED and OPEN rowgroups into the columnstore.
ALTER INDEX idxcci_cci_target ON cci_target REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);
Run this command again and you will see that smaller rowgroups are merged into one compressed rowgroup.
-- Uses AdventureWorksDW
-- REORGANIZE all partitions
ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REORGANIZE;
C. Compress all OPEN AND CLOSED delta rowgroups into the columnstore
Applies to: SQL Server (Starting with SQL Server 2016 (13.x)) and SQL Database
The command REORGANIZE WITH ( COMPRESS_ALL_ROW_GROUPS = ON ) compresses each OPEN and
CLOSED delta rowgroup into the columnstore as a compressed rowgroup. This empties the deltastore and forces
all rows to get compressed into the columnstore. This is useful especially after performing many insert
operations since these operations store the rows in one or more delta rowgroups.
REORGANIZE combines rowgroups to fill rowgroups up to a maximum number of rows <= 1,048,576.
Therefore, when you compress all OPEN and CLOSED rowgroups you won't end up with lots of compressed
rowgroups that only have a few rows in them. You want rowgroups to be as full as possible to reduce the
compressed size and improve query performance.
-- Uses AdventureWorksDW2016
-- Move all OPEN and CLOSED delta rowgroups into the columnstore.
ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);
-- For a specific partition, move all OPEN AND CLOSED delta rowgroups into the columnstore
ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REORGANIZE PARTITION = 0 WITH
(COMPRESS_ALL_ROW_GROUPS = ON);
-- Uses AdventureWorks
-- Defragment by physically removing rows that have been logically deleted from the table, and merging rowgroups.
ALTER INDEX cci_FactInternetSales2 ON FactInternetSales2 REORGANIZE;
TIP
Starting with SQL Server 2016 (13.x) and in Azure SQL Database, we recommend using ALTER INDEX REORGANIZE instead
of ALTER INDEX REBUILD.
NOTE
In SQL Server 2012 (11.x) and SQL Server 2014 (12.x), REORGANIZE is only used to compress CLOSED rowgroups into the
columnstore. The only way to perform defragmentation operations and to force all delta rowgroups into the columnstore is
to rebuild the index.
This example shows how to rebuild a clustered columnstore index and force all delta rowgroups into the
columnstore. This first step prepares a table FactInternetSales2 with a clustered columnstore index and inserts
data from the first four columns.
-- Uses AdventureWorksDW
The results show there is one OPEN rowgroup, which means SQL Server will wait for more rows to be added
before it closes the rowgroup and moves the data to the columnstore. This next statement rebuilds the clustered
columnstore index, which forces all rows into the columnstore.
The following example adds the ONLINE option including the low priority lock option, and adds the row
compression option.
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
E. Disabling an index
The following example disables a nonclustered index on the Employee table in the AdventureWorks2012
database.
F. Disabling constraints
The following example disables a PRIMARY KEY constraint by disabling the PRIMARY KEY index in the
AdventureWorks2012 database. The FOREIGN KEY constraint on the underlying table is automatically disabled
and a warning message is displayed.
G. Enabling constraints
The following example enables the PRIMARY KEY and FOREIGN KEY constraints that were disabled in Example
F.
The PRIMARY KEY constraint is enabled by rebuilding the PRIMARY KEY index.
2. Executing the same command again (see above) after an index operation was paused automatically resumes
the index rebuild operation.
3. Execute an online index rebuild as a resumable operation with MAX_DURATION set to 240 minutes.
5. Resume an online index rebuild that was executed as a resumable operation, specifying a new MAXDOP
value of 4.
6. Resume an online index rebuild operation that was executed as resumable. Set MAXDOP to 2, set the
maximum execution time for the resumable rebuild to 240 minutes, and, in case the index is blocked on a
lock, wait 10 minutes and after that kill all blockers.
See Also
CREATE INDEX (Transact-SQL )
CREATE SPATIAL INDEX (Transact-SQL )
CREATE XML INDEX (Transact-SQL )
DROP INDEX (Transact-SQL )
Disable Indexes and Constraints
XML Indexes (SQL Server)
Perform Index Operations Online
Reorganize and Rebuild Indexes
sys.dm_db_index_physical_stats (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER INDEX (Selective XML Indexes)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies an existing selective XML index. The ALTER INDEX statement changes one or more of the following
items:
The list of indexed paths (FOR clause).
The list of namespaces (WITH XMLNAMESPACES clause).
The index options (WITH clause).
You cannot alter secondary selective XML indexes. For more information, see Create, Alter, and Drop
Secondary Selective XML Indexes.
Transact-SQL Syntax Conventions
Syntax
ALTER INDEX index_name
ON <table_object>
[WITH XMLNAMESPACES ( <xmlnamespace_list> )]
FOR ( <promoted_node_path_action_list> )
[WITH ( <index_options> )]
<table_object> ::=
{ [database_name. [schema_name ] . | schema_name. ] table_name }
<promoted_node_path_action_list> ::=
<promoted_node_path_action_item> [, <promoted_node_path_action_list>]
<promoted_node_path_action_item>::=
<add_node_path_item_action> | <remove_node_path_item_action>
<add_node_path_item_action> ::=
ADD <path_name> = <promoted_node_path_item>
<promoted_node_path_item>::=
<xquery_node_path_item> | <sql_values_node_path_item>
<path_name_or_typed_node_path>::=
<path_name> | <typed_node_path>
<typed_node_path> ::=
<node_path> [[AS XQUERY <xsd_type_ext>] | [AS SQL <sql_type>]]
<xquery_node_path_item> ::=
<node_path> [AS XQUERY <xsd_type_or_node_hint>] [SINGLETON]
<xsd_type_or_node_hint> ::=
[<xsd_type>] [MAXLENGTH(x)] | 'node()'
<sql_values_node_path_item> ::=
<node_path> AS SQL <sql_type> [SINGLETON]
<node_path> ::=
character_string_literal
<xsd_type_ext> ::=
character_string_literal
<sql_type> ::=
identifier
<path_name> ::=
identifier
<xmlnamespace_list> ::=
<xmlnamespace_item> [, <xmlnamespace_list>]
<xmlnamespace_item> ::=
<xmlnamespace_uri> AS <xmlnamespace_prefix>
<index_options> ::=
(
| PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
)
Arguments
index_name
Is the name of the existing index to alter.
<table_object>
Is the table that contains the XML column to index. Use one of the following formats:
database_name.schema_name.table_name
database_name..table_name
schema_name.table_name
table_name
Remarks
IMPORTANT
When you run an ALTER INDEX statement, the selective XML index is always rebuilt. Be sure to consider the impact of this
process on server resources.
Security
Permissions
ALTER permission on the table or view is required to run ALTER INDEX.
Examples
The following example shows an ALTER INDEX statement. This statement adds the path '/a/b/m' to the XQuery
part of the index and deletes the path '/a/b/e' from the SQL part of the index created in the example in the topic
CREATE SELECTIVE XML INDEX (Transact-SQL ). The path to delete is identified by the name that was given to it
when it was created.
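A hedged sketch of such a statement follows; the index, table, and path names are placeholders standing in for the ones defined in that CREATE example:

```sql
-- Assumed names: sxi_index and Tbl come from the CREATE SELECTIVE XML INDEX topic.
ALTER INDEX sxi_index ON Tbl
FOR
(
    ADD pathm = '/a/b/m' AS XQUERY 'node()',  -- add a path to the XQuery part of the index
    REMOVE pathabe                            -- remove a path by the name given at creation
);
```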
The following example shows an ALTER INDEX statement that specifies index options. Index options are permitted
because the statement does not use a FOR clause to add or remove paths.
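A minimal sketch, again with placeholder names:

```sql
-- WITH options are permitted here because no FOR clause adds or removes paths.
ALTER INDEX sxi_index ON Tbl
WITH (FILLFACTOR = 75, PAD_INDEX = ON);
```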
See Also
Selective XML Indexes (SXI)
Create, Alter, and Drop Selective XML Indexes
Specify Paths and Optimization Hints for Selective XML Indexes
ALTER LOGIN (Transact-SQL)
5/3/2018 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a SQL Server login account.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
<status_option> ::=
ENABLE | DISABLE
<set_option> ::=
PASSWORD = 'password' | hashed_password HASHED
[
OLD_PASSWORD = 'oldpassword'
| <password_option> [<password_option> ]
]
| DEFAULT_DATABASE = database
| DEFAULT_LANGUAGE = language
| NAME = login_name
| CHECK_POLICY = { ON | OFF }
| CHECK_EXPIRATION = { ON | OFF }
| CREDENTIAL = credential_name
| NO CREDENTIAL
<password_option> ::=
MUST_CHANGE | UNLOCK
<cryptographic_credentials_option> ::=
ADD CREDENTIAL credential_name
| DROP CREDENTIAL credential_name
-- Syntax for Azure SQL Database and Azure SQL Data Warehouse
<status_option> ::=
ENABLE | DISABLE
<set_option> ::=
PASSWORD ='password'
[
OLD_PASSWORD ='oldpassword'
]
| NAME = login_name
<set_option> ::=
PASSWORD ='password'
[
OLD_PASSWORD ='oldpassword'
| <password_option> [<password_option> ]
]
| NAME = login_name
| CHECK_POLICY = { ON | OFF }
| CHECK_EXPIRATION = { ON | OFF }
<password_option> ::=
MUST_CHANGE | UNLOCK
Arguments
login_name
Specifies the name of the SQL Server login that is being changed. Domain logins must be enclosed in brackets in
the format [domain\user].
ENABLE | DISABLE
Enables or disables this login. Disabling a login does not affect the behavior of logins that are already connected.
(Use the KILL statement to terminate existing connections.) Disabled logins retain their permissions and can
still be impersonated.
PASSWORD ='password'
Applies only to SQL Server logins. Specifies the password for the login that is being changed. Passwords are case-
sensitive.
Continuously active connections to SQL Database require reauthorization (performed by the Database Engine) at
least every 10 hours. The Database Engine attempts reauthorization using the originally submitted password and
no user input is required. For performance reasons, when a password is reset in SQL Database, the connection
will not be re-authenticated, even if the connection is reset due to connection pooling. This is different from the
behavior of on-premises SQL Server. If the password has been changed since the connection was initially
authorized, the connection must be terminated and a new connection made using the new password. A user with
the KILL DATABASE CONNECTION permission can explicitly terminate a connection to SQL Database by using
the KILL command. For more information, see KILL (Transact-SQL ).
PASSWORD =hashed_password
Applies to: SQL Server 2008 through SQL Server 2017.
Applies to the HASHED keyword only. Specifies the hashed value of the password for the login that is being
created.
IMPORTANT
When a login (or a contained database user) connects and is authenticated, the connection caches identity information
about the login. For a Windows Authentication login, this includes information about membership in Windows groups. The
identity of the login remains authenticated as long as the connection is maintained. To force changes in the identity, such as
a password reset or change in Windows group membership, the login must logoff from the authentication authority
(Windows or SQL Server), and log in again. A member of the sysadmin fixed server role or any login with the ALTER ANY
CONNECTION permission can use the KILL command to end a connection and force a login to reconnect. SQL Server
Management Studio can reuse connection information when opening multiple connections to Object Explorer and Query
Editor windows. Close all connections to force reconnection.
HASHED
Applies to: SQL Server 2008 through SQL Server 2017.
Applies to SQL Server logins only. Specifies that the password entered after the PASSWORD argument is already
hashed. If this option is not selected, the password is hashed before being stored in the database. This option
should only be used for login synchronization between two servers. Do not use the HASHED option to routinely
change passwords.
OLD_PASSWORD ='oldpassword'
Applies only to SQL Server logins. The current password of the login to which a new password will be assigned.
Passwords are case-sensitive.
MUST_CHANGE
Applies to: SQL Server 2008 through SQL Server 2017, and Parallel Data Warehouse.
Applies only to SQL Server logins. If this option is included, SQL Server will prompt for an updated password the
first time the altered login is used.
DEFAULT_DATABASE =database
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies a default database to be assigned to the login.
DEFAULT_LANGUAGE = language
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies a default language to be assigned to the login. The default language for all SQL Database logins is
English and cannot be changed. The default language of the sa login on SQL Server on Linux is English, but it
can be changed.
NAME = login_name
The new name of the login that is being renamed. If this is a Windows login, the SID of the Windows principal
corresponding to the new name must match the SID associated with the login in SQL Server. The new name of a
SQL Server login cannot contain a backslash character (\).
CHECK_EXPIRATION = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, and Parallel Data Warehouse.
Applies only to SQL Server logins. Specifies whether password expiration policy should be enforced on this login.
The default value is OFF.
CHECK_POLICY = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, and Parallel Data Warehouse.
Applies only to SQL Server logins. Specifies that the Windows password policies of the computer on which SQL
Server is running should be enforced on this login. The default value is ON.
CREDENTIAL = credential_name
Applies to: SQL Server 2008 through SQL Server 2017.
The name of a credential to be mapped to a SQL Server login. The credential must already exist in the server. For
more information, see Credentials (Database Engine). A credential cannot be mapped to the sa login.
NO CREDENTIAL
Applies to: SQL Server 2008 through SQL Server 2017.
Removes any existing mapping of the login to a server credential. For more information, see Credentials (Database
Engine).
UNLOCK
Applies to: SQL Server 2008 through SQL Server 2017, and Parallel Data Warehouse.
Applies only to SQL Server logins. Specifies that a login that is locked out should be unlocked.
ADD CREDENTIAL
Applies to: SQL Server 2008 through SQL Server 2017.
Adds an Extensible Key Management (EKM ) provider credential to the login. For more information, see Extensible
Key Management (EKM ).
DROP CREDENTIAL
Applies to: SQL Server 2008 through SQL Server 2017.
Removes an Extensible Key Management (EKM ) provider credential from the login. For more information, see
Extensible Key Management (EKM ).
Remarks
When CHECK_POLICY is set to ON, the HASHED argument cannot be used.
When CHECK_POLICY is changed to ON, the following behavior occurs:
The password history is initialized with the value of the current password hash.
When CHECK_POLICY is changed to OFF, the following behavior occurs:
CHECK_EXPIRATION is also set to OFF.
The password history is cleared.
The value of lockout_time is reset.
If MUST_CHANGE is specified, CHECK_EXPIRATION and CHECK_POLICY must be set to ON. Otherwise, the
statement will fail.
If CHECK_POLICY is set to OFF, CHECK_EXPIRATION cannot be set to ON. An ALTER LOGIN statement that
has this combination of options will fail.
You cannot use ALTER LOGIN with the DISABLE argument to deny access to a Windows group. For example,
ALTER LOGIN [domain\group] DISABLE will return the following error message:
"Msg 15151, Level 16, State 1, Line 1
"Cannot alter the login 'Domain\Group', because it does not exist or you do not have permission."
This is by design.
In SQL Database, login data required to authenticate a connection and server-level firewall rules are temporarily
cached in each database. This cache is periodically refreshed. To force a refresh of the authentication cache and
make sure that a database has the latest version of the logins table, execute DBCC FLUSHAUTHCACHE (Transact-
SQL ).
Permissions
Requires ALTER ANY LOGIN permission.
If the CREDENTIAL option is used, also requires ALTER ANY CREDENTIAL permission.
If the login that is being changed is a member of the sysadmin fixed server role or a grantee of CONTROL
SERVER permission, also requires CONTROL SERVER permission when making the following changes:
Resetting the password without supplying the old password.
Enabling MUST_CHANGE, CHECK_POLICY, or CHECK_EXPIRATION.
Changing the login name.
Enabling or disabling the login.
Mapping the login to a different credential.
A principal can change the password, default language, and default database for its own login.
Examples
A. Enabling a disabled login
The following example enables the login Mary5 .
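The statement is simply:

```sql
ALTER LOGIN Mary5 ENABLE;
```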
F. Unlocking a login
To unlock a SQL Server login, execute the following statement, replacing **** with the desired account password.
GO
To unlock a login without changing the password, turn the check policy off and then on again.
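Both variants can be sketched as follows, reusing the Mary5 login from example A; the '****' password is a placeholder as described above:

```sql
-- Unlock the login while setting its password (replace **** with the desired password).
ALTER LOGIN Mary5 WITH PASSWORD = '****' UNLOCK;
GO
-- Or unlock without changing the password by toggling the check policy.
ALTER LOGIN Mary5 WITH CHECK_POLICY = OFF;
ALTER LOGIN Mary5 WITH CHECK_POLICY = ON;
GO
```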
See Also
Credentials (Database Engine)
CREATE LOGIN (Transact-SQL )
DROP LOGIN (Transact-SQL )
CREATE CREDENTIAL (Transact-SQL )
EVENTDATA (Transact-SQL )
Extensible Key Management (EKM )
ALTER MASTER KEY (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a database master key.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
<alter_option> ::=
<regenerate_option> | <encryption_option>
<regenerate_option> ::=
[ FORCE ] REGENERATE WITH ENCRYPTION BY PASSWORD = 'password'
<encryption_option> ::=
ADD ENCRYPTION BY { SERVICE MASTER KEY | PASSWORD = 'password' }
|
DROP ENCRYPTION BY { SERVICE MASTER KEY | PASSWORD = 'password' }
-- Syntax for Azure SQL Database
<alter_option> ::=
<regenerate_option> | <encryption_option>
<regenerate_option> ::=
[ FORCE ] REGENERATE WITH ENCRYPTION BY PASSWORD = 'password'
<encryption_option> ::=
ADD ENCRYPTION BY { SERVICE MASTER KEY | PASSWORD = 'password' }
|
DROP ENCRYPTION BY { PASSWORD = 'password' }
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
<alter_option> ::=
<regenerate_option> | <encryption_option>
<regenerate_option> ::=
[ FORCE ] REGENERATE WITH ENCRYPTION BY PASSWORD = 'password'
<encryption_option> ::=
ADD ENCRYPTION BY SERVICE MASTER KEY
|
DROP ENCRYPTION BY SERVICE MASTER KEY
Arguments
PASSWORD ='password'
Specifies a password with which to encrypt or decrypt the database master key. password must meet the
Windows password policy requirements of the computer that is running the instance of SQL Server.
Remarks
The REGENERATE option re-creates the database master key and all the keys it protects. The keys are first
decrypted with the old master key, and then encrypted with the new master key. This resource-intensive operation
should be scheduled during a period of low demand, unless the master key has been compromised.
SQL Server 2012 (11.x) uses the AES encryption algorithm to protect the service master key (SMK) and the
database master key (DMK). AES is a stronger encryption algorithm than the 3DES algorithm used in earlier
versions. After upgrading an instance of the Database Engine to SQL Server 2012 (11.x), the SMK and DMK should be
regenerated in order to upgrade the master keys to AES. For more information about regenerating the SMK, see
ALTER SERVICE MASTER KEY (Transact-SQL ).
When the FORCE option is used, key regeneration will continue even if the master key is unavailable or the
server cannot decrypt all the encrypted private keys. If the master key cannot be opened, use the RESTORE
MASTER KEY statement to restore the master key from a backup. Use the FORCE option only if the master key is
irretrievable or if decryption fails. Information that is encrypted only by an irretrievable key will be lost.
The DROP ENCRYPTION BY SERVICE MASTER KEY option removes the encryption of the database master key
by the service master key.
ADD ENCRYPTION BY SERVICE MASTER KEY causes a copy of the master key to be encrypted using the
service master key and stored in both the current database and in master.
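As a sketch, adding and later removing service-master-key protection on the database master key looks like this, run in the context of the database that owns the key:

```sql
-- Encrypt the database master key by the service master key;
-- a copy is stored in the current database and in master.
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;

-- Later, remove that protection; the key must still be protected by at least a password.
ALTER MASTER KEY DROP ENCRYPTION BY SERVICE MASTER KEY;
```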
Permissions
Requires CONTROL permission on the database. If the database master key has been encrypted with a password,
knowledge of that password is also required.
Examples
The following example creates a new database master key for AdventureWorks and reencrypts the keys below it in
the encryption hierarchy.
USE AdventureWorks2012;
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'dsjdkflJ435907NnmM#sX003';
GO
USE master;
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'dsjdkflJ435907NnmM#sX003';
GO
See Also
CREATE MASTER KEY (Transact-SQL )
OPEN MASTER KEY (Transact-SQL )
CLOSE MASTER KEY (Transact-SQL )
BACKUP MASTER KEY (Transact-SQL )
RESTORE MASTER KEY (Transact-SQL )
DROP MASTER KEY (Transact-SQL )
Encryption Hierarchy
CREATE DATABASE (SQL Server Transact-SQL )
Database Detach and Attach (SQL Server)
ALTER MESSAGE TYPE (Transact-SQL)
5/4/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a message type.
Transact-SQL Syntax Conventions
Syntax
ALTER MESSAGE TYPE message_type_name
VALIDATION =
{ NONE
| EMPTY
| WELL_FORMED_XML
| VALID_XML WITH SCHEMA COLLECTION schema_collection_name }
[ ; ]
Arguments
message_type_name
The name of the message type to change. Server, database, and schema names cannot be specified.
VALIDATION
Specifies how Service Broker validates the message body for messages of this type.
NONE
No validation is performed. The message body might contain any data, or might be NULL.
EMPTY
The message body must be NULL.
WELL_FORMED_XML
The message body must contain well-formed XML.
VALID_XML WITH SCHEMA COLLECTION schema_collection_name
The message body must contain XML that complies with a schema in the specified schema collection. The
schema_collection_name must be the name of an existing XML schema collection.
Remarks
Changing the validation of a message type does not affect messages that have already been delivered to a queue.
To change the AUTHORIZATION for a message type, use the ALTER AUTHORIZATION statement.
Permissions
Permission for altering a message type defaults to the owner of the message type, members of the db_ddladmin
or db_owner fixed database roles, and members of the sysadmin fixed server role.
When the ALTER MESSAGE TYPE statement specifies a schema collection, the user executing the statement must
have REFERENCES permission on the schema collection specified.
Examples
The following example changes the message type //Adventure-Works.com/Expenses/SubmitExpense to require that
the message body contain a well-formed XML document.
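A statement along these lines accomplishes that:

```sql
ALTER MESSAGE TYPE [//Adventure-Works.com/Expenses/SubmitExpense]
    VALIDATION = WELL_FORMED_XML;
```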
See Also
ALTER AUTHORIZATION (Transact-SQL )
CREATE MESSAGE TYPE (Transact-SQL )
DROP MESSAGE TYPE (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER PARTITION FUNCTION (Transact-SQL)
5/3/2018 • 6 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a partition function by splitting or merging its boundary values. By executing ALTER PARTITION
FUNCTION, one partition of any table or index that uses the partition function can be split into two partitions, or
two partitions can be merged into one partition.
Caution
More than one table or index can use the same partition function. ALTER PARTITION FUNCTION affects all of
them in a single transaction.
Transact-SQL Syntax Conventions
Syntax
ALTER PARTITION FUNCTION partition_function_name()
{
SPLIT RANGE ( boundary_value )
| MERGE RANGE ( boundary_value )
} [ ; ]
Arguments
partition_function_name
Is the name of the partition function to be modified.
SPLIT RANGE ( boundary_value )
Adds one partition to the partition function. boundary_value determines the range of the new partition, and must
differ from the existing boundary ranges of the partition function. Based on boundary_value, the Database Engine
splits one of the existing ranges into two. Of these two, the one where the new boundary_value resides is
considered the new partition.
A filegroup must exist online and be marked by the partition scheme that uses the partition function as NEXT
USED to hold the new partition. Filegroups are allocated to partitions in a CREATE PARTITION SCHEME
statement. If a CREATE PARTITION SCHEME statement allocates more filegroups than necessary (fewer
partitions are created in the CREATE PARTITION FUNCTION statement than filegroups to hold them), then there
are unassigned filegroups, and one of them is marked NEXT USED by the partition scheme. This filegroup will
hold the new partition. If there are no filegroups marked NEXT USED by the partition scheme, you must use
ALTER PARTITION SCHEME to either add a filegroup, or designate an existing one, to hold the new partition. A
filegroup that already holds partitions can be designated to hold additional partitions. Because a partition function
can participate in more than one partition scheme, all the partition schemes that use the partition function to
which you are adding partitions must have a NEXT USED filegroup. Otherwise, ALTER PARTITION FUNCTION
fails with an error that displays the partition scheme or schemes that lack a NEXT USED filegroup.
If you create all the partitions in the same filegroup, that filegroup is initially assigned to be the NEXT USED
filegroup automatically. However, after a split operation is performed, there is no longer a designated NEXT USED
filegroup. You must explicitly assign the filegroup to be the NEXT USED filegroup by using ALTER PARTITION
SCHEME, or a subsequent split operation will fail.
NOTE
Limitations with columnstore index: Only empty partitions can be split when a columnstore index exists on the table. You
will need to drop or disable the columnstore index before performing this operation.
NOTE
Limitations with columnstore index: Two nonempty partitions containing a columnstore index cannot be merged. You will
need to drop or disable the columnstore index before performing this operation.
Best Practices
Always keep empty partitions at both ends of the partition range to guarantee that the partition split (before
loading new data) and partition merge (after unloading old data) do not incur any data movement. Avoid splitting
or merging populated partitions. This can be extremely inefficient, as this may cause as much as four times more
log generation, and may also cause severe locking.
NOTE
Dropping a partitioned clustered index results in a partitioned heap.
Drop and rebuild an existing partitioned index by using the Transact-SQL CREATE INDEX statement with
the DROP EXISTING = ON clause.
Perform a sequence of ALTER PARTITION FUNCTION statements.
All filegroups that are affected by ALTER PARTITION FUNCTION must be online.
ALTER PARTITION FUNCTION fails when there is a disabled clustered index on any tables that use the
partition function.
SQL Server does not provide replication support for modifying a partition function. Changes to a partition
function in the publication database must be manually applied in the subscription database.
Permissions
Any one of the following permissions can be used to execute ALTER PARTITION FUNCTION:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition function was created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition function was created.
Examples
A. Splitting a partition of a partitioned table or index into two partitions
The following example creates a partition function to partition a table or index into four partitions.
ALTER PARTITION FUNCTION splits one of the partitions into two to create a total of five partitions.
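The example code is not present here; a sketch consistent with the description (the function name myRangePF1 and the boundary values are illustrative):

```sql
-- Create a RANGE LEFT partition function with three boundary values,
-- producing four partitions: <=1, 2-100, 101-1000, >1000.
CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
-- Split the 101-1000 range at 500, creating a fifth partition.
ALTER PARTITION FUNCTION myRangePF1 ()
SPLIT RANGE (500);
```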
See Also
Partitioned Tables and Indexes
CREATE PARTITION FUNCTION (Transact-SQL)
DROP PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
ALTER PARTITION SCHEME (Transact-SQL)
DROP PARTITION SCHEME (Transact-SQL)
CREATE INDEX (Transact-SQL)
ALTER INDEX (Transact-SQL)
CREATE TABLE (Transact-SQL)
sys.partition_functions (Transact-SQL)
sys.partition_parameters (Transact-SQL)
sys.partition_range_values (Transact-SQL)
sys.partitions (Transact-SQL)
sys.tables (Transact-SQL)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
ALTER PARTITION SCHEME (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds a filegroup to a partition scheme or alters the designation of the NEXT USED filegroup for the partition
scheme.
NOTE
In Azure SQL Database, only primary filegroups are supported.
Syntax
ALTER PARTITION SCHEME partition_scheme_name
NEXT USED [ filegroup_name ] [ ; ]
Arguments
partition_scheme_name
Is the name of the partition scheme to be altered.
filegroup_name
Specifies the filegroup to be marked by the partition scheme as NEXT USED. This means the filegroup will accept
a new partition that is created by using an ALTER PARTITION FUNCTION statement.
In a partition scheme, only one filegroup can be designated NEXT USED. A filegroup that is not empty can be
specified. If filegroup_name is specified and there currently is no filegroup marked NEXT USED, filegroup_name
is marked NEXT USED. If filegroup_name is specified, and a filegroup with the NEXT USED property already
exists, the NEXT USED property transfers from the existing filegroup to filegroup_name.
If filegroup_name is not specified and a filegroup with the NEXT USED property already exists, that filegroup
loses its NEXT USED state so that there are no NEXT USED filegroups in partition_scheme_name.
If filegroup_name is not specified, and there are no filegroups marked NEXT USED, ALTER PARTITION SCHEME
returns a warning.
Remarks
Any filegroup affected by ALTER PARTITION SCHEME must be online.
Permissions
The following permissions can be used to execute ALTER PARTITION SCHEME:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition scheme was created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition scheme was created.
Examples
The following example assumes the partition scheme MyRangePS1 and the filegroup test5fg exist in the current
database.
Filegroup test5fg will receive any additional partition of a partitioned table or index as a result of an ALTER
PARTITION FUNCTION statement.
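A statement along these lines would make that designation, using the names given above:

```sql
-- Mark test5fg as the filegroup that will hold the next
-- partition created by a split of the underlying function.
ALTER PARTITION SCHEME MyRangePS1
NEXT USED test5fg;
```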
See Also
CREATE PARTITION SCHEME (Transact-SQL)
DROP PARTITION SCHEME (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
ALTER PARTITION FUNCTION (Transact-SQL)
DROP PARTITION FUNCTION (Transact-SQL)
CREATE TABLE (Transact-SQL)
CREATE INDEX (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.partition_schemes (Transact-SQL)
sys.data_spaces (Transact-SQL)
sys.destination_data_spaces (Transact-SQL)
sys.partitions (Transact-SQL)
sys.tables (Transact-SQL)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
ALTER PROCEDURE (Transact-SQL)
5/3/2018 • 6 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies a procedure that was previously created by executing the CREATE PROCEDURE statement in
SQL Server.
Transact-SQL Syntax Conventions (Transact-SQL)
Syntax
-- Syntax for SQL Server and Azure SQL Database
<procedure_option> ::=
[ ENCRYPTION ]
[ RECOMPILE ]
[ EXECUTE AS Clause ]
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
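As a reconstructed outline (not authoritative; see CREATE PROCEDURE (Transact-SQL) for the complete grammar), the SQL Server and Azure SQL Database form follows this shape:

```sql
ALTER { PROC | PROCEDURE } [ schema_name. ] procedure_name [ ; number ]
    [ { @parameter [ type_schema_name. ] data_type }
        [ VARYING ] [ = default ] [ OUT | OUTPUT ] [ READONLY ]
    ] [ ,...n ]
[ WITH <procedure_option> [ ,...n ] ]
[ FOR REPLICATION ]
AS { [ BEGIN ] sql_statement [ ; ] [ ...n ] [ END ] }
[ ; ]
```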
Arguments
schema_name
The name of the schema to which the procedure belongs.
procedure_name
The name of the procedure to change. Procedure names must comply with the rules for identifiers.
; number
An existing optional integer that is used to group procedures of the same name so that they can be dropped
together by using one DROP PROCEDURE statement.
NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature.
@ parameter
A parameter in the procedure. Up to 2,100 parameters can be specified.
[ type_schema_name. ] data_type
Is the data type of the parameter and the schema it belongs to.
For information about data type restrictions, see CREATE PROCEDURE (Transact-SQL).
VARYING
Specifies the result set supported as an output parameter. This parameter is constructed dynamically by the stored
procedure and its contents can vary. Applies only to cursor parameters. This option is not valid for CLR
procedures.
default
Is a default value for the parameter.
OUT | OUTPUT
Indicates that the parameter is a return parameter.
READONLY
Indicates that the parameter cannot be updated or modified within the body of the procedure. If the parameter
type is a table-value type, READONLY must be specified.
RECOMPILE
Indicates that the Database Engine does not cache a plan for this procedure and the procedure is recompiled at run
time.
ENCRYPTION
Applies to: SQL Server (SQL Server 2008 through SQL Server 2017) and Azure SQL Database.
Indicates that the Database Engine will convert the original text of the ALTER PROCEDURE statement to an
obfuscated format. The output of the obfuscation is not directly visible in any of the catalog views in SQL Server.
Users that have no access to system tables or database files cannot retrieve the obfuscated text. However, the text
will be available to privileged users that can either access system tables over the DAC port or directly access
database files. Also, users that can attach a debugger to the server process can retrieve the original procedure from
memory at runtime. For more information about accessing system metadata, see Metadata Visibility
Configuration.
Procedures created with this option cannot be published as part of SQL Server replication.
This option cannot be specified for common language runtime (CLR) stored procedures.
NOTE
During an upgrade, the Database Engine uses the obfuscated comments stored in sys.sql_modules to re-create procedures.
EXECUTE AS
Specifies the security context under which to execute the stored procedure after it is accessed.
For more information, see EXECUTE AS Clause (Transact-SQL).
FOR REPLICATION
Specifies that stored procedures that are created for replication cannot be executed on the Subscriber. A stored
procedure created with the FOR REPLICATION option is used as a stored procedure filter and only executed
during replication. Parameters cannot be declared if FOR REPLICATION is specified. This option is not valid for
CLR procedures. The RECOMPILE option is ignored for procedures created with FOR REPLICATION.
NOTE
This option is not available in a contained database.
NOTE
CLR procedures are not supported in a contained database.
General Remarks
Transact-SQL stored procedures cannot be modified to be CLR stored procedures and vice versa.
ALTER PROCEDURE does not change permissions and does not affect any dependent stored procedures or
triggers. However, the current session settings for QUOTED_IDENTIFIER and ANSI_NULLS are included in the
stored procedure when it is modified. If the settings are different from those in effect when the stored procedure was
originally created, the behavior of the stored procedure may change.
If a previous procedure definition was created using WITH ENCRYPTION or WITH RECOMPILE, these options
are enabled only if they are included in ALTER PROCEDURE.
For more information about stored procedures, see CREATE PROCEDURE (Transact-SQL).
Security
Permissions
Requires ALTER permission on the procedure or requires membership in the db_ddladmin fixed database role.
Examples
The following example creates the uspVendorAllInfo stored procedure. This procedure returns the names of all the
vendors that supply Adventure Works Cycles, the products they supply, their credit ratings, and their availability.
After this procedure is created, it is then modified to return a different result set.
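A sketch of how the initial procedure might look (the column aliases and the EXECUTE AS CALLER clause are assumptions inferred from the description and the ALTER example below it):

```sql
USE AdventureWorks2012;
GO
CREATE PROCEDURE Purchasing.uspVendorAllInfo
WITH EXECUTE AS CALLER
AS
    SET NOCOUNT ON;
    -- Return every vendor with the products it supplies,
    -- its credit rating, and its availability.
    SELECT v.Name AS Vendor, p.Name AS 'Product name',
           v.CreditRating AS 'Rating',
           v.ActiveFlag AS Availability
    FROM Purchasing.Vendor AS v
    INNER JOIN Purchasing.ProductVendor AS pv
        ON v.BusinessEntityID = pv.BusinessEntityID
    INNER JOIN Production.Product AS p
        ON pv.ProductID = p.ProductID
    ORDER BY v.Name ASC;
GO
```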
The following example alters the uspVendorAllInfo stored procedure. It removes the EXECUTE AS CALLER clause
and modifies the body of the procedure to return only those vendors that supply the specified product. The LEFT
and CASE functions customize the appearance of the result set.
USE AdventureWorks2012;
GO
ALTER PROCEDURE Purchasing.uspVendorAllInfo
@Product varchar(25)
AS
SET NOCOUNT ON;
SELECT LEFT(v.Name, 25) AS Vendor, LEFT(p.Name, 25) AS 'Product name',
'Rating' = CASE v.CreditRating
WHEN 1 THEN 'Superior'
WHEN 2 THEN 'Excellent'
WHEN 3 THEN 'Above average'
WHEN 4 THEN 'Average'
WHEN 5 THEN 'Below average'
ELSE 'No rating'
END
, Availability = CASE v.ActiveFlag
WHEN 1 THEN 'Yes'
ELSE 'No'
END
FROM Purchasing.Vendor AS v
INNER JOIN Purchasing.ProductVendor AS pv
ON v.BusinessEntityID = pv.BusinessEntityID
INNER JOIN Production.Product AS p
ON pv.ProductID = p.ProductID
WHERE p.Name LIKE @Product
ORDER BY v.Name ASC;
GO
See Also
CREATE PROCEDURE (Transact-SQL)
DROP PROCEDURE (Transact-SQL)
EXECUTE (Transact-SQL)
EXECUTE AS (Transact-SQL)
EVENTDATA (Transact-SQL)
Stored Procedures (Database Engine)
sys.procedures (Transact-SQL)
ALTER QUEUE (Transact-SQL)
5/4/2018 • 7 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a queue.
Transact-SQL Syntax Conventions
Syntax
ALTER QUEUE <object>
queue_settings
| queue_action
[ ; ]
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
queue_name
}
<queue_settings> ::=
WITH
[ STATUS = { ON | OFF } [ , ] ]
[ RETENTION = { ON | OFF } [ , ] ]
[ ACTIVATION (
{ [ STATUS = { ON | OFF } [ , ] ]
[ PROCEDURE_NAME = <procedure> [ , ] ]
[ MAX_QUEUE_READERS = max_readers [ , ] ]
[ EXECUTE AS { SELF | 'user_name' | OWNER } ]
| DROP }
) [ , ] ]
[ POISON_MESSAGE_HANDLING (
STATUS = { ON | OFF } )
]
<queue_action> ::=
REBUILD [ WITH <queue_rebuild_options> ]
| REORGANIZE [ WITH ( LOB_COMPACTION = { ON | OFF } ) ]
| MOVE TO { file_group | "default" }
<procedure> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
stored_procedure_name
}
<queue_rebuild_options> ::=
{
( MAXDOP = max_degree_of_parallelism )
}
Arguments
database_name (object)
Is the name of the database that contains the queue to be changed. When no database_name is provided, this
defaults to the current database.
schema_name (object)
Is the name of the schema to which the new queue belongs. When no schema_name is provided, this defaults to
the default schema for the current user.
queue_name
Is the name of the queue to be changed.
STATUS (Queue)
Specifies whether the queue is available (ON) or unavailable (OFF). When the queue is unavailable, no messages
can be added to the queue or removed from the queue.
RETENTION
Specifies the retention setting for the queue. If RETENTION = ON, all messages sent or received on conversations
using this queue are retained in the queue until the conversations have ended. This allows you to retain messages
for auditing purposes, or to perform compensating transactions if an error occurs.
NOTE
Setting RETENTION = ON can reduce performance. This setting should only be used if required to meet the service level
agreement for the application.
ACTIVATION
Specifies information about the stored procedure that is activated to process messages that arrive in this queue.
STATUS (Activation)
Specifies whether or not the queue activates the stored procedure. When STATUS = ON, the queue starts the
stored procedure specified with PROCEDURE_NAME when the number of procedures currently running is less
than MAX_QUEUE_READERS and when messages arrive on the queue faster than the stored procedures receive
messages. When STATUS = OFF, the queue does not activate the stored procedure.
REBUILD [ WITH <queue_rebuild_options> ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Rebuilds all indexes on the queue internal table. Use this capability when you are experiencing fragmentation
problems due to high load. MAXDOP is the only supported queue rebuild option. REBUILD is always an offline
operation.
REORGANIZE [ WITH ( LOB_COMPACTION = { ON | OFF } ) ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Reorganize all indexes on the queue internal table.
Unlike REORGANIZE on user tables, REORGANIZE on a queue is always performed as an offline operation
because page level locks are explicitly disabled on queues.
TIP
For general guidance regarding index fragmentation, when fragmentation is between 5% and 30%, reorganize the index.
When fragmentation is above 30%, rebuild the index. However, these numbers are only general guidance as a starting
point for your environment. To determine the amount of index fragmentation, use sys.dm_db_index_physical_stats
(Transact-SQL); see example G in that article.
Remarks
When a queue with a specified activation stored procedure contains messages, changing the activation status from
OFF to ON immediately activates the activation stored procedure. Altering the activation status from ON to OFF
stops the broker from activating instances of the stored procedure, but does not stop instances of the stored
procedure that are currently running.
Altering a queue to add an activation stored procedure does not change the activation status of the queue.
Changing the activation stored procedure for the queue does not affect instances of the activation stored
procedure that are currently running.
Service Broker checks the maximum number of queue readers for a queue as part of the activation process.
Therefore, altering a queue to increase the maximum number of queue readers allows Service Broker to
immediately start more instances of the activation stored procedure. Altering a queue to decrease the maximum
number of queue readers does not affect instances of the activation stored procedure currently running. However,
Service Broker does not start a new instance of the stored procedure until the number of instances for the
activation stored procedure falls below the configured maximum number.
When a queue is unavailable, Service Broker holds messages for services that use the queue in the transmission
queue for the database. The sys.transmission_queue catalog view provides a view of the transmission queue.
If a RECEIVE statement or a GET CONVERSATION GROUP statement specifies an unavailable queue, that
statement fails with a Transact-SQL error.
Permissions
Permission for altering a queue defaults to the owner of the queue, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.
Examples
A. Making a queue unavailable
The following example makes the ExpenseQueue queue unavailable to receive messages.
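A minimal statement for this (assuming ExpenseQueue resolves in the current schema):

```sql
-- Stop messages from being added to or removed from the queue.
ALTER QUEUE ExpenseQueue WITH STATUS = OFF;
```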
See Also
CREATE QUEUE (Transact-SQL)
DROP QUEUE (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.dm_db_index_physical_stats (Transact-SQL)
ALTER REMOTE SERVICE BINDING (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the user associated with a remote service binding, or changes the anonymous authentication setting for
the binding.
Transact-SQL Syntax Conventions
Syntax
ALTER REMOTE SERVICE BINDING binding_name
WITH [ USER = <user_name> ] [ , ANONYMOUS = { ON | OFF } ]
[ ; ]
Arguments
binding_name
The name of the remote service binding to change. Server, database, and schema names cannot be specified.
WITH USER = <user_name>
Specifies the database user that holds the certificate associated with the remote service for this binding. The public
key from this certificate is used for encryption and authentication of messages exchanged with the remote service.
ANONYMOUS
Specifies whether anonymous authentication is used when communicating with the remote service. If
ANONYMOUS = ON, anonymous authentication is used and the credentials of the local user are not transferred
to the remote service. If ANONYMOUS = OFF, user credentials are transferred. If this clause is not specified, the
default is OFF.
Remarks
The public key in the certificate associated with user_name is used to authenticate messages sent to the remote
service and to encrypt a session key that is then used to encrypt the conversation. The certificate for user_name
must correspond to the certificate for a login in the database that hosts the remote service.
Permissions
Permission for altering a remote service binding defaults to the owner of the remote service binding, members of
the db_owner fixed database role, and members of the sysadmin fixed server role.
The user that executes the ALTER REMOTE SERVICE BINDING statement must have impersonate permission for
the user specified in the statement.
To alter the AUTHORIZATION for a remote service binding, use the ALTER AUTHORIZATION statement.
Examples
The following example changes the remote service binding APBinding to encrypt messages by using the
certificates from the account SecurityAccount .
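A sketch of that statement, using the names given above:

```sql
-- Point the binding at the database user that holds the
-- certificate for the remote service.
ALTER REMOTE SERVICE BINDING APBinding
    WITH USER = SecurityAccount;
```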
See Also
CREATE REMOTE SERVICE BINDING (Transact-SQL)
DROP REMOTE SERVICE BINDING (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER RESOURCE GOVERNOR (Transact-SQL)
5/4/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This statement is used to perform the following Resource Governor actions in SQL Server:
Apply the configuration changes specified when the CREATE|ALTER|DROP WORKLOAD GROUP or
CREATE|ALTER|DROP RESOURCE POOL or CREATE|ALTER|DROP EXTERNAL RESOURCE POOL
statements are issued.
Enable or disable Resource Governor.
Configure classification for incoming requests.
Reset workload group and resource pool statistics.
Set the maximum I/O operations per disk volume.
Transact-SQL Syntax Conventions
Syntax
ALTER RESOURCE GOVERNOR
{ DISABLE | RECONFIGURE }
| WITH ( CLASSIFIER_FUNCTION = { schema_name.function_name | NULL } )
| RESET STATISTICS
| WITH ( MAX_OUTSTANDING_IO_PER_VOLUME = value )
[ ; ]
Arguments
DISABLE
Disables Resource Governor. Disabling Resource Governor has the following results:
The classifier function is not executed.
All new connections are automatically classified into the default group.
System-initiated requests are classified into the internal workload group.
All existing workload group and resource pool settings are reset to their default values. In this case, no
events are fired when limits are reached.
Normal system monitoring is not affected.
Configuration changes can be made, but the changes do not take effect until Resource Governor is
enabled.
Upon restarting SQL Server, the Resource Governor will not load its configuration, but instead will have
only the default and internal groups and pools.
RECONFIGURE
When the Resource Governor is not enabled, RECONFIGURE enables the Resource Governor. Enabling
Resource Governor has the following results:
The classifier function is executed for new connections so that their workload can be assigned to workload
groups.
The resource limits that are specified in the Resource Governor configuration are honored and enforced.
Requests that existed before enabling Resource Governor are affected by any configuration changes that
were made when Resource Governor was disabled.
When Resource Governor is running, RECONFIGURE applies any configuration changes requested when
the CREATE|ALTER|DROP WORKLOAD GROUP or CREATE|ALTER|DROP RESOURCE POOL or
CREATE|ALTER|DROP EXTERNAL RESOURCE POOL statements are executed.
IMPORTANT
ALTER RESOURCE GOVERNOR RECONFIGURE must be issued in order for any configuration changes to take effect.
Remarks
ALTER RESOURCE GOVERNOR DISABLE, ALTER RESOURCE GOVERNOR RECONFIGURE, and ALTER
RESOURCE GOVERNOR RESET STATISTICS cannot be used inside a user transaction.
The RECONFIGURE parameter is part of the Resource Governor syntax and should not be confused with
RECONFIGURE, which is a separate DDL statement.
We recommend being familiar with Resource Governor states before you execute DDL statements. For more
information, see Resource Governor.
Permissions
Requires CONTROL SERVER permission.
Examples
A. Starting the Resource Governor
When SQL Server is first installed, Resource Governor is disabled. The following example starts Resource
Governor. After the statement executes, Resource Governor is running and can use the predefined workload
groups and resource pools.
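The statement itself is a single RECONFIGURE:

```sql
-- Enable Resource Governor (or apply pending configuration
-- changes if it is already running).
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```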
D. Resetting Statistics
The following example resets all workload group and resource pool statistics.
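The reset is issued as:

```sql
ALTER RESOURCE GOVERNOR RESET STATISTICS;
GO
```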
See Also
CREATE RESOURCE POOL (Transact-SQL)
ALTER RESOURCE POOL (Transact-SQL)
DROP RESOURCE POOL (Transact-SQL)
CREATE EXTERNAL RESOURCE POOL (Transact-SQL)
DROP EXTERNAL RESOURCE POOL (Transact-SQL)
ALTER EXTERNAL RESOURCE POOL (Transact-SQL)
CREATE WORKLOAD GROUP (Transact-SQL)
ALTER WORKLOAD GROUP (Transact-SQL)
DROP WORKLOAD GROUP (Transact-SQL)
Resource Governor
sys.dm_resource_governor_workload_groups (Transact-SQL)
sys.dm_resource_governor_resource_pools (Transact-SQL)
ALTER RESOURCE POOL (Transact-SQL)
5/4/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes an existing Resource Governor resource pool configuration in SQL Server.
Transact-SQL Syntax Conventions
Syntax
ALTER RESOURCE POOL { pool_name | "default" }
[WITH
( [ MIN_CPU_PERCENT = value ]
[ [ , ] MAX_CPU_PERCENT = value ]
[ [ , ] CAP_CPU_PERCENT = value ]
[ [ , ] AFFINITY {
SCHEDULER = AUTO
| ( <scheduler_range_spec> )
| NUMANODE = ( <NUMA_node_range_spec> )
}]
[ [ , ] MIN_MEMORY_PERCENT = value ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MIN_IOPS_PER_VOLUME = value ]
[ [ , ] MAX_IOPS_PER_VOLUME = value ]
)]
[;]
<scheduler_range_spec> ::=
{SCHED_ID | SCHED_ID TO SCHED_ID}[,…n]
<NUMA_node_range_spec> ::=
{NUMA_node_ID | NUMA_node_ID TO NUMA_node_ID}[,…n]
Arguments
{ pool_name | "default" }
Is the name of an existing user-defined resource pool or the default resource pool that is created when SQL
Server is installed.
"default" must be enclosed by quotation marks ("") or brackets ([]) when used with ALTER RESOURCE POOL to
avoid conflict with DEFAULT, which is a system reserved word. For more information, see Database Identifiers.
NOTE
Predefined workload groups and resource pools all use lowercase names, such as "default". This should be taken into
account for servers that use case-sensitive collation. Servers with case-insensitive collation, such as
SQL_Latin1_General_CP1_CI_AS, will treat "default" and "Default" as the same.
MIN_CPU_PERCENT =value
Specifies the guaranteed average CPU bandwidth for all requests in the resource pool when there is CPU
contention. value is an integer with a default setting of 0. The allowed range for value is from 0 through 100.
MAX_CPU_PERCENT =value
Specifies the maximum average CPU bandwidth that all requests in the resource pool will receive when there is
CPU contention. value is an integer with a default setting of 100. The allowed range for value is from 1 through
100.
CAP_CPU_PERCENT =value
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the target maximum CPU capacity for requests in the resource pool. value is an integer with a default
setting of 100. The allowed range for value is from 1 through 100.
NOTE
Due to the statistical nature of CPU governance, you may notice occasional spikes exceeding the value specified in
CAP_CPU_PERCENT.
MIN_MEMORY_PERCENT =value
Specifies the minimum amount of memory reserved for this resource pool that cannot be shared with other
resource pools. value is an integer with a default setting of 0. The allowed range for value is from 0 through 100.
MAX_MEMORY_PERCENT =value
Specifies the total server memory that can be used by requests in this resource pool. value is an integer with a
default setting of 100. The allowed range for value is from 1 through 100.
MIN_IOPS_PER_VOLUME =value
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Specifies the minimum I/O operations per second (IOPS) per disk volume to reserve for the resource pool. The
allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to indicate no minimum threshold
for the pool.
MAX_IOPS_PER_VOLUME =value
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Specifies the maximum I/O operations per second (IOPS) per disk volume to allow for the resource pool. The
allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to set an unlimited threshold for the
pool. The default is 0.
If the MAX_IOPS_PER_VOLUME for a pool is set to 0, the pool is not governed at all and can take all the IOPS in
the system even if other pools have MIN_IOPS_PER_VOLUME set. For this case, we recommend that you set the
MAX_IOPS_PER_VOLUME value for this pool to a high number (for example, the maximum value 2^31-1) if you
want this pool to be governed for IO.
Remarks
MAX_CPU_PERCENT and MAX_MEMORY_PERCENT must be greater than or equal to MIN_CPU_PERCENT
and MIN_MEMORY_PERCENT, respectively.
Workloads in a resource pool can use CPU capacity above the value of MAX_CPU_PERCENT if it is available. Although
there may be periodic spikes above CAP_CPU_PERCENT, workloads should not exceed CAP_CPU_PERCENT for
extended periods of time, even when additional CPU capacity is available.
The total CPU percentage for each affinitized component (scheduler(s) or NUMA node(s)) should not exceed
100%.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states. For
more information, see Resource Governor.
When changing a plan-affecting setting, the new setting only takes effect in previously cached plans after
executing DBCC FREEPROCCACHE (pool_name), where pool_name is the name of a Resource Governor
resource pool.
If you are changing AFFINITY from multiple schedulers to a single scheduler, executing DBCC
FREEPROCCACHE is not required because parallel plans can run in serial mode. However, a parallel plan running
in serial mode may not be as efficient as a plan compiled as a serial plan.
If you are changing AFFINITY from a single scheduler to multiple schedulers, executing DBCC
FREEPROCCACHE is not required. However, serial plans cannot run in parallel, so clearing the respective
cache will allow new plans to potentially be compiled using parallelism.
Caution
Clearing cached plans from a resource pool that is associated with more than one workload group will affect all
workload groups with the user-defined resource pool identified by pool_name.
Permissions
Requires CONTROL SERVER permission.
Examples
The following example keeps all the default resource pool settings on the default pool except for
MAX_CPU_PERCENT , which is changed to 25 .
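A sketch of that change, followed by the RECONFIGURE needed for it to take effect:

```sql
-- "default" is quoted to avoid conflict with the DEFAULT keyword.
ALTER RESOURCE POOL "default"
WITH (MAX_CPU_PERCENT = 25);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```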
In the following example, the CAP_CPU_PERCENT sets the hard cap to 80% and AFFINITY SCHEDULER is set to an
individual value of 8 and a range of 12 to 16.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
ALTER RESOURCE POOL Pool25
WITH(
MIN_CPU_PERCENT = 5,
MAX_CPU_PERCENT = 10,
CAP_CPU_PERCENT = 80,
AFFINITY SCHEDULER = (8, 12 TO 16),
MIN_MEMORY_PERCENT = 5,
MAX_MEMORY_PERCENT = 15
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
See Also
Resource Governor
CREATE RESOURCE POOL (Transact-SQL)
DROP RESOURCE POOL (Transact-SQL)
CREATE WORKLOAD GROUP (Transact-SQL)
ALTER WORKLOAD GROUP (Transact-SQL)
DROP WORKLOAD GROUP (Transact-SQL)
ALTER RESOURCE GOVERNOR (Transact-SQL)
ALTER ROLE (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds or removes members to or from a database role, or changes the name of a user-defined database role.
NOTE
To add or drop role members in SQL Data Warehouse or Parallel Data Warehouse, use sp_addrolemember
(Transact-SQL) and sp_droprolemember (Transact-SQL).
Syntax
-- Syntax for SQL Server (starting with 2012) and Azure SQL Database
-- Syntax for SQL Server 2008, Azure SQL Data Warehouse and Parallel Data Warehouse
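As a reconstructed outline for the SQL Server 2012-and-later and Azure SQL Database form (not authoritative):

```sql
ALTER ROLE role_name
{
      ADD MEMBER database_principal
    | DROP MEMBER database_principal
    | WITH NAME = new_name
}
[ ; ]
```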
Arguments
role_name
APPLIES TO: SQL Server (starting with 2008), Azure SQL Database
Specifies the database role to change.
ADD MEMBER database_principal
APPLIES TO: SQL Server (starting with 2012), Azure SQL Database
Specifies to add the database principal to the membership of a database role.
database_principal is a database user or a user-defined database role.
database_principal cannot be a fixed database role or a server principal.
DROP MEMBER database_principal
APPLIES TO: SQL Server (starting with 2012), Azure SQL Database
Specifies to remove a database principal from the membership of a database role.
database_principal is a database user or a user-defined database role.
database_principal cannot be a fixed database role or a server principal.
WITH NAME = new_name
APPLIES TO: SQL Server (starting with 2008), Azure SQL Database
Specifies to change the name of a user-defined database role. The new name must not already exist in the
database.
Changing the name of a database role does not change the ID number, owner, or permissions of the role.
Permissions
To run this command you need one or more of these permissions or memberships:
ALTER permission on the role
ALTER ANY ROLE permission on the database
Membership in the db_securityadmin fixed database role
Additionally, to change the membership in a fixed database role you need:
Membership in the db_owner fixed database role
Metadata
These system views contain information about database roles and database principals.
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )
Examples
A. Change the name of a database role
APPLIES TO: SQL Server (starting with 2008), SQL Database
The following example changes the name of the role buyers to purchasing. This example can be executed in the
AdventureWorks sample database.
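A minimal sketch of the rename:

```sql
ALTER ROLE buyers WITH NAME = purchasing;
GO
```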
See Also
CREATE ROLE (Transact-SQL )
Principals (Database Engine)
DROP ROLE (Transact-SQL )
sp_addrolemember (Transact-SQL )
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )
ALTER ROUTE (Transact-SQL)
5/4/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Modifies route information for an existing route in SQL Server.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
ALTER ROUTE route_name
WITH
[ SERVICE_NAME = 'service_name' [ , ] ]
[ BROKER_INSTANCE = 'broker_instance' [ , ] ]
[ LIFETIME = route_lifetime [ , ] ]
[ ADDRESS = 'next_hop_address' [ , ] ]
[ MIRROR_ADDRESS = 'next_hop_mirror_address' ]
[ ; ]
Arguments
route_name
Is the name of the route to change. Server, database, and schema names cannot be specified.
WITH
Introduces the clauses that define the route being altered.
SERVICE_NAME ='service_name'
Specifies the name of the remote service that this route points to. The service_name must exactly match the name
the remote service uses. Service Broker uses a byte-by-byte comparison to match the service_name. In other
words, the comparison is case sensitive and does not consider the current collation. A route with a service name of
'SQL/ServiceBroker/BrokerConfiguration' is a route to a Broker Configuration Notice service. A route to this
service might not specify a broker instance.
If the SERVICE_NAME clause is omitted, the service name for the route is unchanged.
BROKER_INSTANCE ='broker_instance'
Specifies the database that hosts the target service. The broker_instance parameter must be the broker instance
identifier for the remote database, which can be obtained by running the following query in the selected database:
SELECT service_broker_guid
FROM sys.databases
WHERE database_id = DB_ID();
When the BROKER_INSTANCE clause is omitted, the broker instance for the route is unchanged.
NOTE
This option is not available in a contained database.
LIFETIME =route_lifetime
Specifies the time, in seconds, that SQL Server retains the route in the routing table. At the end of the lifetime, the
route expires, and SQL Server no longer considers the route when choosing a route for a new conversation. If this
clause is omitted, the lifetime of the route is unchanged.
ADDRESS ='next_hop_address'
For SQL Database Managed Instance, ADDRESS must be local.
Specifies the network address for this route. The next_hop_address specifies a TCP/IP address in the following
format:
TCP:// { dns_name | netbios_name | ip_address } : port_number
The specified port_number must match the port number for the Service Broker endpoint of an instance of SQL
Server at the specified computer. This can be obtained by running the following query in the selected database:
SELECT tcpe.port
FROM sys.tcp_endpoints AS tcpe
INNER JOIN sys.service_broker_endpoints AS ssbe
ON ssbe.endpoint_id = tcpe.endpoint_id
WHERE ssbe.name = N'MyServiceBrokerEndpoint';
When a route specifies 'LOCAL' for the next_hop_address, the message is delivered to a service within the current
instance of SQL Server.
When a route specifies 'TRANSPORT' for the next_hop_address, the network address is determined based on the
network address in the name of the service. A route that specifies 'TRANSPORT' can specify a service name or
broker instance.
When the next_hop_address is the principal server for a database mirror, you must also specify the
MIRROR_ADDRESS for the mirror server. Otherwise, this route does not automatically failover to the mirror
server.
NOTE
This option is not available in a contained database.
MIRROR_ADDRESS ='next_hop_mirror_address'
Specifies the network address for the mirror server of a mirrored pair whose principal server is at the
next_hop_address. The next_hop_mirror_address specifies a TCP/IP address in the following format:
TCP://{ dns_name | netbios_name | ip_address } : port_number
The specified port_number must match the port number for the Service Broker endpoint of an instance of SQL
Server at the specified computer. This can be obtained by running the following query in the selected database:
SELECT tcpe.port
FROM sys.tcp_endpoints AS tcpe
INNER JOIN sys.service_broker_endpoints AS ssbe
ON ssbe.endpoint_id = tcpe.endpoint_id
WHERE ssbe.name = N'MyServiceBrokerEndpoint';
When the MIRROR_ADDRESS is specified, the route must specify the SERVICE_NAME clause and the
BROKER_INSTANCE clause. A route that specifies 'LOCAL' or 'TRANSPORT' for the next_hop_address might
not specify a mirror address.
NOTE
This option is not available in a contained database.
Remarks
The routing table that stores the routes is a meta-data table that can be read through the sys.routes catalog view.
The routing table can only be updated through the CREATE ROUTE, ALTER ROUTE, and DROP ROUTE
statements.
Clauses that are not specified in the ALTER ROUTE command remain unchanged. Therefore, you cannot ALTER a
route to specify that the route does not time out, that the route matches any service name, or that the route
matches any broker instance. To change these characteristics of a route, you must drop the existing route and
create a new route with the new information.
When a route specifies 'TRANSPORT' for the next_hop_address, the network address is determined based on the
name of the service. SQL Server can successfully process service names that begin with a network address in a
format that is valid for a next_hop_address. Services with names that contain valid network addresses will route to
the network address in the service name.
The routing table can contain any number of routes that specify the same service, network address, and/or broker
instance identifier. In this case, Service Broker chooses a route using a procedure designed to find the most exact
match between the information specified in the conversation and the information in the routing table.
To alter the AUTHORIZATION for a service, use the ALTER AUTHORIZATION statement.
Permissions
Permission for altering a route defaults to the owner of the route, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.
Examples
A. Changing the service for a route
The following example modifies the ExpenseRoute route to point to the remote service
//Adventure-Works.com/Expenses.
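A sketch of this change:

```sql
ALTER ROUTE ExpenseRoute
    WITH SERVICE_NAME = '//Adventure-Works.com/Expenses';
```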
See Also
CREATE ROUTE (Transact-SQL )
DROP ROUTE (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER SCHEMA (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Transfers a securable between schemas.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
<entity_type> ::=
{
Object | Type | XML Schema Collection
}
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
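The statement form itself follows this general shape (a sketch; the Azure SQL Data Warehouse and Parallel Data Warehouse form omits <entity_type>):

```sql
-- SQL Server and Azure SQL Database
ALTER SCHEMA schema_name
    TRANSFER [ <entity_type> :: ] securable_name
[;]

-- Azure SQL Data Warehouse and Parallel Data Warehouse
ALTER SCHEMA schema_name
    TRANSFER object_name
[;]
```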
Arguments
schema_name
Is the name of a schema in the current database, into which the securable will be moved. Cannot be SYS or
INFORMATION_SCHEMA.
<entity_type>
Is the class of the entity being transferred. Object is the default.
securable_name
Is the one-part or two-part name of a schema-scoped securable to be moved into the schema.
Remarks
Users and schemas are completely separate.
ALTER SCHEMA can only be used to move securables between schemas in the same database. To change or drop
a securable within a schema, use the ALTER or DROP statement specific to that securable.
If a one-part name is used for securable_name, the name-resolution rules currently in effect will be used to locate
the securable.
All permissions associated with the securable will be dropped when the securable is moved to the new schema. If
the owner of the securable has been explicitly set, the owner will remain unchanged. If the owner of the securable
has been set to SCHEMA OWNER, the owner will remain SCHEMA OWNER; however, after the move SCHEMA
OWNER will resolve to the owner of the new schema. The principal_id of the new owner will be NULL.
Moving a stored procedure, function, view, or trigger will not change the schema name, if present, of the
corresponding object either in the definition column of the sys.sql_modules catalog view or obtained using the
OBJECT_DEFINITION built-in function. Therefore, we recommend that ALTER SCHEMA not be used to move
these object types. Instead, drop and re-create the object in its new schema.
Moving an object such as a table or synonym will not automatically update references to that object. You must
modify any objects that reference the transferred object manually. For example, if you move a table and that table
is referenced in a trigger, you must modify the trigger to reflect the new schema name. Use
sys.sql_expression_dependencies to list dependencies on the object before moving it.
To change the schema of a table by using SQL Server Management Studio, in Object Explorer, right-click on the
table and then click Design. Press F4 to open the Properties window. In the Schema box, select a new schema.
Caution
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER AUTHORIZATION.
In such databases you must instead use the new catalog views. The new catalog views take into account the
separation of principals and schemas that was introduced in SQL Server 2005. For more information about
catalog views, see Catalog Views (Transact-SQL ).
Permissions
To transfer a securable from another schema, the current user must have CONTROL permission on the securable
(not schema) and ALTER permission on the target schema.
If the securable has an EXECUTE AS OWNER specification on it and the owner is set to SCHEMA OWNER, the
user must also have IMPERSONATE permission on the owner of the target schema.
All permissions associated with the securable that is being transferred are dropped when it is moved.
Examples
A. Transferring ownership of a table
The following example modifies the schema HumanResources by transferring the table Address from schema
Person into the schema.
USE AdventureWorks2012;
GO
ALTER SCHEMA HumanResources TRANSFER Person.Address;
GO
See Also
CREATE SCHEMA (Transact-SQL )
DROP SCHEMA (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER SEARCH PROPERTY LIST (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds a specified search property to, or drops it from, the specified search property list.
Syntax
ALTER SEARCH PROPERTY LIST list_name
{
ADD 'property_name'
WITH
(
PROPERTY_SET_GUID = 'property_set_guid'
, PROPERTY_INT_ID = property_int_id
[ , PROPERTY_DESCRIPTION = 'property_description' ]
)
| DROP 'property_name'
}
;
Arguments
list_name
Is the name of the property list being altered. list_name is an identifier.
To view the names of the existing property lists, use the sys.registered_search_property_lists catalog view, as
follows:
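For example, a simple query against that view:

```sql
SELECT * FROM sys.registered_search_property_lists;
```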
ADD
Adds a specified search property to the property list specified by list_name. The property is registered for the
search property list. Before newly added properties can be used for property searching, the associated full-text
index or indexes must be repopulated. For more information, see ALTER FULLTEXT INDEX (Transact-SQL ).
NOTE
To add a given search property to a search property list, you must provide its property-set GUID (property_set_guid) and
property int ID (property_int_id). For more information, see "Obtaining Property Set GUIDS and Identifiers," later in this
topic.
property_name
Specifies the name to be used to identify the property in full-text queries. property_name must uniquely identify
the property within the property set. A property name can contain internal spaces. The maximum length of
property_name is 256 characters. This name can be a user-friendly name, such as Author or Home Address, or it
can be the Windows canonical name of the property, such as System.Author or System.Contact.HomeAddress.
Developers will need to use the value you specify for property_name to identify the property in the CONTAINS
predicate. Therefore, when adding a property it is important to specify a value that meaningfully represents the
property defined by the specified property set GUID (property_set_guid) and property identifier (property_int_id).
For more information about property names, see "Remarks," later in this topic.
To view the names of properties that currently exist in a search property list of the current database, use the
sys.registered_search_properties catalog view, as follows:
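For example:

```sql
SELECT * FROM sys.registered_search_properties;
```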
PROPERTY_SET_GUID ='property_set_guid'
Specifies the identifier of the property set to which the property belongs. This is a globally unique identifier
(GUID). For information about obtaining this value, see "Remarks," later in this topic.
To view the property set GUID of any property that exists in a search property list of the current database, use the
sys.registered_search_properties catalog view.
PROPERTY_INT_ID =property_int_id
Specifies the integer that identifies the property within its property set. For information about obtaining this value,
see "Remarks."
To view the integer identifier of any property that exists in a search property list of the current database, use the
sys.registered_search_properties catalog view.
NOTE
A given combination of property_set_guid and property_int_id must be unique in a search property list. If you try to add an
existing combination, the ALTER SEARCH PROPERTY LIST operation fails and issues an error. This means that you can define
only one name for a given property.
PROPERTY_DESCRIPTION ='property_description'
Specifies a user-defined description of the property. property_description is a string of up to 512 characters. This
option is optional.
DROP
Drops the specified property from the property list specified by list_name. Dropping a property unregisters it, so it
is no longer searchable.
Remarks
Each full-text index can have only one search property list.
To enable querying on a given search property, you must add it to the search property list of the full-text index and
then repopulate the index.
When specifying a property you can arrange the PROPERTY_SET_GUID, PROPERTY_INT_ID, and
PROPERTY_DESCRIPTION clauses in any order, as a comma-separated list within parentheses, for example:
ALTER SEARCH PROPERTY LIST CVitaProperties
ADD 'System.Author'
WITH (
PROPERTY_DESCRIPTION = 'Author or authors of a given document.',
PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9',
PROPERTY_INT_ID = 4
);
NOTE
This example uses the property name, System.Author, which is similar to the concept of canonical property names
introduced in Windows Vista (Windows canonical name).
After the newly added property is registered and the full-text index is repopulated, the property can be used in a property-scoped CONTAINS query of this general form:
SELECT column_name
FROM table_name
WHERE CONTAINS( PROPERTY( column_name, 'new_search_property' ),
'contains_search_condition');
GO
To start a full population, use the following ALTER FULLTEXT INDEX (Transact-SQL ) statement:
USE database_name;
GO
ALTER FULLTEXT INDEX ON table_name START FULL POPULATION;
GO
NOTE
Repopulation is not needed after a property is dropped from a property list, because only the properties that remain in the
search property list are available for full-text querying.
Related References
To create a property list
CREATE SEARCH PROPERTY LIST (Transact-SQL )
To drop a property list
DROP SEARCH PROPERTY LIST (Transact-SQL )
To add or remove a property list on a full-text index
ALTER FULLTEXT INDEX (Transact-SQL )
To run a population on a full-text index
ALTER FULLTEXT INDEX (Transact-SQL )
Permissions
Requires CONTROL permission on the property list.
Examples
A. Adding a property
The following example adds several properties (Title, Author, and Tags) to a property list named
DocumentPropertyList.
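A sketch of the additions, assuming the properties come from the standard document summary-information property set (GUID F29F85E0-4FF9-1068-AB91-08002B27B3D9, in which Title is integer ID 2, Author is 4, and the keywords/Tags property is 5; verify these IDs for your environment):

```sql
ALTER SEARCH PROPERTY LIST DocumentPropertyList
    ADD 'Title'
    WITH ( PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9',
           PROPERTY_INT_ID = 2,
           PROPERTY_DESCRIPTION = 'System.Title - Title of the item.' );

ALTER SEARCH PROPERTY LIST DocumentPropertyList
    ADD 'Author'
    WITH ( PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9',
           PROPERTY_INT_ID = 4,
           PROPERTY_DESCRIPTION = 'System.Author - Author or authors of the item.' );

ALTER SEARCH PROPERTY LIST DocumentPropertyList
    ADD 'Tags'
    WITH ( PROPERTY_SET_GUID = 'F29F85E0-4FF9-1068-AB91-08002B27B3D9',
           PROPERTY_INT_ID = 5,
           PROPERTY_DESCRIPTION = 'System.Keywords - Tags applied to the item.' );
```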
NOTE
For an example that creates DocumentPropertyList property list, see CREATE SEARCH PROPERTY LIST (Transact-SQL).
NOTE
You must associate a given search property list with a full-text index before using it for property-scoped queries. To do so,
use an ALTER FULLTEXT INDEX statement and specify the SET SEARCH PROPERTY LIST clause.
B. Dropping a property
The following example drops the Comments property from the DocumentPropertyList property list.
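In sketch form:

```sql
ALTER SEARCH PROPERTY LIST DocumentPropertyList
    DROP 'Comments';
```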
See Also
CREATE SEARCH PROPERTY LIST (Transact-SQL )
DROP SEARCH PROPERTY LIST (Transact-SQL )
sys.registered_search_properties (Transact-SQL )
sys.registered_search_property_lists (Transact-SQL )
sys.dm_fts_index_keywords_by_property (Transact-SQL )
Search Document Properties with Search Property Lists
Find Property Set GUIDs and Property Integer IDs for Search Properties
ALTER SECURITY POLICY (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a security policy.
Transact-SQL Syntax Conventions
Syntax
ALTER SECURITY POLICY schema_name.security_policy_name
(
{ ADD { FILTER | BLOCK } PREDICATE tvf_schema_name.security_predicate_function_name
( { column_name | arguments } [ , …n ] ) ON table_schema_name.table_name
[ <block_dml_operation> ] }
| { ALTER { FILTER | BLOCK } PREDICATE tvf_schema_name.new_security_predicate_function_name
( { column_name | arguments } [ , …n ] ) ON table_schema_name.table_name
[ <block_dml_operation> ] }
| { DROP { FILTER | BLOCK } PREDICATE ON table_schema_name.table_name }
| [ <additional_add_alter_drop_predicate_statements> [ , ...n ] ]
) [ WITH ( STATE = { ON | OFF } ) ]
[ NOT FOR REPLICATION ]
[;]
<block_dml_operation>
[ { AFTER { INSERT | UPDATE } }
| { BEFORE { UPDATE | DELETE } } ]
Arguments
security_policy_name
The name of the security policy. Security policy names must comply with the rules for identifiers and must be
unique within the database and to its schema.
schema_name
Is the name of the schema to which the security policy belongs. schema_name is required because of schema
binding.
[ FILTER | BLOCK ]
The type of security predicate for the function being bound to the target table. FILTER predicates silently filter the
rows that are available to read operations. BLOCK predicates explicitly block write operations that violate the
predicate function.
tvf_schema_name.security_predicate_function_name
Is the inline table value function that will be used as a predicate and that will be enforced upon queries against a
target table. At most one security predicate can be defined for a particular DML operation against a particular
table. The inline table value function must have been created using the SCHEMABINDING option.
{ column_name | arguments }
The column name or expression used as parameters for the security predicate function. Any columns on the target
table can be used as arguments for the predicate function. Expressions that include literals, builtins, and
expressions that use arithmetic operators can be used.
table_schema_name.table_name
Is the target table to which the security predicate will be applied. Multiple disabled security policies can target a
single table for a particular DML operation, but only one can be enabled at any given time.
<block_dml_operation>
The particular DML operation for which the block predicate will be applied. AFTER specifies that the predicate will
be evaluated on the values of the rows after the DML operation was performed (INSERT or UPDATE ). BEFORE
specifies that the predicate will be evaluated on the values of the rows before the DML operation is performed
(UPDATE or DELETE ). If no operation is specified, the predicate will apply to all operations.
You cannot ALTER the operation for which a block predicate will be applied, because the operation is used to
uniquely identify the predicate. Instead, you must drop the predicate and add a new one for the new operation.
WITH ( STATE = { ON | OFF } )
Enables or disables the security policy from enforcing its security predicates against the target tables. If not
specified the security policy being created is disabled.
NOT FOR REPLICATION
Indicates that the security policy should not be executed when a replication agent modifies the target object. For
more information, see Control the Behavior of Triggers and Constraints During Synchronization (Replication
Transact-SQL Programming).
table_schema_name.table_name
Is the target table to which the security predicate will be applied. Multiple disabled security policies can target a
single table, but only one can be enabled at any given time.
Remarks
The ALTER SECURITY POLICY statement is in a transaction's scope. If the transaction is rolled back, the statement
is also rolled back.
When using predicate functions with memory-optimized tables, security policies must include
SCHEMABINDING and use the WITH NATIVE_COMPILATION compilation hint. The SCHEMABINDING
argument cannot be changed with the ALTER statement because it applies to all predicates. To change schema
binding you must drop and recreate the security policy.
Block predicates are evaluated after the corresponding DML operation is executed. Therefore, a READ
UNCOMMITTED query can see transient values that will be rolled back.
Permissions
Requires the ALTER ANY SECURITY POLICY permission.
Additionally the following permissions are required for each predicate that is added:
SELECT and REFERENCES permissions on the function being used as a predicate.
REFERENCES permission on the target table being bound to the policy.
REFERENCES permission on every column from the target table used as arguments.
Examples
The following examples demonstrate the use of the ALTER SECURITY POLICY syntax. For an example of a
complete security policy scenario, see Row-Level Security.
A. Adding an additional predicate to a policy
The following syntax alters a security policy, adding a filter predicate on the mytable table.
ALTER SECURITY POLICY pol1
ADD FILTER PREDICATE schema_preds.SecPredicate(column1)
ON myschema.mytable;
See Also
Row-Level Security
CREATE SECURITY POLICY (Transact-SQL )
DROP SECURITY POLICY (Transact-SQL )
sys.security_policies (Transact-SQL )
sys.security_predicates (Transact-SQL )
ALTER SEQUENCE (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the arguments of an existing sequence object. If the sequence was created with the CACHE option,
altering the sequence will recreate the cache.
Sequence objects are created by using the CREATE SEQUENCE statement. Sequences are integer values and can
be of any data type that returns an integer. The data type cannot be changed by using the ALTER SEQUENCE
statement. To change the data type, drop and create the sequence object.
A sequence is a user-defined schema bound object that generates a sequence of numeric values according to a
specification. New values are generated from a sequence by calling the NEXT VALUE FOR function. Use
sp_sequence_get_range to get multiple sequence numbers at once. For information and scenarios that use
CREATE SEQUENCE, sp_sequence_get_range, and the NEXT VALUE FOR function, see Sequence Numbers.
Transact-SQL Syntax Conventions
Syntax
ALTER SEQUENCE [schema_name. ] sequence_name
[ RESTART [ WITH <constant> ] ]
[ INCREMENT BY <constant> ]
[ { MINVALUE <constant> } | { NO MINVALUE } ]
[ { MAXVALUE <constant> } | { NO MAXVALUE } ]
[ CYCLE | { NO CYCLE } ]
[ { CACHE [ <constant> ] } | { NO CACHE } ]
[ ; ]
Arguments
sequence_name
Specifies the unique name by which the sequence is known in the database. Type is sysname.
RESTART [ WITH <constant> ]
The next value that will be returned by the sequence object. If provided, the RESTART WITH value must be an
integer that is less than or equal to the maximum and greater than or equal to the minimum value of the sequence
object. If the WITH value is omitted, the sequence numbering restarts based on the original CREATE SEQUENCE
options.
INCREMENT BY <constant>
The value that is used to increment (or decrement if negative) the sequence object’s base value for each call to the
NEXT VALUE FOR function. If the increment is a negative value the sequence object is descending, otherwise, it is
ascending. The increment cannot be 0.
[ MINVALUE <constant> | NO MINVALUE ]
Specifies the bounds for the sequence object. If NO MINVALUE is specified, the minimum possible value of the
sequence data type is used.
[ MAXVALUE <constant> | NO MAXVALUE ]
Specifies the bounds for the sequence object. If NO MAXVALUE is specified, the maximum possible value of the
sequence data type is used.
[ CYCLE | NO CYCLE ]
This property specifies whether the sequence object should restart from the minimum value (or maximum for
descending sequence objects) or throw an exception when its minimum or maximum value is exceeded.
NOTE
After cycling the next value is the minimum or maximum value, not the START VALUE of the sequence.
Remarks
For information about how sequences are created and how the sequence cache is managed, see CREATE
SEQUENCE (Transact-SQL ).
The MINVALUE for ascending sequences and the MAXVALUE for descending sequences cannot be altered to a
value that does not permit the START WITH value of the sequence. To change the MINVALUE of an ascending
sequence to a number larger than the START WITH value or to change the MAXVALUE of a descending sequence
to a number smaller than the START WITH value, include the RESTART WITH argument to restart the sequence at
a desired point that falls within the minimum and maximum range.
Metadata
Security
Permissions
Requires ALTER permission on the sequence or ALTER permission on the schema. To grant ALTER permission
on the sequence, use ALTER ON OBJECT in the following format:
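A sketch of the grant, using a hypothetical Test.TestSeq sequence and a hypothetical Windows login:

```sql
GRANT ALTER ON OBJECT::Test.TestSeq TO [AdventureWorks\Larry];
GO
```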
The ownership of a sequence object can be transferred by using the ALTER AUTHORIZATION statement.
Audit
To audit ALTER SEQUENCE, monitor the SCHEMA_OBJECT_CHANGE_GROUP.
Examples
For examples of both creating sequences and using the NEXT VALUE FOR function to generate sequence
numbers, see Sequence Numbers.
A. Altering a sequence
The following example creates a schema named Test and a sequence named TestSeq using the int data type,
having a range from 0 to 255. The sequence starts with 125 and increments by 25 every time that a number is
generated. Because the sequence is configured to cycle, when the value exceeds the maximum value of 200, the
sequence restarts at the minimum value of 100.
The following example alters the TestSeq sequence to have a range from 0 to 255. The sequence restarts the
numbering series with 100 and increments by 50 every time that a number is generated.
Because the sequence will not cycle, the NEXT VALUE FOR function will result in an error when the sequence
exceeds 200.
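A sketch consistent with the behavior described (the stated 0-to-255 range and the 100-to-200 bounds in the prose conflict slightly; the bounds below follow the cycling and error behavior described, so verify them against your environment):

```sql
CREATE SCHEMA Test;
GO

-- Ascending int sequence: starts at 125, steps by 25, cycles from 200 back to 100.
CREATE SEQUENCE Test.TestSeq AS int
    START WITH 125
    INCREMENT BY 25
    MINVALUE 100
    MAXVALUE 200
    CYCLE;
GO

-- Restart the numbering at 100, step by 50, and stop cycling:
-- NEXT VALUE FOR now raises an error once the sequence exceeds 200.
ALTER SEQUENCE Test.TestSeq
    RESTART WITH 100
    INCREMENT BY 50
    NO CYCLE;
GO
```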
B. Restarting a sequence
The following example creates a sequence named CountBy1. The sequence uses the default values.
To generate a sequence value, the owner then executes the following statement:
The value returned of -9,223,372,036,854,775,808 is the lowest possible value for the bigint data type. The owner
realizes he wanted the sequence to start with 1, but did not indicate the START WITH clause when he created the
sequence. To correct this error, the owner executes the following statement.
Then the owner executes the following statement again to generate a sequence number.
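In sketch form, the whole exchange (the Test schema name is assumed for illustration):

```sql
-- All defaults: bigint, ascending, starting at the type minimum.
CREATE SEQUENCE Test.CountBy1;
GO

-- Returns -9223372036854775808, the lowest bigint value.
SELECT NEXT VALUE FOR Test.CountBy1;
GO

-- Correct the starting point.
ALTER SEQUENCE Test.CountBy1 RESTART WITH 1;
GO

-- Now returns 1.
SELECT NEXT VALUE FOR Test.CountBy1;
GO
```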
Now when the sequence object reaches 9,223,372,036,854,775,807 it will cycle, and the next number after cycling
will be the minimum of the data type, -9,223,372,036,854,775,808.
The owner realized that the bigint data type uses 8 bytes each time it is used. The int data type that uses 4 bytes is
sufficient. However the data type of a sequence object cannot be altered. To change to an int data type, the owner
must drop the sequence object and recreate the object with the correct data type.
See Also
CREATE SEQUENCE (Transact-SQL )
DROP SEQUENCE (Transact-SQL )
NEXT VALUE FOR (Transact-SQL )
Sequence Numbers
sp_sequence_get_range (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL)
5/3/2018 • 7 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database (Managed Instance only) Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a server audit object using the SQL Server Audit feature. For more information, see SQL Server Audit
(Database Engine).
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
ALTER SERVER AUDIT audit_name
{
[ TO { { FILE ( <file_options> [, ...n] ) } | APPLICATION_LOG | SECURITY_LOG } ]
[ WITH ( <audit_options> [ , ...n] ) ]
[ WHERE <predicate_expression> ]
}
| REMOVE WHERE
| MODIFY NAME = new_audit_name
[ ; ]
<file_options>::=
{
FILEPATH = 'os_file_path'
| MAXSIZE = { max_size { MB | GB | TB } | UNLIMITED }
| MAX_ROLLOVER_FILES = { integer | UNLIMITED }
| MAX_FILES = integer
| RESERVE_DISK_SPACE = { ON | OFF }
}
<audit_options>::=
{
QUEUE_DELAY = integer
| ON_FAILURE = { CONTINUE | SHUTDOWN | FAIL_OPERATION }
| STATE = { ON | OFF }
}
<predicate_expression>::=
{
[NOT ] <predicate_factor>
[ { AND | OR } [NOT ] { <predicate_factor> } ]
[,...n ]
}
<predicate_factor>::=
event_field_name { = | <> | != | > | >= | < | <= } { number | 'string' }
Arguments
TO { FILE | APPLICATION_LOG | SECURITY_LOG }
Determines the location of the audit target. The options are a binary file, the Windows application log, or the
Windows security log.
FILEPATH = 'os_file_path'
The path of the audit trail. The file name is generated based on the audit name and audit GUID.
MAXSIZE =max_size
Specifies the maximum size to which the audit file can grow. The max_size value must be an integer followed by
MB, GB, TB, or UNLIMITED. The minimum size that you can specify for max_size is 2 MB and the maximum is
2,147,483,647 TB. When UNLIMITED is specified, the file grows until the disk is full. Specifying a value lower
than 2 MB raises the MSG_MAXSIZE_TOO_SMALL error. The default value is UNLIMITED.
MAX_ROLLOVER_FILES =integer | UNLIMITED
Specifies the maximum number of files to retain in the file system. When the setting of
MAX_ROLLOVER_FILES=0, there is no limit imposed on the number of rollover files that are created. The default
value is 0. The maximum number of files that can be specified is 2,147,483,647.
MAX_FILES =integer
Specifies the maximum number of audit files that can be created. Does not roll over to the first file when the limit
is reached. When the MAX_FILES limit is reached, any action that causes additional audit events to be generated
fails with an error.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
RESERVE_DISK_SPACE = { ON | OFF }
This option pre-allocates the file on the disk to the MAXSIZE value. Only applies if MAXSIZE is not equal to
UNLIMITED. The default value is OFF.
QUEUE_DELAY =integer
Determines the time in milliseconds that can elapse before audit actions are forced to be processed. A value of 0
indicates synchronous delivery. The minimum settable queue delay value is 1000 (1 second), which is the default.
The maximum is 2,147,483,647 (2,147,483.647 seconds, or 24 days, 20 hours, 31 minutes, 23.647 seconds).
Specifying an invalid number raises the MSG_INVALID_QUEUE_DELAY error.
ON_FAILURE = { CONTINUE | SHUTDOWN | FAIL_OPERATION }
Indicates whether the instance writing to the target should fail, continue, or stop if SQL Server cannot write to the
audit log.
CONTINUE
SQL Server operations continue. Audit records are not retained. The audit continues to attempt to log events and
resumes if the failure condition is resolved. Selecting the continue option can allow unaudited activity, which could
violate your security policies. Use this option when continuing operation of the Database Engine is more
important than maintaining a complete audit.
SHUTDOWN
Forces the instance of SQL Server to shut down if SQL Server fails to write data to the audit target for any
reason. The login executing the ALTER statement must have the SHUTDOWN permission within SQL Server. The
shutdown behavior persists even if the SHUTDOWN permission is later revoked from the executing login. If the user
does not have this permission, then the statement will fail and the audit will not be modified. Use the option when
an audit failure could compromise the security or integrity of the system. For more information, see
SHUTDOWN.
FAIL_OPERATION
Database actions fail if they cause audited events. Actions that do not cause audited events can continue, but no
audited events can occur. The audit continues to attempt to log events and resumes if the failure condition is
resolved. Use this option when maintaining a complete audit is more important than full access to the Database
Engine.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
STATE = { ON | OFF }
Enables or disables the audit from collecting records. Changing the state of a running audit (from ON to OFF)
creates an audit entry recording that the audit was stopped, the principal that stopped the audit, and the time
the audit was stopped.
MODIFY NAME = new_audit_name
Changes the name of the audit. Cannot be used with any other option.
predicate_expression
Specifies the predicate expression used to determine if an event should be processed or not. Predicate expressions
are limited to 3000 characters, which limits string arguments.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
event_field_name
Is the name of the event field that identifies the predicate source. Audit fields are described in sys.fn_get_audit_file
(Transact-SQL ). All fields can be audited except file_name and audit_file_offset .
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
number
Is any numeric type including decimal. Limitations are the lack of available physical memory or a number that is
too large to be represented as a 64-bit integer.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
' string '
Either an ANSI or Unicode string as required by the predicate compare. No implicit string type conversion is
performed for the predicate compare functions. Passing the wrong type results in an error.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Remarks
You must specify at least one of the TO, WITH, or MODIFY NAME clauses when you call ALTER SERVER AUDIT.
You must set the state of an audit to the OFF option in order to make changes to an audit. If ALTER SERVER
AUDIT is run when an audit is enabled with any options other than STATE=OFF, you receive a
MSG_NEED_AUDIT_DISABLED error message.
You can add, alter, and remove audit specifications without stopping an audit.
You cannot change an audit’s GUID after the audit has been created.
Permissions
To create, alter, or drop a server audit principal, you must have ALTER ANY SERVER AUDIT or the CONTROL
SERVER permission.
Examples
A. Changing a server audit name
The following example changes the name of the server audit HIPAA_Audit to HIPAA_Audit_Old .
USE master
GO
ALTER SERVER AUDIT HIPAA_Audit
WITH (STATE = OFF);
GO
ALTER SERVER AUDIT HIPAA_Audit
MODIFY NAME = HIPAA_Audit_Old;
GO
ALTER SERVER AUDIT HIPAA_Audit_Old
WITH (STATE = ON);
GO
B. Changing a server audit target
The following example changes the server audit HIPAA_Audit to use a file target.
USE master
GO
ALTER SERVER AUDIT HIPAA_Audit
WITH (STATE = OFF);
GO
ALTER SERVER AUDIT HIPAA_Audit
TO FILE (FILEPATH ='\\SQLPROD_1\Audit\',
MAXSIZE = 1000 MB,
RESERVE_DISK_SPACE=OFF)
WITH (QUEUE_DELAY = 1000,
ON_FAILURE = CONTINUE);
GO
ALTER SERVER AUDIT HIPAA_Audit
WITH (STATE = ON);
GO
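C. Changing a server audit WHERE clause
The WHERE predicate described in the Arguments section can be exercised as in the following sketch. The audit name FilterForSensitiveData and the user_defined_event_id value are illustrative assumptions, not part of the examples above.

```sql
-- The audit must be disabled before it can be altered.
ALTER SERVER AUDIT [FilterForSensitiveData] WITH (STATE = OFF);
GO

-- Keep only events raised with a specific user-defined event id.
ALTER SERVER AUDIT [FilterForSensitiveData]
WHERE user_defined_event_id = 27;
GO

ALTER SERVER AUDIT [FilterForSensitiveData] WITH (STATE = ON);
GO
```

To remove the filter later, use ALTER SERVER AUDIT ... REMOVE WHERE (again with the audit in the OFF state).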
See Also
DROP SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL )
DROP SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
Create a Server Audit and Server Audit Specification
ALTER SERVER AUDIT SPECIFICATION (Transact-
SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters a server audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions
Syntax
ALTER SERVER AUDIT SPECIFICATION audit_specification_name
{
[ FOR SERVER AUDIT audit_name ]
[ { { ADD | DROP } ( audit_action_group_name )
} [, ...n] ]
[ WITH ( STATE = { ON | OFF } ) ]
}
[ ; ]
Arguments
audit_specification_name
The name of the audit specification.
audit_name
The name of the audit to which this specification is applied.
audit_action_group_name
Name of a group of server-level auditable actions. For a list of Audit Action Groups, see SQL Server Audit Action
Groups and Actions.
WITH ( STATE = { ON | OFF } )
Enables or disables the audit from collecting records for this audit specification.
Remarks
You must set the state of an audit specification to the OFF option to make changes to an audit specification. If
ALTER SERVER AUDIT SPECIFICATION is executed when an audit specification is enabled with any options
other than STATE=OFF, you will receive an error message.
Permissions
Users with the ALTER ANY SERVER AUDIT permission can alter server audit specifications and bind them to any
audit.
After a server audit specification is created, it can be viewed by principals with the CONTROL SERVER or ALTER
ANY SERVER AUDIT permission, by the sysadmin account, or by principals having explicit access to the audit.
Examples
The following example creates a server audit specification called HIPPA_Audit_Specification . It drops the audit
action group for failed logins, and adds an audit action group for Database Object Access for a SQL Server audit
called HIPPA_Audit .
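A sketch of those statements; FAILED_LOGIN_GROUP and DATABASE_OBJECT_ACCESS_GROUP are the documented server-level audit action groups for failed logins and database object access:

```sql
ALTER SERVER AUDIT SPECIFICATION HIPPA_Audit_Specification
FOR SERVER AUDIT HIPPA_Audit
    DROP (FAILED_LOGIN_GROUP),
    ADD (DATABASE_OBJECT_ACCESS_GROUP)
WITH (STATE = ON);
GO
```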
For a full example about how to create an audit, see SQL Server Audit (Database Engine).
See Also
CREATE SERVER AUDIT (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL )
DROP SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
DROP SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
Create a Server Audit and Server Audit Specification
ALTER SERVER CONFIGURATION (Transact-SQL)
5/3/2018 • 12 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies global configuration settings for the current server in SQL Server.
Transact-SQL Syntax Conventions
Syntax
ALTER SERVER CONFIGURATION
SET <optionspec>
[;]
<optionspec> ::=
{
<process_affinity>
| <diagnostic_log>
| <failover_cluster_property>
| <hadr_cluster_context>
| <buffer_pool_extension>
| <soft_numa>
}
<process_affinity> ::=
PROCESS AFFINITY
{
CPU = { AUTO | <CPU_range_spec> }
| NUMANODE = <NUMA_node_range_spec>
}
<CPU_range_spec> ::=
{ CPU_ID | CPU_ID TO CPU_ID } [ ,...n ]
<NUMA_node_range_spec> ::=
{ NUMA_node_ID | NUMA_node_ID TO NUMA_node_ID } [ ,...n ]
<diagnostic_log> ::=
DIAGNOSTICS LOG
{
ON
| OFF
| PATH = { 'os_file_path' | DEFAULT }
| MAX_SIZE = { 'log_max_size' MB | DEFAULT }
| MAX_FILES = { 'max_file_count' | DEFAULT }
}
<failover_cluster_property> ::=
FAILOVER CLUSTER PROPERTY <resource_property>
<resource_property> ::=
{
VerboseLogging = { 'logging_detail' | DEFAULT }
| SqlDumperDumpFlags = { 'dump_file_type' | DEFAULT }
| SqlDumperDumpPath = { 'os_file_path'| DEFAULT }
| SqlDumperDumpTimeOut = { 'dump_time-out' | DEFAULT }
| FailureConditionLevel = { 'failure_condition_level' | DEFAULT }
| HealthCheckTimeout = { 'health_check_time-out' | DEFAULT }
}
<hadr_cluster_context> ::=
HADR CLUSTER CONTEXT = { 'remote_windows_cluster' | LOCAL }
<buffer_pool_extension>::=
BUFFER POOL EXTENSION
{ ON ( FILENAME = 'os_file_path_and_name' , SIZE = <size_spec> )
| OFF }
<size_spec> ::=
{ size [ KB | MB | GB ] }
<soft_numa> ::=
SET SOFTNUMA
{ ON | OFF }
Arguments
<process_affinity> ::=
PROCESS AFFINITY
Enables hardware threads to be associated with CPUs.
CPU = { AUTO | <CPU_range_spec> }
Distributes SQL Server worker threads to each CPU within the specified range. CPUs outside the specified range
will not have assigned threads.
AUTO
Specifies that no thread is assigned a CPU. The operating system can freely move threads among CPUs based on
the server workload. This is the default and recommended setting.
<CPU_range_spec> ::=
Specifies the CPU or range of CPUs to assign threads to.
{ CPU_ID | CPU_ID TO CPU_ID } [ ,...n ]
Is the list of one or more CPUs. CPU IDs begin at 0 and are integer values.
NUMANODE = <NUMA_node_range_spec>
Assigns threads to all CPUs that belong to the specified NUMA node or range of nodes.
<NUMA_node_range_spec> ::=
Specifies the NUMA node or range of NUMA nodes.
{ NUMA_node_ID | NUMA_node_ID TO NUMA_node_ID } [ ,...n ]
Is the list of one or more NUMA nodes. NUMA node IDs begin at 0 and are integer values.
<diagnostic_log> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
DIAGNOSTICS LOG
Starts or stops logging diagnostic data captured by the sp_server_diagnostics procedure, and sets SQLDIAG log
configuration parameters such as the log file rollover count, log file size, and file location. For more information,
see View and Read Failover Cluster Instance Diagnostics Log.
ON
Starts SQL Server logging diagnostic data in the location specified in the PATH file option. This is the default.
OFF
Stops logging diagnostic data.
PATH = { 'os_file_path' | DEFAULT }
Path indicating the location of the diagnostic logs. The default location is <\MSSQL\Log> within the installation
folder of the SQL Server failover cluster instance.
MAX_SIZE = { 'log_max_size' MB | DEFAULT }
Maximum size in megabytes to which each diagnostic log can grow. The default is 100 MB.
MAX_FILES = { 'max_file_count' | DEFAULT }
Maximum number of diagnostic log files that can be stored on the computer before they are recycled for new
diagnostic logs.
<failover_cluster_property> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
FAILOVER CLUSTER PROPERTY
Modifies the SQL Server resource private failover cluster properties.
VERBOSELOGGING = { 'logging_detail' | DEFAULT }
Sets the logging level for SQL Server Failover Clustering. It can be turned on to provide additional details in the
error logs for troubleshooting.
0 – Logging is turned off (default)
1 - Errors only
2 – Errors and warnings
SQLDUMPERDUMPFLAGS = { 'dump_file_type' | DEFAULT }
Determines the type of dump files generated by the SQL Server SQLDumper utility. The default setting is 0. For more
information, see SQL Server Dumper Utility Knowledgebase article.
SQLDUMPERDUMPPATH = { 'os_file_path' | DEFAULT }
The location where the SQLDumper utility stores the dump files. For more information, see SQL Server Dumper
Utility Knowledgebase article.
SQLDUMPERDUMPTIMEOUT = { 'dump_time-out' | DEFAULT }
The time-out value in milliseconds for the SQLDumper utility to generate a dump in case of a SQL Server failure.
The default value is 0, which means there is no time limit to complete the dump. For more information, see SQL
Server Dumper Utility Knowledgebase article.
FAILURECONDITIONLEVEL = { 'failure_condition_level' | DEFAULT }
The conditions under which the SQL Server failover cluster instance should fail over or restart. The default value is
3, which means that the SQL Server resource will failover or restart on critical server errors. For more information
about this and other failure condition levels, see Configure FailureConditionLevel Property Settings.
HEALTHCHECKTIMEOUT = { 'health_check_time-out' | DEFAULT }
The time-out value for how long the SQL Server Database Engine resource DLL should wait for the server health
information before it considers the instance of SQL Server as unresponsive. The time-out value is expressed in
milliseconds. The default is 60000 milliseconds (60 seconds).
<hadr_cluster_context> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
HADR CLUSTER CONTEXT = { 'remote_windows_cluster' | LOCAL }
Switches the HADR cluster context of the server instance to the specified Windows Server Failover Cluster
(WSFC). The HADR cluster context determines which WSFC manages the metadata for availability replicas hosted
by the server instance. Use the SET HADR CLUSTER CONTEXT option only during a cross-cluster migration of
Always On availability groups to an instance of SQL Server 2012 SP1 (11.0.3x) or a higher version on a new
WSFC.
You can switch the HADR cluster context only from the local WSFC to a remote WSFC and then back from the
remote WSFC to the local WSFC. The HADR cluster context can be switched to a remote cluster only when the
instance of SQL Server is not hosting any availability replicas.
A remote HADR cluster context can be switched back to the local cluster at any time. However, the context cannot
be switched again as long as the server instance is hosting any availability replicas.
To identify the destination cluster, specify one of the following values:
windows_cluster
The network name of a WSFC. You can specify either the short name or the full domain name. To find the target IP
address of a short name, ALTER SERVER CONFIGURATION uses DNS resolution. Under some situations, a short
name could cause confusion, and DNS could return the wrong IP address. Therefore, we recommend that you
specify the full domain name.
NOTE
A cross-cluster migration using this setting is no longer supported. To perform a cross-cluster migration, use a Distributed
Availability Group or some other method such as log shipping.
LOCAL
The local WSFC.
For more information, see Change the HADR Cluster Context of Server Instance (SQL Server).
<buffer_pool_extension>::=
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
ON
Enables the buffer pool extension option. This option extends the size of the buffer pool by using nonvolatile
storage such as solid-state drives (SSD ) to persist clean data pages in the pool. For more information about this
feature, see Buffer Pool Extension. The buffer pool extension is not available in every SQL Server edition. For more
information, see Editions and Supported Features for SQL Server 2016.
FILENAME = 'os_file_path_and_name'
Defines the directory path and name of the buffer pool extension cache file. The file extension must be specified as
.BPE. You must turn off BUFFER POOL EXTENSION before you can modify FILENAME.
SIZE = size [ KB | MB | GB ]
Defines the size of the cache. The default size specification is KB. The minimum size is the size of Max Server
Memory. The maximum limit is 32 times the size of Max Server Memory. For more information about Max Server
Memory, see sp_configure (Transact-SQL ).
You must turn BUFFER POOL EXTENSION off before you can modify the size of the file. To specify a size that is
smaller than the current size, the instance of SQL Server must be restarted to reclaim memory. Otherwise, the
specified size must be the same as or larger than the current size.
OFF
Disables the buffer pool extension option. You must disable the buffer pool extension option before you modify any
associated parameters such as the size or name of the file. When this option is disabled, all related configuration
information is removed from the registry.
WARNING
Disabling the buffer pool extension might have a negative impact on server performance because the buffer pool is significantly
reduced in size.
<soft_numa>
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
ON
Enables automatic partitioning to split large NUMA hardware nodes into smaller NUMA nodes. Changing the
running value requires a restart of the database engine.
OFF
Disables automatic software partitioning of large NUMA hardware nodes into smaller NUMA nodes. Changing
the running value requires a restart of the database engine.
WARNING
There are known issues with the behavior of the ALTER SERVER CONFIGURATION statement with the SOFT NUMA option
and SQL Server Agent. The following is the recommended sequence of operations:
1) Stop the instance of SQL Server Agent.
2) Execute your ALTER SERVER CONFIGURATION SET SOFTNUMA statement.
3) Re-start the SQL Server instance.
4) Start the instance of SQL Server Agent.
More Information: If an ALTER SERVER CONFIGURATION with SET SOFTNUMA command is executed
before the SQL Server service is restarted, then when the SQL Server Agent service is stopped, it will execute a T-
SQL RECONFIGURE command that will revert the SOFTNUMA settings back to what they were before the
ALTER SERVER CONFIGURATION.
General Remarks
This statement does not require a restart of SQL Server, unless explicitly stated otherwise. In the case of a SQL
Server failover cluster instance, it does not require a restart of the SQL Server cluster resource.
Permissions
Requires ALTER SETTINGS permission for the process affinity option; ALTER SETTINGS and VIEW SERVER
STATE permissions for the diagnostic log and failover cluster property options; and CONTROL SERVER
permission for the HADR cluster context option.
Requires ALTER SERVER STATE permission for the buffer pool extension option.
The SQL Server Database Engine resource DLL runs under the Local System account. Therefore, the Local System
account must have read and write access to the specified path in the Diagnostic Log option.
Examples
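As an illustrative sketch of two of the options described above. The CPU range shown is an assumption for a machine with at least four logical CPUs; it is not a recommendation.

```sql
-- Bind SQL Server worker threads to CPUs 0 through 3 (illustrative range).
ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU = 0 TO 3;

-- Return to the default, recommended setting.
ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU = AUTO;

-- Start logging sp_server_diagnostics data using the default path and sizes.
ALTER SERVER CONFIGURATION
SET DIAGNOSTICS LOG ON;
```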
See Also
Soft-NUMA (SQL Server)
Change the HADR Cluster Context of Server Instance (SQL Server)
sys.dm_os_schedulers (Transact-SQL )
sys.dm_os_memory_nodes (Transact-SQL )
sys.dm_os_buffer_pool_extension_configuration (Transact-SQL )
Buffer Pool Extension
ALTER SERVER ROLE (Transact-SQL)
5/3/2018 • 3 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the membership of a server role or changes the name of a user-defined server role. Fixed server roles
cannot be renamed.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
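The syntax diagram below is reconstructed from the arguments that follow; treat it as a sketch rather than the authoritative grammar.

```sql
ALTER SERVER ROLE server_role_name
{
      ADD MEMBER server_principal
    | DROP MEMBER server_principal
    | WITH NAME = new_server_role_name
}
[ ; ]
```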
Arguments
server_role_name
Is the name of the server role to be changed.
ADD MEMBER server_principal
Adds the specified server principal to the server role. server_principal can be a login or a user-defined server role.
server_principal cannot be a fixed server role, a database role, or sa.
DROP MEMBER server_principal
Removes the specified server principal from the server role. server_principal can be a login or a user-defined
server role. server_principal cannot be a fixed server role, a database role, or sa.
WITH NAME =new_server_role_name
Specifies the new name of the user-defined server role. This name cannot already exist in the server.
Remarks
Changing the name of a user-defined server role does not change the ID number, owner, or permissions of the role.
For changing role membership, ALTER SERVER ROLE replaces sp_addsrvrolemember and sp_dropsrvrolemember.
These stored procedures are deprecated.
You can view server roles by querying the sys.server_role_members and sys.server_principals catalog views.
To change the owner of a user-defined server role, use ALTER AUTHORIZATION (Transact-SQL ).
Permissions
Requires ALTER ANY SERVER ROLE permission on the server to change the name of a user-defined server role.
Fixed server roles
To add a member to a fixed server role, you must be a member of that fixed server role, or be a member of the
sysadmin fixed server role.
NOTE
The CONTROL SERVER and ALTER ANY SERVER ROLE permissions are not sufficient to execute ALTER SERVER ROLE for a
fixed server role, and ALTER permission cannot be granted on a fixed server role.
NOTE
Unlike fixed server roles, members of a user-defined server role do not inherently have permission to add members to that
same role.
Examples
A. Changing the name of a server role
The following example creates a server role named Product , and then changes the name of server role to
Production .
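A sketch of the statements described:

```sql
-- Create the user-defined server role, then rename it.
CREATE SERVER ROLE Product;
ALTER SERVER ROLE Product WITH NAME = Production;
GO
```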
See Also
CREATE SERVER ROLE (Transact-SQL )
DROP SERVER ROLE (Transact-SQL )
CREATE ROLE (Transact-SQL )
ALTER ROLE (Transact-SQL )
DROP ROLE (Transact-SQL )
Security Stored Procedures (Transact-SQL )
Security Functions (Transact-SQL )
Principals (Database Engine)
sys.server_role_members (Transact-SQL )
sys.server_principals (Transact-SQL )
ALTER SERVICE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes an existing service.
Transact-SQL Syntax Conventions
Syntax
ALTER SERVICE service_name
[ ON QUEUE [ schema_name . ]queue_name ]
[ ( < opt_arg > [ , ...n ] ) ]
[ ; ]
<opt_arg> ::=
ADD CONTRACT contract_name | DROP CONTRACT contract_name
Arguments
service_name
Is the name of the service to change. Server, database, and schema names cannot be specified.
ON QUEUE [ schema_name. ] queue_name
Specifies the new queue for this service. Service Broker moves all messages for this service from the current
queue to the new queue.
ADD CONTRACT contract_name
Specifies a contract to add to the contract set exposed by this service.
DROP CONTRACT contract_name
Specifies a contract to delete from the contract set exposed by this service. Service Broker sends an error message
on any existing conversations with this service that use this contract.
Remarks
When the ALTER SERVICE statement deletes a contract from a service, the service can no longer be a target for
conversations that use that contract. Therefore, Service Broker does not allow new conversations to the service on
that contract. Existing conversations that use the contract are unaffected.
To alter the AUTHORIZATION for a service, use the ALTER AUTHORIZATION statement.
Permissions
Permission for altering a service defaults to the owner of the service, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.
Examples
A. Changing the queue for a service
The following example changes the //Adventure-Works.com/Expenses service to use the queue NewQueue .
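A sketch of that statement, assuming the queue NewQueue already exists in the dbo schema:

```sql
ALTER SERVICE [//Adventure-Works.com/Expenses]
    ON QUEUE dbo.NewQueue;
```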
See Also
CREATE SERVICE (Transact-SQL )
DROP SERVICE (Transact-SQL )
EVENTDATA (Transact-SQL )
ALTER SERVICE MASTER KEY (Transact-SQL)
5/3/2018 • 3 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the service master key of an instance of SQL Server.
Transact-SQL Syntax Conventions
Syntax
ALTER SERVICE MASTER KEY
[ { <regenerate_option> | <recover_option> } ] [;]
<regenerate_option> ::=
[ FORCE ] REGENERATE
<recover_option> ::=
{ WITH OLD_ACCOUNT = 'account_name' , OLD_PASSWORD = 'password' }
|
{ WITH NEW_ACCOUNT = 'account_name' , NEW_PASSWORD = 'password' }
Arguments
FORCE
Indicates that the service master key should be regenerated, even at the risk of data loss. For more information,
see Changing the SQL Server Service Account later in this topic.
REGENERATE
Indicates that the service master key should be regenerated.
OLD_ACCOUNT ='account_name'
Specifies the name of the old Windows service account.
WARNING
This option is obsolete. Do not use. Use SQL Server Configuration Manager instead.
OLD_PASSWORD ='password'
Specifies the password of the old Windows service account.
WARNING
This option is obsolete. Do not use. Use SQL Server Configuration Manager instead.
NEW_ACCOUNT ='account_name'
Specifies the name of the new Windows service account.
WARNING
This option is obsolete. Do not use. Use SQL Server Configuration Manager instead.
NEW_PASSWORD ='password'
Specifies the password of the new Windows service account.
WARNING
This option is obsolete. Do not use. Use SQL Server Configuration Manager instead.
Remarks
The service master key is automatically generated the first time it is needed to encrypt a linked server password,
credential, or database master key. The service master key is encrypted using the local machine key or the
Windows Data Protection API. This API uses a key that is derived from the Windows credentials of the SQL
Server service account.
SQL Server 2012 (11.x) uses the AES encryption algorithm to protect the service master key (SMK) and the
database master key (DMK). AES is a newer encryption algorithm than 3DES used in earlier versions. After
upgrading an instance of the Database Engine to SQL Server 2012 (11.x) the SMK and DMK should be
regenerated in order to upgrade the master keys to AES. For more information about regenerating the DMK, see
ALTER MASTER KEY (Transact-SQL ).
The service master key is the root of the SQL Server encryption hierarchy. The service master key directly or
indirectly protects all other keys and secrets in the tree. If a dependent key cannot be decrypted during a forced
regeneration, the data the key secures will be lost.
If you move SQL Server to another computer, you have to use the same service account to decrypt the SMK; SQL
Server will fix the machine account encryption automatically.
Permissions
Requires CONTROL SERVER permission on the server.
Examples
The following example regenerates the service master key.
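A sketch of the statement:

```sql
ALTER SERVICE MASTER KEY REGENERATE;
GO
```

Use the FORCE option only as a last resort, since any key that cannot be decrypted during a forced regeneration is lost.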
See Also
RESTORE SERVICE MASTER KEY (Transact-SQL )
BACKUP SERVICE MASTER KEY (Transact-SQL )
Encryption Hierarchy
ALTER SYMMETRIC KEY (Transact-SQL)
5/3/2018 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes the properties of a symmetric key.
Transact-SQL Syntax Conventions
Syntax
ALTER SYMMETRIC KEY Key_name <alter_option>
<alter_option> ::=
ADD ENCRYPTION BY <encrypting_mechanism> [ , ... n ]
|
DROP ENCRYPTION BY <encrypting_mechanism> [ , ... n ]
<encrypting_mechanism> ::=
CERTIFICATE certificate_name
|
PASSWORD = 'password'
|
SYMMETRIC KEY Symmetric_Key_Name
|
ASYMMETRIC KEY Asym_Key_Name
Arguments
Key_name
Is the name by which the symmetric key to be changed is known in the database.
ADD ENCRYPTION BY
Adds encryption by using the specified method.
DROP ENCRYPTION BY
Drops encryption by the specified method. You cannot remove all the encryptions from a symmetric key.
CERTIFICATE Certificate_name
Specifies the certificate that is used to encrypt the symmetric key. This certificate must already exist in the
database.
PASSWORD ='password'
Specifies the password that is used to encrypt the symmetric key. password must meet the Windows password
policy requirements of the computer that is running the instance of SQL Server.
SYMMETRIC KEY Symmetric_Key_Name
Specifies the symmetric key that is used to encrypt the symmetric key that is being changed. This symmetric key
must already exist in the database and must be open.
ASYMMETRIC KEY Asym_Key_Name
Specifies the asymmetric key that is used to encrypt the symmetric key that is being changed. This asymmetric key
must already exist in the database.
Remarks
Caution
When a symmetric key is encrypted with a password instead of with the public key of the database master key, the
TRIPLE_DES encryption algorithm is used. Because of this, keys that are created with a strong encryption
algorithm, such as AES, are themselves secured by a weaker algorithm.
To change the encryption of the symmetric key, use the ADD ENCRYPTION and DROP ENCRYPTION phrases. It
is never possible for a key to be entirely without encryption. For this reason, the best practice is to add the new
form of encryption before removing the old form of encryption.
To change the owner of a symmetric key, use ALTER AUTHORIZATION.
NOTE
The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128
when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of the AES
algorithms instead. In SQL Server 2012 (11.x) material encrypted using RC4 or RC4_128 can be decrypted in any
compatibility level.
Permissions
Requires ALTER permission on the symmetric key. If adding encryption by a certificate or asymmetric key,
requires VIEW DEFINITION permission on the certificate or asymmetric key. If dropping encryption by a
certificate or asymmetric key, requires CONTROL permission on the certificate or asymmetric key.
Examples
The following example changes the encryption method that is used to protect a symmetric key. The symmetric key
JanainaKey043 is encrypted using certificate Shipping04 when the key was created. Because the key can never be
stored unencrypted, in this example, encryption is added by password, and then encryption is removed by
certificate.
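A sketch of the sequence described above; the password value is a placeholder:

```sql
-- Open the key using its current certificate encryption.
OPEN SYMMETRIC KEY JanainaKey043
    DECRYPTION BY CERTIFICATE Shipping04;

-- First add the new form of encryption (here, a password).
ALTER SYMMETRIC KEY JanainaKey043
    ADD ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>';

-- Then remove the old form of encryption.
ALTER SYMMETRIC KEY JanainaKey043
    DROP ENCRYPTION BY CERTIFICATE Shipping04;

CLOSE SYMMETRIC KEY JanainaKey043;
GO
```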
See Also
CREATE SYMMETRIC KEY (Transact-SQL )
OPEN SYMMETRIC KEY (Transact-SQL )
CLOSE SYMMETRIC KEY (Transact-SQL )
DROP SYMMETRIC KEY (Transact-SQL )
Encryption Hierarchy
ALTER TABLE (Transact-SQL)
5/3/2018 • 64 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies a table definition by altering, adding, or dropping columns and constraints, reassigning and
rebuilding partitions, or disabling or enabling constraints and triggers.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
-- Syntax for SQL Server and Azure SQL Database
| ADD
{
<column_definition>
| <computed_column_definition>
| <table_constraint>
| <column_set_definition>
} [ ,...n ]
| [ system_start_time_column_name datetime2 GENERATED ALWAYS AS ROW START
[ HIDDEN ] [ NOT NULL ] [ CONSTRAINT constraint_name ]
DEFAULT constant_expression [WITH VALUES] ,
system_end_time_column_name datetime2 GENERATED ALWAYS AS ROW END
[ HIDDEN ] [ NOT NULL ] [ CONSTRAINT constraint_name ]
DEFAULT constant_expression [WITH VALUES] ,
]
PERIOD FOR SYSTEM_TIME ( system_start_time_column_name, system_end_time_column_name )
| DROP
[ {
[ CONSTRAINT ] [ IF EXISTS ]
{
constraint_name
[ WITH
( <drop_clustered_constraint_option> [ ,...n ] )
]
} [ ,...n ]
| COLUMN [ IF EXISTS ]
{
column_name
} [ ,...n ]
| PERIOD FOR SYSTEM_TIME
} [ ,...n ]
| [ WITH { CHECK | NOCHECK } ] { CHECK | NOCHECK } CONSTRAINT
{ ALL | constraint_name [ ,...n ] }
| <table_option>
| <filetable_option>
| <stretch_configuration>
}
[ ; ]
<column_set_definition> ::=
column_set_name XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
<drop_clustered_constraint_option> ::=
{
MAXDOP = max_degree_of_parallelism
| ONLINE = { ON | OFF }
| MOVE TO
{ partition_scheme_name ( column_name ) | filegroup | "default" }
}
<table_option> ::=
{
SET ( LOCK_ESCALATION = { AUTO | TABLE | DISABLE } )
}
<filetable_option> ::=
{
[ { ENABLE | DISABLE } FILETABLE_NAMESPACE ]
[ SET ( FILETABLE_DIRECTORY = directory_name ) ]
}
<stretch_configuration> ::=
{
SET (
REMOTE_DATA_ARCHIVE
{
= ON ( <table_stretch_options> )
| = OFF_WITHOUT_DATA_RECOVERY ( MIGRATION_STATE = PAUSED )
| ( <table_stretch_options> [, ...n] )
}
)
}
<table_stretch_options> ::=
{
[ FILTER_PREDICATE = { null | table_predicate_function } , ]
MIGRATION_STATE = { OUTBOUND | INBOUND | PAUSED }
}
<single_partition_rebuild_option> ::=
{
SORT_IN_TEMPDB = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
| ONLINE = { ON [( <low_priority_lock_wait> ) ] | OFF }
}
<low_priority_lock_wait>::=
{
WAIT_AT_LOW_PRIORITY ( MAX_DURATION = <time> [ MINUTES ],
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS } )
}
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
<column_definition>::=
{
column_name
type_name [ ( precision [ , scale ] ) ]
[ <column_constraint> ]
[ COLLATE Windows_collation_name ]
[ NULL | NOT NULL ]
}
<column_constraint>::=
[ CONSTRAINT constraint_name ] DEFAULT constant_expression
Arguments
database_name
Is the name of the database in which the table was created.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to be altered. If the table is not in the current database or is not contained by the
schema owned by the current user, the database and schema must be explicitly specified.
ALTER COLUMN
Specifies that the named column is to be changed or altered.
The modified column cannot be any one of the following:
A column with a timestamp data type.
The ROWGUIDCOL for the table.
A computed column or used in a computed column.
Used in statistics generated by the CREATE STATISTICS statement unless the column is a varchar,
nvarchar, or varbinary data type, the data type is not changed, and the new size is equal to or greater
than the old size, or if the column is changed from not null to null. First, remove the statistics using the
DROP STATISTICS statement. Statistics that are automatically generated by the query optimizer are
automatically dropped by ALTER COLUMN.
Used in a PRIMARY KEY or [FOREIGN KEY ] REFERENCES constraint.
Used in a CHECK or UNIQUE constraint. However, changing the length of a variable-length column
used in a CHECK or UNIQUE constraint is allowed.
Associated with a default definition. However, the length, precision, or scale of a column can be
changed if the data type is not changed.
The data type of text, ntext and image columns can be changed only in the following ways:
text to varchar(max), nvarchar(max), or xml
ntext to varchar(max), nvarchar(max), or xml
image to varbinary(max)
Some data type changes may cause a change in the data. For example, changing an nchar or nvarchar
column to char or varchar may cause the conversion of extended characters. For more information, see
CAST and CONVERT (Transact-SQL). Reducing the precision or scale of a column may cause data truncation.
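For instance, a legacy text column can be converted in place; dbo.LegacyDocs and its column are hypothetical names used only for illustration:

```sql
-- text converts to varchar(max); ntext and image convert
-- analogously to nvarchar(max)/xml and varbinary(max)
ALTER TABLE dbo.LegacyDocs
    ALTER COLUMN DocBody varchar(max);
```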
NOTE
The data type of a column of a partitioned table cannot be changed.
The data type of columns included in an index cannot be changed unless the column is a varchar, nvarchar, or
varbinary data type, and the new size is equal to or larger than the old size.
A column included in a primary key constraint cannot be changed from NOT NULL to NULL.
If the column being modified is encrypted using ENCRYPTED WITH, you can change the data type to a
compatible data type (such as INT to BIGINT), but you cannot change any encryption settings.
column_name
Is the name of the column to be altered, added, or dropped. column_name can be a maximum of 128
characters. For new columns, column_name can be omitted for columns created with a timestamp data type.
The name timestamp is used if no column_name is specified for a timestamp data type column.
[ type_schema_name. ] type_name
Is the new data type for the altered column, or the data type for the added column. type_name cannot be
specified for existing columns of partitioned tables. type_name can be any one of the following:
A SQL Server system data type.
An alias data type based on a SQL Server system data type. Alias data types are created with the
CREATE TYPE statement before they can be used in a table definition.
A .NET Framework user-defined type, and the schema to which it belongs. .NET Framework user-
defined types are created with the CREATE TYPE statement before they can be used in a table
definition.
The following are criteria for type_name of an altered column:
The previous data type must be implicitly convertible to the new data type.
type_name cannot be timestamp.
ANSI_NULL defaults are always on for ALTER COLUMN; if not specified, the column is nullable.
ANSI_PADDING padding is always ON for ALTER COLUMN.
If the modified column is an identity column, new_data_type must be a data type that supports the identity
property.
The current setting for SET ARITHABORT is ignored. ALTER TABLE operates as if ARITHABORT is set to
ON.
NOTE
If the COLLATE clause is not specified, changing the data type of a column will cause a collation change to the default
collation of the database.
precision
Is the precision for the specified data type. For more information about valid precision values, see Precision,
Scale, and Length (Transact-SQL).
scale
Is the scale for the specified data type. For more information about valid scale values, see Precision, Scale, and
Length (Transact-SQL).
max
Applies only to the varchar, nvarchar, and varbinary data types, for storing up to 2^31-1 bytes of character,
binary, or Unicode data.
xml_schema_collection
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Applies only to the xml data type for associating an XML schema with the type. Before typing an xml column
to a schema collection, the schema collection must first be created in the database by using CREATE XML
SCHEMA COLLECTION.
COLLATE < collation_name >
Specifies the new collation for the altered column. If not specified, the column is assigned the default
collation of the database. Collation name can be either a Windows collation name or a SQL collation name.
For a list and more information, see Windows Collation Name (Transact-SQL) and SQL Server Collation
Name (Transact-SQL).
The COLLATE clause can be used to change the collations only of columns of the char, varchar, nchar, and
nvarchar data types. To change the collation of a user-defined alias data type column, you must execute
separate ALTER TABLE statements to change the column to a SQL Server system data type and change its
collation, and then change the column back to an alias data type.
ALTER COLUMN cannot have a collation change if one or more of the following conditions exist:
A CHECK constraint, FOREIGN KEY constraint, or computed column references the changed column.
Any index, statistics, or full-text index is created on the column. Statistics created automatically on the
changed column are dropped if the column collation is changed.
A schema-bound view or function references the column.
For more information, see COLLATE (Transact-SQL).
NULL | NOT NULL
Specifies whether the column can accept null values. Columns that do not allow null values can be added with
ALTER TABLE only if they have a default specified or if the table is empty. NOT NULL can be specified for
computed columns only if PERSISTED is also specified. If the new column allows null values and no default is
specified, the new column contains a null value for each row in the table. If the new column allows null values
and a default definition is added with the new column, WITH VALUES can be used to store the default value
in the new column for each existing row in the table.
If the new column does not allow null values and the table is not empty, a DEFAULT definition must be added
with the new column, and the new column automatically loads with the default value in each existing row.
NULL can be specified in ALTER COLUMN to force a NOT NULL column to allow null values, except for
columns in PRIMARY KEY constraints. NOT NULL can be specified in ALTER COLUMN only if the column
contains no null values. Existing null values must be updated to some value before ALTER COLUMN NOT
NULL is allowed.
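A minimal sketch of that sequence (dbo.doc_exb, its column, and the replacement value are illustrative):

```sql
-- Replace existing NULLs first, then disallow them
UPDATE dbo.doc_exb SET column_b = 0 WHERE column_b IS NULL;
ALTER TABLE dbo.doc_exb
    ALTER COLUMN column_b int NOT NULL;
```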
When you create or alter a table with the CREATE TABLE or ALTER TABLE statements, the database and
session settings influence and possibly override the nullability of the data type that is used in a column
definition. We recommend that you always explicitly define a column as NULL or NOT NULL for
noncomputed columns.
If you add a column with a user-defined data type, we recommend that you define the column with the same
nullability as the user-defined data type and specify a default value for the column. For more information, see
CREATE TABLE (Transact-SQL).
NOTE
If NULL or NOT NULL is specified with ALTER COLUMN, new_data_type [(precision [, scale ])] must also be specified. If
the data type, precision, and scale are not changed, specify the current column values.
NOTE
Dropping a column does not reclaim the disk space of the column. You may have to reclaim the disk space of a
dropped column when the row size of a table is near, or has exceeded, its limit. Reclaim space by creating a clustered
index on the table or rebuilding an existing clustered index by using ALTER INDEX. For information about the impact of
dropping LOB data types, see this CSS blog entry.
NOTE
Parallel index operations are not available in every edition of SQL Server. For more information, see Editions and
Supported Features for SQL Server 2016.
NOTE
Online index operations are not available in every edition of SQL Server. For more information, see Editions and
Supported Features for SQL Server 2016.
NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in MOVE
TO "default" or MOVE TO [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the
current session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).
{ CHECK | NOCHECK } CONSTRAINT
Specifies that constraint_name is enabled or disabled. This option can only be used with FOREIGN KEY and
CHECK constraints. When NOCHECK is specified, the constraint is disabled and future inserts or updates to
the column are not validated against the constraint conditions. DEFAULT, PRIMARY KEY, and UNIQUE
constraints cannot be disabled.
ALL
Specifies that all constraints are either disabled with the NOCHECK option or enabled with the CHECK
option.
{ ENABLE | DISABLE } TRIGGER
Specifies that trigger_name is enabled or disabled. When a trigger is disabled it is still defined for the table;
however, when INSERT, UPDATE, or DELETE statements are executed against the table, the actions in the
trigger are not performed until the trigger is re-enabled.
ALL
Specifies that all triggers in the table are enabled or disabled.
trigger_name
Specifies the name of the trigger to disable or enable.
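A minimal sketch of the trigger syntax (dbo.T1 and trg_example are hypothetical names):

```sql
-- The trigger stays defined but does not fire while disabled
ALTER TABLE dbo.T1 DISABLE TRIGGER trg_example;
-- Re-enable it, or use ALL to act on every trigger on the table
ALTER TABLE dbo.T1 ENABLE TRIGGER trg_example;
ALTER TABLE dbo.T1 ENABLE TRIGGER ALL;
```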
{ ENABLE | DISABLE } CHANGE_TRACKING
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies whether change tracking is enabled or disabled for the table. By default, change tracking is disabled.
This option is available only when change tracking is enabled for the database. For more information, see
ALTER DATABASE SET Options (Transact-SQL).
To enable change tracking, the table must have a primary key.
WITH ( TRACK_COLUMNS_UPDATED = { ON | OFF } )
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies whether the Database Engine tracks which change tracked columns were updated. The default value
is OFF.
SWITCH [ PARTITION source_partition_number_expression ] TO [ schema_name. ] target_table [ PARTITION
target_partition_number_expression ]
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Switches a block of data in one of the following ways:
Reassigns all data of a table as a partition to an already-existing partitioned table.
Switches a partition from one partitioned table to another.
Reassigns all data in one partition of a partitioned table to an existing non-partitioned table.
If table is a partitioned table, source_partition_number_expression must be specified. If target_table is
partitioned, target_partition_number_expression must be specified. If reassigning a table's data as a partition
to an already-existing partitioned table, or switching a partition from one partitioned table to another, the
target partition must exist and it must be empty.
If reassigning one partition's data to form a single table, the target table must already be created and it must
be empty. Both the source table or partition, and the target table or partition, must reside in the same
filegroup. The corresponding indexes, or index partitions, must also reside in the same filegroup. Many
additional restrictions apply to switching partitions. table and target_table cannot be the same. target_table
can be a multi-part identifier.
source_partition_number_expression and target_partition_number_expression are constant expressions that
can reference variables and functions. These include user-defined type variables and user-defined functions.
They cannot reference Transact-SQL expressions.
A partitioned table with a clustered columnstore index behaves like a partitioned heap:
The primary key must include the partition key.
A unique index must include the partition key. Note that adding the partition key to an existing
unique index can change its uniqueness.
In order to switch partitions, all non-clustered indexes must include the partition key.
For SWITCH restriction when using replication, see Replicate Partitioned Tables and Indexes.
Nonclustered columnstore indexes built for SQL Server 2016 CTP1, and for SQL Database before version
V12, were in a read-only format. Nonclustered columnstore indexes must be rebuilt to the current format
(which is updatable) before any PARTITION operations can be performed.
SET ( FILESTREAM_ON = { partition_scheme_name | filestream_filegroup_name | "default" | "NULL" })
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies where FILESTREAM data is stored.
ALTER TABLE with the SET FILESTREAM_ON clause will succeed only if the table has no FILESTREAM
columns. The FILESTREAM columns can be added by using a second ALTER TABLE statement.
If partition_scheme_name is specified, the rules for CREATE TABLE apply. The table should already be
partitioned for row data, and its partition scheme must use the same partition function and columns as the
FILESTREAM partition scheme.
filestream_filegroup_name specifies the name of a FILESTREAM filegroup. The filegroup must have one file
that is defined for the filegroup by using a CREATE DATABASE or ALTER DATABASE statement, or an error
is raised.
"default" specifies the FILESTREAM filegroup with the DEFAULT property set. If there is no FILESTREAM
filegroup, an error is raised.
"NULL" specifies that all references to FILESTREAM filegroups for the table will be removed. All
FILESTREAM columns must be dropped first. You must use SET FILESTREAM_ON="NULL" to delete all
FILESTREAM data that is associated with a table.
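A hedged sketch of the two-statement pattern described above (dbo.PhotoTable and the filegroup name FSGroup are hypothetical):

```sql
-- Point the table at a FILESTREAM filegroup first ...
ALTER TABLE dbo.PhotoTable
    SET ( FILESTREAM_ON = FSGroup );
-- ... then add the FILESTREAM column in a second statement
ALTER TABLE dbo.PhotoTable
    ADD Photo varbinary(max) FILESTREAM NULL;
```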
SET ( SYSTEM_VERSIONING = { OFF | ON [ ( HISTORY_TABLE = schema_name . history_table_name [ ,
DATA_CONSISTENCY_CHECK = { ON | OFF } ] ) ] } )
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Disables or enables system versioning of a table. To enable system versioning of a table, the system verifies
that the datatype, nullability constraint, and primary key constraint requirements for system versioning are
met. If the HISTORY_TABLE argument is not used, the system generates a new history table matching the
schema of the current table, creates a link between the two tables, and enables the system to record the
history of each record in the current table in the history table. The name
of this history table will be MSSQL_TemporalHistoryFor<primary_table_object_id> . If the HISTORY_TABLE
argument is used to create a link to and use an existing history table, the link is created between the current
table and the specified table. When creating a link to an existing history table, you can choose to perform a
data consistency check. This data consistency check ensures that existing records do not overlap. Performing
the data consistency check is the default. For more information, see Temporal Tables.
HISTORY_RETENTION_PERIOD = { INFINITE | number {DAY | DAYS | WEEK | WEEKS | MONTH |
MONTHS | YEAR | YEARS } }
Applies to: Azure SQL Database.
Specifies finite or infinite retention for historical data in a temporal table. If omitted, infinite retention is
assumed.
SET ( LOCK_ESCALATION = { AUTO | TABLE | DISABLE } )
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies the allowed methods of lock escalation for a table.
AUTO
This option allows the SQL Server Database Engine to select the lock escalation granularity that is appropriate
for the table schema.
If the table is partitioned, lock escalation will be allowed to partition. After the lock is escalated to the
partition level, the lock will not be escalated later to TABLE granularity.
If the table is not partitioned, the lock escalation will be done to the TABLE granularity.
TABLE
Lock escalation will be done at table-level granularity regardless of whether the table is partitioned.
TABLE is the default value.
DISABLE
Prevents lock escalation in most cases. Table-level locks are not completely disallowed. For example, when
you are scanning a table that has no clustered index under the serializable isolation level, the Database
Engine must take a table lock to protect data integrity.
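For example, a minimal statement (dbo.T1 is an illustrative table name):

```sql
-- Let the Database Engine choose the escalation granularity;
-- on a partitioned table this permits partition-level escalation
ALTER TABLE dbo.T1 SET (LOCK_ESCALATION = AUTO);
```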
REBUILD
Use the REBUILD WITH syntax to rebuild an entire table including all the partitions in a partitioned table. If
the table has a clustered index, the REBUILD option rebuilds the clustered index. REBUILD can be performed
as an ONLINE operation.
Use the REBUILD PARTITION syntax to rebuild a single partition in a partitioned table.
PARTITION = ALL
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Rebuilds all partitions when changing the partition compression settings.
REBUILD WITH ( <rebuild_option> )
All options apply to a table with a clustered index. If the table does not have a clustered index, only some of
the options affect the heap structure.
When a specific compression setting is not specified with the REBUILD operation, the current compression
setting for the partition is used. To return the current setting, query the data_compression column in the
sys.partitions catalog view.
For complete descriptions of the rebuild options, see index_option (Transact-SQL).
DATA_COMPRESSION
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies the data compression option for the specified table, partition number, or range of partitions. The
options are as follows:
NONE
Table or specified partitions are not compressed. This does not apply to columnstore tables.
ROW
Table or specified partitions are compressed by using row compression. This does not apply to columnstore
tables.
PAGE
Table or specified partitions are compressed by using page compression. This does not apply to columnstore
tables.
COLUMNSTORE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Applies only to columnstore tables. COLUMNSTORE specifies to decompress a partition that was
compressed with the COLUMNSTORE_ARCHIVE option. When the data is restored, it will continue to be
compressed with the columnstore compression that is used for all columnstore tables.
COLUMNSTORE_ARCHIVE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Applies only to columnstore tables, which are tables stored with a clustered columnstore index.
COLUMNSTORE_ARCHIVE will further compress the specified partition to a smaller size. This can be used
for archival, or for other situations that require less storage and can afford more time for storage and retrieval.
To rebuild multiple partitions at the same time, see index_option (Transact-SQL). If the table does not have a
clustered index, changing the data compression rebuilds the heap and the nonclustered indexes. For more
information about compression, see Data Compression.
ONLINE = { ON | OFF } <as applies to single_partition_rebuild_option>
Specifies whether a single partition of the underlying tables and associated indexes are available for queries
and data modification during the index operation. The default is OFF. REBUILD can be performed as an
ONLINE operation.
ON
Long-term table locks are not held for the duration of the index operation. An S-lock on the table is required
at the beginning of the index rebuild, and a Sch-M lock on the table at the end of the online index rebuild.
Although both locks are short metadata locks, the Sch-M lock in particular must wait for all blocking
transactions to be completed. During the wait time, the Sch-M lock blocks all other transactions that wait
behind this lock when accessing the same table.
NOTE
Online index rebuild can set the low_priority_lock_wait options described later in this section.
OFF
Table locks are applied for the duration of the index operation. This prevents all user access to the underlying
table for the duration of the operation.
column_set_name XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Is the name of the column set. A column set is an untyped XML representation that combines all of the sparse
columns of a table into a structured output. A column set cannot be added to a table that contains sparse
columns. For more information about column sets, see Use Column Sets.
{ ENABLE | DISABLE } FILETABLE_NAMESPACE
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Enables or disables the system-defined constraints on a FileTable. Can only be used with a FileTable.
SET ( FILETABLE_DIRECTORY = directory_name )
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the Windows-compatible FileTable directory name. This name should be unique among all the
FileTable directory names in the database. Uniqueness comparison is case-insensitive, regardless of SQL
collation settings. Can only be used with a FileTable.
SET (
REMOTE_DATA_ARCHIVE
{
= ON ( <table_stretch_options> )
| = OFF_WITHOUT_DATA_RECOVERY
( MIGRATION_STATE = PAUSED ) | ( <table_stretch_options> [, ...n] )
} )
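Disabling Stretch by copying the remote data back to SQL Server looks roughly like the following (dbo.StretchTable is a hypothetical name):

```sql
-- Copy migrated data back from Azure, then disable Stretch
ALTER TABLE dbo.StretchTable
    SET ( REMOTE_DATA_ARCHIVE ( MIGRATION_STATE = INBOUND ) );
```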
This operation incurs data transfer costs, and it can't be canceled. For more info, see Data Transfers
Pricing Details.
After all the remote data has been copied from Azure back to SQL Server, Stretch is disabled for the
table.
To disable Stretch for a table and abandon the remote data, run the following command.
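A hedged sketch of that command, following the OFF_WITHOUT_DATA_RECOVERY branch of the syntax above (dbo.StretchTable is a hypothetical name):

```sql
-- Disable Stretch and abandon the data already migrated to Azure
ALTER TABLE dbo.StretchTable
    SET ( REMOTE_DATA_ARCHIVE = OFF_WITHOUT_DATA_RECOVERY ( MIGRATION_STATE = PAUSED ) );
```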
After you disable Stretch Database for a table, data migration stops and query results no longer
include results from the remote table.
Disabling Stretch does not remove the remote table. If you want to delete the remote table, you have to
drop it by using the Azure management portal.
[ FILTER_PREDICATE = { null | predicate } ]
Applies to: SQL Server 2017.
Optionally specifies a filter predicate to select rows to migrate from a table that contains both historical and
current data. The predicate must call a deterministic inline table-valued function. For more info, see Enable
Stretch Database for a table and Select rows to migrate by using a filter function (Stretch Database).
IMPORTANT
If you provide a filter predicate that performs poorly, data migration also performs poorly. Stretch Database applies the
filter predicate to the table by using the CROSS APPLY operator.
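A sketch under the stated requirement that the predicate be a deterministic inline table-valued function; the function, table, and column names are hypothetical:

```sql
-- The filter function must be inline, schema-bound, and deterministic
CREATE FUNCTION dbo.fn_stretchpredicate(@ShipDate datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @ShipDate < CONVERT(datetime, '1/1/2016', 101);
GO
-- Migrate only the rows the predicate selects
ALTER TABLE dbo.StretchTable
    SET ( REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_stretchpredicate(ShipDate),
        MIGRATION_STATE = OUTBOUND ) );
```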
Remarks
To add new rows of data, use INSERT. To remove rows of data, use DELETE or TRUNCATE TABLE. To change
the values in existing rows, use UPDATE.
If there are any execution plans in the procedure cache that reference the table, ALTER TABLE marks them to
be recompiled on their next execution.
Partitioned Tables
In addition to performing SWITCH operations that involve partitioned tables, ALTER TABLE can be used to
change the state of the columns, constraints, and triggers of a partitioned table just like it is used for
nonpartitioned tables. However, this statement cannot be used to change the way the table itself is
partitioned. To repartition a partitioned table, use ALTER PARTITION SCHEME and ALTER PARTITION
FUNCTION. Additionally, you cannot change the data type of a column of a partitioned table.
NOTE
The options listed under <drop_clustered_constraint_option> apply to clustered indexes on tables and cannot be
applied to clustered indexes on views or nonclustered indexes.
Data Compression
System tables cannot be enabled for compression. If the table is a heap, the rebuild operation for ONLINE
mode will be single threaded. Use OFFLINE mode for a multi-threaded heap rebuild operation. For more
information about data compression, see Data Compression.
To evaluate how changing the compression state will affect a table, an index, or a partition, use the
sp_estimate_data_compression_savings stored procedure.
The following restrictions apply to partitioned tables:
You cannot change the compression setting of a single partition if the table has nonaligned indexes.
The ALTER TABLE <table> REBUILD PARTITION ... syntax rebuilds the specified partition.
The ALTER TABLE <table> REBUILD WITH ... syntax rebuilds all partitions.
Permissions
Requires ALTER permission on the table.
ALTER TABLE permissions apply to both tables involved in an ALTER TABLE SWITCH statement. Any data
that is switched inherits the security of the target table.
If any columns in the ALTER TABLE statement are defined to be of a common language runtime (CLR)
user-defined type or alias data type, REFERENCES permission on the type is required.
Adding a column that updates the rows of the table requires UPDATE permission on the table. For example,
adding a NOT NULL column with a default value or adding an identity column when the table is not empty.
Examples
CATEGORY: FEATURED SYNTAX ELEMENTS
Adding columns and constraints: ADD • PRIMARY KEY with index options • sparse columns and column sets
Altering a column definition: change data type • change column size • collation
Disabling and enabling constraints and triggers: CHECK • NOCHECK • ENABLE TRIGGER • DISABLE TRIGGER
The following example creates a table with two sparse columns, and then adds another sparse column, C5.
CREATE TABLE T1
(C1 int PRIMARY KEY,
C2 varchar(50) SPARSE NULL,
C3 int SPARSE NULL,
C4 int ) ;
GO
ALTER TABLE T1
ADD C5 char(100) SPARSE NULL ;
GO
To convert the C4 non-sparse column to a sparse column, execute the following statement.
ALTER TABLE T1
ALTER COLUMN C4 ADD SPARSE ;
GO
To convert the C4 sparse column to a nonsparse column, execute the following statement.
ALTER TABLE T1
ALTER COLUMN C4 DROP SPARSE;
GO
CREATE TABLE T2
(C1 int PRIMARY KEY,
C2 varchar(50) NULL,
C3 int NULL,
C4 int ) ;
GO
The following three statements add a column set named CS, and then modify columns C2 and C3 to
SPARSE.
ALTER TABLE T2
ADD CS XML COLUMN_SET FOR ALL_SPARSE_COLUMNS ;
GO
ALTER TABLE T2
ALTER COLUMN C2 ADD SPARSE ;
GO
ALTER TABLE T2
ALTER COLUMN C3 ADD SPARSE ;
GO
Examples
Altering a Column Definition
A. Changing the data type of a column
The following example changes a column of a table from INT to DECIMAL.
CREATE TABLE dbo.doc_exy (column_a INT ) ;
GO
INSERT INTO dbo.doc_exy (column_a) VALUES (10) ;
GO
ALTER TABLE dbo.doc_exy ALTER COLUMN column_a DECIMAL (5, 2) ;
GO
DROP TABLE dbo.doc_exy ;
GO
CREATE TABLE T3
(C1 int PRIMARY KEY,
C2 varchar(50) NULL,
C3 int NULL,
C4 int ) ;
GO
Next, the collation of column C2 is changed to Latin1_General_BIN. Note that the data type must be
specified, even though it is not changed.
ALTER TABLE T3
ALTER COLUMN C2 varchar(50) COLLATE Latin1_General_BIN;
GO
The following example rebuilds an entire table and applies page compression.
ALTER TABLE T1
REBUILD WITH (DATA_COMPRESSION = PAGE);
The following example changes the compression of a partitioned table. The REBUILD PARTITION = 1 syntax
causes only partition number 1 to be rebuilt.
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
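A hedged sketch of that statement (PartitionTable1 and the target compression setting are illustrative):

```sql
ALTER TABLE PartitionTable1
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = NONE);
```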
The same operation using the following alternate syntax causes all partitions in the table to be rebuilt.
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
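A hedged sketch of the alternate syntax, again with illustrative names; all partitions are rebuilt, but the compression change is scoped to partition 1:

```sql
ALTER TABLE PartitionTable1
REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE ON PARTITIONS(1));
```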
The following example decompresses a columnstore table partition that was compressed with
COLUMNSTORE_ARCHIVE option. When the data is restored, it will continue to be compressed with the
columnstore compression that is used for all columnstore tables.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
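A hedged sketch (the table name and partition number are illustrative):

```sql
ALTER TABLE ColumnstoreTable1
REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = COLUMNSTORE ON PARTITIONS(7));
```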
The following example enables change tracking for the table.
USE AdventureWorks2012;
ALTER TABLE Person.Person
ENABLE CHANGE_TRACKING;
The following example enables change tracking and enables the tracking of the columns that are updated
during a change.
Applies to: SQL Server 2008 through SQL Server 2017.
USE AdventureWorks2012;
GO
ALTER TABLE Person.Person
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)
The following example disables change tracking for the table.
USE AdventureWorks2012;
GO
ALTER TABLE Person.Person
DISABLE CHANGE_TRACKING;
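The constraint example below assumes a table and CHECK constraint along these lines (a reconstruction; the names match the statements that follow):

```sql
CREATE TABLE dbo.cnst_example
(id INT NOT NULL,
 name VARCHAR(10) NOT NULL,
 salary MONEY NOT NULL
    CONSTRAINT salary_cap CHECK (salary < 100000));
-- Disable the constraint so future inserts are not validated
ALTER TABLE dbo.cnst_example NOCHECK CONSTRAINT salary_cap;
```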
-- Valid inserts
INSERT INTO dbo.cnst_example VALUES (1,'Joe Brown',65000);
INSERT INTO dbo.cnst_example VALUES (2,'Mary Smith',75000);
-- Re-enable the constraint and try another insert; this will fail.
ALTER TABLE dbo.cnst_example CHECK CONSTRAINT salary_cap;
INSERT INTO dbo.cnst_example VALUES (4,'Eric James',110000) ;
Online Operations
A. Online index rebuild using low priority wait options
The following example shows how to perform an online index rebuild specifying the low priority wait options.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
ALTER TABLE T1
REBUILD WITH
(
PAD_INDEX = ON,
ONLINE = ON ( WAIT_AT_LOW_PRIORITY ( MAX_DURATION = 4 MINUTES,
ABORT_AFTER_WAIT = BLOCKERS ) )
)
;
System Versioning
The following four examples will help you become familiar with the syntax for using system versioning. For
additional assistance, see Getting Started with System-Versioned Temporal Tables.
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
A. Add System Versioning to Existing Tables
The following example shows how to add system versioning to an existing table and create a future history
table. This example assumes that there is an existing table called InsurancePolicy with a primary key defined.
This example populates the newly created period columns for system versioning using default values for the
start and end times because these values cannot be null. This example uses the HIDDEN clause to ensure no
impact on existing applications interacting with the current table. It also uses
HISTORY_RETENTION_PERIOD, which is available in SQL Database only.
ALTER TABLE ProjectTaskHistory ALTER COLUMN [Changed Date] datetime2 NOT NULL;
ALTER TABLE ProjectTaskHistory ALTER COLUMN [Revised Date] datetime2 NOT NULL;
-- Add SYSTEM_TIME period and set system versioning with linking two existing tables
-- (a certain set of data checks happen in the background)
ALTER TABLE ProjectTaskCurrent
ADD PERIOD FOR SYSTEM_TIME ([Changed Date], [Revised Date]);
BEGIN TRAN
/* Takes schema lock on both tables */
ALTER TABLE Department
SET (SYSTEM_VERSIONING = OFF);
/* expand table schema for temporal table */
ALTER TABLE Department
ADD Col5 int NOT NULL DEFAULT 0;
/* Expand table schema for history table */
ALTER TABLE DepartmentHistory
ADD Col5 int NOT NULL DEFAULT 0;
/* Re-establish versioning again */
ALTER TABLE Department
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE=dbo.DepartmentHistory,
DATA_CONSISTENCY_CHECK = OFF));
COMMIT
E. Splitting a partition
The following example splits a partition on a table.
The Customer table has the following DDL:
The following command creates a new partition bound by the value 75, between 50 and 100.
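The DDL and the split command themselves are not shown in this extract; the following is a hedged sketch of what they could look like (the table layout, column names, and boundary values are assumed for illustration, not taken from the source):

```sql
-- Hypothetical Customer table, hash-distributed and partitioned on id
-- with existing boundary values 50 and 100.
CREATE TABLE Customer (
    id       int NOT NULL,
    lastName varchar(20)
)
WITH ( DISTRIBUTION = HASH (id),
       PARTITION ( id RANGE LEFT FOR VALUES (50, 100) ) );

-- Split the partition that contains 75, creating a new boundary
-- at 75 between the existing boundaries 50 and 100.
ALTER TABLE Customer SPLIT RANGE (75);
```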
In this example, the Orders table has the following partitions. Each partition contains data.
Although the columns and column names must be the same, the partition boundaries do not need to be the
same. In this example, the OrdersHistory table has the following two partitions and both partitions are
empty:
Partition 1 (empty): OrderDate < '2004-01-01'
Partition 2 (empty): '2004-01-01' <= OrderDate
For the previous two tables, the following command moves all rows with OrderDate < '2004-01-01' from the
Orders table to the OrdersHistory table.
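The command itself is omitted from this extract; a sketch of the partition switch, assuming the two tables have compatible columns and partition boundaries as described:

```sql
-- Move partition 1 of Orders (OrderDate < '2004-01-01') into the
-- empty partition 1 of OrdersHistory. This is a metadata operation;
-- no rows are physically copied.
ALTER TABLE Orders SWITCH PARTITION 1 TO OrdersHistory PARTITION 1;
```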
As a result, the first partition in Orders is empty and the first partition in OrdersHistory contains data. The
tables now appear as follows:
Orders table
Partition 1 (empty): OrderDate < '2004-01-01'
Partition 2 (has data): '2004-01-01' <= OrderDate < '2005-01-01'
Partition 3 (has data): '2005-01-01' <= OrderDate < '2006-01-01'
Partition 4 (has data): '2006-01-01' <= OrderDate < '2007-01-01'
Partition 5 (has data): '2007-01-01' <= OrderDate
OrdersHistory table
Partition 1 (has data): OrderDate < '2004-01-01'
Partition 2 (empty): '2004-01-01' <= OrderDate
To clean up the Orders table, you can remove the empty partition by merging partitions 1 and 2 as follows:
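The merge command referenced above is not shown in this extract; a hedged sketch, assuming the boundary values described:

```sql
-- Remove the '2004-01-01' boundary, merging the empty partition 1
-- into partition 2.
ALTER TABLE Orders MERGE RANGE ('2004-01-01');
```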
After the merge, the Orders table has the following partitions:
Orders table
Partition 1 (has data): OrderDate < '2005-01-01'
Partition 2 (has data): '2005-01-01' <= OrderDate < '2006-01-01'
Partition 3 (has data): '2006-01-01' <= OrderDate < '2007-01-01'
Partition 4 (has data): '2007-01-01' <= OrderDate
Suppose another year passes and you are ready to archive the year 2005. You can allocate an empty partition
for the year 2005 in the OrdersHistory table by splitting the empty partition as follows:
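The split command itself is omitted from this extract; a hedged sketch under the boundary values described:

```sql
-- Create a new boundary at '2005-01-01' in OrdersHistory, producing
-- an empty partition ready to receive the year-2005 archive.
ALTER TABLE OrdersHistory SPLIT RANGE ('2005-01-01');
```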
After the split, the OrdersHistory table has the following partitions:
OrdersHistory table
Partition 1 (has data): OrderDate < '2004-01-01'
Partition 2 (empty): '2004-01-01' <= OrderDate < '2005-01-01'
Partition 3 (empty): '2005-01-01' <= OrderDate
See Also
sys.tables (Transact-SQL)
sp_rename (Transact-SQL)
CREATE TABLE (Transact-SQL)
DROP TABLE (Transact-SQL)
sp_help (Transact-SQL)
ALTER PARTITION SCHEME (Transact-SQL)
ALTER PARTITION FUNCTION (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER TABLE column_constraint (Transact-SQL)
5/3/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data
Warehouse, Parallel Data Warehouse
Specifies the properties of a PRIMARY KEY, FOREIGN KEY, UNIQUE, or CHECK constraint that is part of a new
column definition added to a table by using ALTER TABLE.
Transact-SQL Syntax Conventions
Syntax
[ CONSTRAINT constraint_name ]
{
[ NULL | NOT NULL ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[ WITH FILLFACTOR = fillfactor ]
[ WITH ( index_option [, ...n ] ) ]
[ ON { partition_scheme_name (partition_column_name)
| filegroup | "default" } ]
| [ FOREIGN KEY ]
REFERENCES [ schema_name . ] referenced_table_name
[ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ NOT FOR REPLICATION ]
| CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}
Arguments
CONSTRAINT
Specifies the start of the definition for a PRIMARY KEY, UNIQUE, FOREIGN KEY, or CHECK constraint.
constraint_name
Is the name of the constraint. Constraint names must follow the rules for identifiers, except that the name cannot
start with a number sign (#). If constraint_name is not supplied, a system-generated name is assigned to the
constraint.
NULL | NOT NULL
Specifies whether the column can accept null values. Columns that do not allow null values can be added only if
they have a default specified. If the new column allows null values and no default is specified, the new column
contains NULL for each row in the table. If the new column allows null values and a default definition is added
with the new column, the WITH VALUES option can be used to store the default value in the new column for each
existing row in the table.
If the new column does not allow null values, a DEFAULT definition must be added with the new column. The new
column is automatically loaded with the default value in each existing row.
When the addition of a column requires physical changes to the data rows of a table, such as adding DEFAULT
values to each row, locks are held on the table while ALTER TABLE runs. This affects the ability to change the
content of the table while the lock is in place. In contrast, adding a column that allows null values and does not
specify a default value is a metadata operation only, and involves no locks.
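The nullability rules above can be illustrated with a short sketch (the table and column names are hypothetical, not from the source):

```sql
-- Adding a NOT NULL column requires a DEFAULT definition; every
-- existing row receives the default value, which takes locks while
-- the data rows are updated.
ALTER TABLE dbo.Employee
ADD Status varchar(10) NOT NULL
CONSTRAINT DF_Employee_Status DEFAULT 'Active';

-- Adding a nullable column with no default is a metadata-only
-- operation; existing rows simply contain NULL.
ALTER TABLE dbo.Employee ADD Nickname varchar(30) NULL;
```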
When you use CREATE TABLE or ALTER TABLE, database and session settings influence and possibly override the
nullability of the data type that is used in a column definition. We recommend that you always explicitly define
noncomputed columns as NULL or NOT NULL or, if you use a user-defined data type, that you allow the column
to use the default nullability of the data type. For more information, see CREATE TABLE (Transact-SQL).
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column or columns by using a unique index. Only one
PRIMARY KEY constraint can be created for each table.
UNIQUE
Is a constraint that provides entity integrity for a specified column or columns by using a unique index.
CLUSTERED | NONCLUSTERED
Specifies that a clustered or nonclustered index is created for the PRIMARY KEY or UNIQUE constraint.
PRIMARY KEY constraints default to CLUSTERED. UNIQUE constraints default to NONCLUSTERED.
If a clustered constraint or index already exists on a table, CLUSTERED cannot be specified. If a clustered
constraint or index already exists on a table, PRIMARY KEY constraints default to NONCLUSTERED.
Columns that are of the ntext, text, varchar(max), nvarchar(max), varbinary(max), xml, or image data types
cannot be specified as columns for an index.
WITH FILLFACTOR = fillfactor
Specifies how full the Database Engine should make each index page used to store the index data. User-specified
fill factor values can be from 1 through 100. If a value is not specified, the default is 0.
IMPORTANT
Documenting WITH FILLFACTOR = fillfactor as the only index option that applies to PRIMARY KEY or UNIQUE constraints is
maintained for backward compatibility, but will not be documented in this manner in future releases. Other index options can
be specified in the index_option clause of ALTER TABLE.
Remarks
When FOREIGN KEY or CHECK constraints are added, all existing data is verified for constraint violations unless
the WITH NOCHECK option is specified. If any violations occur, ALTER TABLE fails and an error is returned. When
a new PRIMARY KEY or UNIQUE constraint is added to an existing column, the data in the column or columns
must be unique. If duplicate values are found, ALTER TABLE fails. The WITH NOCHECK option has no effect when
PRIMARY KEY or UNIQUE constraints are added.
Each PRIMARY KEY and UNIQUE constraint generates an index. The number of UNIQUE and PRIMARY KEY
constraints cannot cause the number of indexes on the table to exceed 999 nonclustered indexes and 1 clustered
index. Foreign key constraints do not automatically generate an index. However, foreign key columns are
frequently used in join criteria in queries by matching the column or columns in the foreign key constraint of one
table with the primary or unique key column or columns in the other table. An index on the foreign key columns
enables the Database Engine to quickly find related data in the foreign key table.
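Since foreign keys do not create an index automatically, one is often added manually; a minimal sketch with hypothetical table and column names:

```sql
-- Index the foreign key column so joins back to the referenced table
-- can seek on CustomerID instead of scanning dbo.Orders.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID);
```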
Examples
For examples, see ALTER TABLE (Transact-SQL).
See Also
ALTER TABLE (Transact-SQL)
column_definition (Transact-SQL)
ALTER TABLE column_definition (Transact-SQL)
5/3/2018 • 10 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data
Warehouse, Parallel Data Warehouse
Specifies the properties of a column that are added to a table by using ALTER TABLE.
Transact-SQL Syntax Conventions
Syntax
column_name <data_type>
[ FILESTREAM ]
[ COLLATE collation_name ]
[ NULL | NOT NULL ]
[
[ CONSTRAINT constraint_name ] DEFAULT constant_expression [ WITH VALUES ]
| IDENTITY [ ( seed , increment ) ] [ NOT FOR REPLICATION ]
]
[ ROWGUIDCOL ]
[ SPARSE ]
[ ENCRYPTED WITH
( COLUMN_ENCRYPTION_KEY = key_name ,
ENCRYPTION_TYPE = { DETERMINISTIC | RANDOMIZED } ,
ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
) ]
[ MASKED WITH ( FUNCTION = ' mask_function ') ]
[ <column_constraint> [ ...n ] ]
<column_constraint> ::=
[ CONSTRAINT constraint_name ]
{ { PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH FILLFACTOR = fillfactor
| WITH ( < index_option > [ , ...n ] )
]
[ ON { partition_scheme_name ( partition_column_name )
| filegroup | "default" } ]
| [ FOREIGN KEY ]
REFERENCES [ schema_name . ] referenced_table_name [ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ NOT FOR REPLICATION ]
| CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}
Arguments
column_name
Is the name of the column to be altered, added, or dropped. column_name can consist of 1 through 128
characters. For new columns created with a timestamp data type, column_name can be omitted. If no
column_name is specified for a timestamp data type column, the name timestamp is used.
[ type_schema_name. ] type_name
Is the data type for the column that is added and the schema to which it belongs.
type_name can be:
A Microsoft SQL Server system data type.
An alias data type based on a SQL Server system data type. Alias data types must be created by using
CREATE TYPE before they can be used in a table definition.
A Microsoft .NET Framework user-defined type and the schema to which it belongs. A .NET Framework
user-defined type must be created by using CREATE TYPE before it can be used in a table definition.
If type_schema_name is not specified, the Database Engine references type_name in the following
order:
The SQL Server system data type.
The default schema of the current user in the current database.
The dbo schema in the current database.
precision
Is the precision for the specified data type. For more information about valid precision values, see Precision, Scale,
and Length (Transact-SQL).
scale
Is the scale for the specified data type. For more information about valid scale values, see Precision, Scale, and
Length (Transact-SQL).
max
Applies only to the varchar, nvarchar, and varbinary data types. These are used for storing 2^31-1 bytes of
character and binary data, and 2^30-1 bytes of Unicode data.
CONTENT
Specifies that each instance of the xml data type in column_name can comprise multiple top-level elements.
CONTENT applies only to the xml data type and can be specified only if xml_schema_collection is also specified. If
this is not specified, CONTENT is the default behavior.
DOCUMENT
Specifies that each instance of the xml data type in column_name can comprise only one top-level element.
DOCUMENT applies only to the xml data type and can be specified only if xml_schema_collection is also specified.
xml_schema_collection
Applies to: SQL Server 2008 through SQL Server 2017.
Applies only to the xml data type for associating an XML schema collection with the type. Before typing an xml
column to a schema, the schema must first be created in the database by using CREATE XML SCHEMA
COLLECTION.
FILESTREAM
Optionally specifies the FILESTREAM storage attribute for a column that has a type_name of varbinary(max).
When FILESTREAM is specified for a column, the table must also have a column of the uniqueidentifier data
type that has the ROWGUIDCOL attribute. This column must not allow null values and must have either a
UNIQUE or PRIMARY KEY single-column constraint. The GUID value for the column must be supplied either by
an application when data is being inserted, or by a DEFAULT constraint that uses the NEWID() function.
The ROWGUIDCOL column cannot be dropped and the related constraints cannot be changed while there is a
FILESTREAM column defined for the table. The ROWGUIDCOL column can be dropped only after the last
FILESTREAM column is dropped.
When the FILESTREAM storage attribute is specified for a column, all values for that column are stored in a
FILESTREAM data container on the file system.
For an example that shows how to use this column definition, see FILESTREAM (SQL Server).
COLLATE collation_name
Specifies the collation of the column. If not specified, the column is assigned the default collation of the database.
Collation name can be either a Windows collation name or an SQL collation name. For a list and more
information, see Windows Collation Name (Transact-SQL) and SQL Server Collation Name (Transact-SQL).
The COLLATE clause can be used to specify the collations only of columns of the char, varchar, nchar, and
nvarchar data types.
For more information about the COLLATE clause, see COLLATE (Transact-SQL).
NULL | NOT NULL
Determines whether null values are allowed in the column. NULL is not strictly a constraint but can be specified
just like NOT NULL.
[ CONSTRAINT constraint_name ]
Specifies the start of a DEFAULT value definition. To maintain compatibility with earlier versions of SQL Server, a
constraint name can be assigned to a DEFAULT. constraint_name must follow the rules for identifiers, except that
the name cannot start with a number sign (#). If constraint_name is not specified, a system-generated name is
assigned to the DEFAULT definition.
DEFAULT
Is a keyword that specifies the default value for the column. DEFAULT definitions can be used to provide values for
a new column in the existing rows of data. DEFAULT definitions cannot be applied to timestamp columns, or
columns with an IDENTITY property. If a default value is specified for a user-defined type column, the type must
support an implicit conversion from constant_expression to the user-defined type.
constant_expression
Is a literal value, a NULL, or a system function used as the default column value. If used in conjunction with a
column defined to be of a .NET Framework user-defined type, the implementation of the type must support an
implicit conversion from the constant_expression to the user-defined type.
WITH VALUES
Specifies that the value given in DEFAULT constant_expression is stored in a new column added to existing rows. If
the added column allows null values and WITH VALUES is specified, the default value is stored in the new column,
added to existing rows. If WITH VALUES is not specified for columns that allow nulls, the value NULL is stored in
the new column in existing rows. If the new column does not allow nulls, the default value is stored in new rows
regardless of whether WITH VALUES is specified.
IDENTITY
Specifies that the new column is an identity column. The SQL Server Database Engine provides a unique,
incremental value for the column. When you add identifier columns to existing tables, the identity numbers are
added to the existing rows of the table with the seed and increment values. The order in which the rows are
updated is not guaranteed. Identity numbers are also generated for any new rows that are added.
Identity columns are commonly used in conjunction with PRIMARY KEY constraints to serve as the unique row
identifier for the table. The IDENTITY property can be assigned to a tinyint, smallint, int, bigint, decimal(p,0),
or numeric(p,0) column. Only one identity column can be created per table. The DEFAULT keyword and bound
defaults cannot be used with an identity column. Either both the seed and increment must be specified, or neither.
If neither is specified, the default is (1,1).
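The seed and increment behavior can be sketched as follows (hypothetical table and column names):

```sql
-- Add an identity column with seed 100 and increment 5; existing rows
-- are numbered 100, 105, 110, ... in an unguaranteed order, and new
-- rows continue the sequence.
ALTER TABLE dbo.Sale ADD SaleID int IDENTITY (100, 5) NOT NULL;
```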
NOTE
You cannot modify an existing table column to add the IDENTITY property.
Adding an identity column to a published table is not supported because it can result in nonconvergence when the
column is replicated to the Subscriber. The values in the identity column at the Publisher depend on the order in
which the rows for the affected table are physically stored. The rows might be stored differently at the Subscriber;
therefore, the value for the identity column can be different for the same rows.
To disable the IDENTITY property of a column by allowing values to be explicitly inserted, use SET
IDENTITY_INSERT.
seed
Is the value used for the first row loaded into the table.
increment
Is the incremental value added to the identity value of the previous row that is loaded.
NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017.
Can be specified for the IDENTITY property. If this clause is specified for the IDENTITY property, values are not
incremented in identity columns when replication agents perform insert operations.
ROWGUIDCOL
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies that the column is a row globally unique identifier column. ROWGUIDCOL can only be assigned to a
uniqueidentifier column, and only one uniqueidentifier column per table can be designated as the
ROWGUIDCOL column. ROWGUIDCOL cannot be assigned to columns of user-defined data types.
ROWGUIDCOL does not enforce uniqueness of the values stored in the column. Also, ROWGUIDCOL does not
automatically generate values for new rows that are inserted into the table. To generate unique values for each
column, either use the NEWID function on INSERT statements or specify the NEWID function as the default for
the column. For more information, see NEWID (Transact-SQL) and INSERT (Transact-SQL).
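A minimal sketch of the pattern described above, using hypothetical table and column names:

```sql
-- Add a ROWGUIDCOL column and use NEWID() as the default so new rows
-- receive a GUID automatically. Note that ROWGUIDCOL itself enforces
-- neither uniqueness nor value generation.
ALTER TABLE dbo.Document
ADD DocGuid uniqueidentifier ROWGUIDCOL NOT NULL
CONSTRAINT DF_Document_DocGuid DEFAULT NEWID();
```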
SPARSE
Indicates that the column is a sparse column. The storage of sparse columns is optimized for null values. Sparse
columns cannot be designated as NOT NULL. For additional restrictions and more information about sparse
columns, see Use Sparse Columns.
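A one-line sketch of a sparse column (hypothetical table and column names):

```sql
-- A sparse column optimized for mostly-NULL data; SPARSE columns
-- must allow NULL.
ALTER TABLE dbo.Product ADD LegacyCode nvarchar(50) SPARSE NULL;
```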
<column_constraint>
For the definitions of the column constraint arguments, see column_constraint (Transact-SQL ).
ENCRYPTED WITH
Specifies encrypting columns by using the Always Encrypted feature.
COLUMN_ENCRYPTION_KEY = key_name
Specifies the column encryption key. For more information, see CREATE COLUMN ENCRYPTION KEY (Transact-SQL).
ENCRYPTION_TYPE = { DETERMINISTIC | RANDOMIZED }
Deterministic encryption uses a method which always generates the same encrypted value for any given plain
text value. Using deterministic encryption allows searching using equality comparison, grouping, and joining tables
using equality joins based on encrypted values, but can also allow unauthorized users to guess information about
encrypted values by examining patterns in the encrypted column. Joining two tables on columns encrypted
deterministically is only possible if both columns are encrypted using the same column encryption key.
Deterministic encryption must use a column collation with a binary2 sort order for character columns.
Randomized encryption uses a method that encrypts data in a less predictable manner. Randomized encryption
is more secure, but prevents equality searches, grouping, and joining on encrypted columns. Columns using
randomized encryption cannot be indexed.
Use deterministic encryption for columns that will be search parameters or grouping parameters, for example a
government ID number. Use randomized encryption, for data such as a credit card number, which is not grouped
with other records, or used to join tables, and which is not searched for because you use other columns (such as a
transaction number) to find the row which contains the encrypted column of interest.
Columns must be of a qualifying data type.
ALGORITHM
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.
Must be 'AEAD_AES_256_CBC_HMAC_SHA_256'.
For more information including feature constraints, see Always Encrypted (Database Engine).
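A hedged sketch of adding a deterministically encrypted column (the table name and the column encryption key name CEK_Auto1 are assumed, not from the source):

```sql
-- Add a deterministically encrypted column. Character columns
-- encrypted deterministically require a BIN2 collation.
ALTER TABLE dbo.Patient
ADD SSN char(11) COLLATE Latin1_General_BIN2
    ENCRYPTED WITH (
        COLUMN_ENCRYPTION_KEY = CEK_Auto1,
        ENCRYPTION_TYPE = DETERMINISTIC,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
    ) NULL;
```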
MASKED WITH ( FUNCTION = 'mask_function' )
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.
Specifies a dynamic data mask. mask_function is the name of the masking function with the appropriate
parameters. The following functions are available:
default()
email()
partial()
random()
For function parameters, see Dynamic Data Masking.
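A short sketch of adding a masked column (hypothetical table and column names):

```sql
-- Add a masked column; users without the UNMASK permission see
-- masked output such as aXXX@XXXX.com instead of the real address.
ALTER TABLE dbo.Customer
ADD Email varchar(100) MASKED WITH (FUNCTION = 'email()') NULL;
```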
Remarks
If a column is added with a uniqueidentifier data type, it can be defined with a default that uses the NEWID()
function to supply the unique identifier values in the new column for each existing row in the table.
The Database Engine does not enforce an order for specifying DEFAULT, IDENTITY, ROWGUIDCOL, or column
constraints in a column definition.
The ALTER TABLE statement fails if adding the column causes the data row size to exceed 8,060 bytes.
Examples
For examples, see ALTER TABLE (Transact-SQL).
See Also
ALTER TABLE (Transact-SQL)
ALTER TABLE computed_column_definition (Transact-
SQL)
5/3/2018 • 7 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data
Warehouse, Parallel Data Warehouse
Specifies the properties of a computed column that is added to a table by using ALTER TABLE.
Transact-SQL Syntax Conventions
Syntax
column_name AS computed_column_expression
[ PERSISTED [ NOT NULL ] ]
[
[ CONSTRAINT constraint_name ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[ WITH FILLFACTOR = fillfactor ]
[ WITH ( <index_option> [, ...n ] ) ]
[ ON { partition_scheme_name ( partition_column_name ) | filegroup
| "default" } ]
| [ FOREIGN KEY ]
REFERENCES ref_table [ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE } ]
[ ON UPDATE { NO ACTION } ]
[ NOT FOR REPLICATION ]
| CHECK [ NOT FOR REPLICATION ] ( logical_expression )
]
Arguments
column_name
Is the name of the column to be altered, added, or dropped. column_name can be 1 through 128 characters. For
new columns, column_name can be omitted for columns created with a timestamp data type. If no column_name
is specified for a timestamp data type column, the name timestamp is used.
computed_column_expression
Is an expression that defines the value of a computed column. A computed column is a virtual column that is not
physically stored in the table but is computed from an expression that uses other columns in the same table. For
example, a computed column could have the definition: cost AS price * qty. The expression can be a noncomputed
column name, constant, function, variable, and any combination of these connected by one or more operators. The
expression cannot be a subquery or include an alias data type.
Computed columns can be used in select lists, WHERE clauses, ORDER BY clauses, or any other locations where
regular expressions can be used, but with the following exceptions:
A computed column cannot be used as a DEFAULT or FOREIGN KEY constraint definition or with a NOT
NULL constraint definition. However, if the computed column value is defined by a deterministic expression
and the data type of the result is allowed in index columns, a computed column can be used as a key column
in an index or as part of any PRIMARY KEY or UNIQUE constraint.
For example, if the table has integer columns a and b, the computed column a + b may be indexed, but
computed column a + DATEPART(dd, GETDATE()) cannot be indexed, because the value might change in
subsequent invocations.
A computed column cannot be the target of an INSERT or UPDATE statement.
NOTE
Because each row in a table can have different values for columns involved in a computed column, the computed
column may not have the same result for each row.
PERSISTED
Specifies that the Database Engine will physically store the computed values in the table, and update the values
when any other columns on which the computed column depends are updated. Marking a computed column as
PERSISTED allows an index to be created on a computed column that is deterministic, but not precise. For more
information, see Indexes on Computed Columns. Any computed columns used as partitioning columns of a
partitioned table must be explicitly marked PERSISTED. computed_column_expression must be deterministic when
PERSISTED is specified.
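A minimal sketch of a persisted computed column (hypothetical table and column names):

```sql
-- A deterministic computed column, marked PERSISTED so it is
-- physically stored, can be indexed, and can serve as a
-- partitioning column.
ALTER TABLE dbo.OrderDetail
ADD TotalDue AS (UnitPrice * Quantity) PERSISTED;
```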
NULL | NOT NULL
Specifies whether null values are allowed in the column. NULL is not strictly a constraint but can be specified like
NOT NULL. NOT NULL can be specified for computed columns only if PERSISTED is also specified.
CONSTRAINT
Specifies the start of the definition for a PRIMARY KEY or UNIQUE constraint.
constraint_name
Is the name of the new constraint. Constraint names must follow the rules for identifiers, except that the name cannot start with
a number sign (#). If constraint_name is not supplied, a system-generated name is assigned to the constraint.
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column or columns by using a unique index. Only one
PRIMARY KEY constraint can be created for each table.
UNIQUE
Is a constraint that provides entity integrity for a specific column or columns by using a unique index.
CLUSTERED | NONCLUSTERED
Specifies that a clustered or nonclustered index is created for the PRIMARY KEY or UNIQUE constraint. PRIMARY
KEY constraints default to CLUSTERED. UNIQUE constraints default to NONCLUSTERED.
If a clustered constraint or index already exists on a table, CLUSTERED cannot be specified. If a clustered constraint
or index already exists on a table, PRIMARY KEY constraints default to NONCLUSTERED.
WITH FILLFACTOR = fillfactor
Specifies how full the SQL Server Database Engine should make each index page used to store the index data.
User-specified fillfactor values can be from 1 through 100. If a value is not specified, the default is 0.
IMPORTANT
Documenting WITH FILLFACTOR = fillfactor as the only index option that applies to PRIMARY KEY or UNIQUE constraints is
maintained for backward compatibility, but will not be documented in this manner in future releases. Other index options can
be specified in the index_option (Transact-SQL) clause of ALTER TABLE.
NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in ON "default"
or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current session. This is the
default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).
Remarks
Each PRIMARY KEY and UNIQUE constraint generates an index. The number of UNIQUE and PRIMARY KEY
constraints cannot cause the number of indexes on the table to exceed 999 nonclustered indexes and 1 clustered
index.
See Also
ALTER TABLE (Transact-SQL)
ALTER TABLE index_option (Transact-SQL)
5/3/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data
Warehouse, Parallel Data Warehouse
Specifies a set of options that can be applied to an index that is part of a constraint definition that is created by
using ALTER TABLE.
Transact-SQL Syntax Conventions
Syntax
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| SORT_IN_TEMPDB = { ON | OFF }
| ONLINE = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ]
| ONLINE = { ON [ ( <low_priority_lock_wait> ) ] | OFF }
}
<range> ::=
<partition_number_expression> TO <partition_number_expression>
<single_partition_rebuild_option> ::=
{
SORT_IN_TEMPDB = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
| ONLINE = { ON [ ( <low_priority_lock_wait> ) ] | OFF }
}
<low_priority_lock_wait>::=
{
WAIT_AT_LOW_PRIORITY ( MAX_DURATION = <time> [ MINUTES ] ,
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS } )
}
Arguments
PAD_INDEX = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies index padding. The default is OFF.
ON
The percentage of free space that is specified by FILLFACTOR is applied to the intermediate-level pages of the
index.
OFF or fillfactor is not specified
The intermediate-level pages are filled to near capacity, leaving enough space for at least one row of the
maximum size the index can have, given the set of keys on the intermediate pages.
FILLFACTOR = fillfactor
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or alteration. The value specified must be an integer value from 1 to 100. The default is 0.
NOTE
Fill factor values 0 and 100 are identical in all respects.
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique index.
The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The option
has no effect when executing CREATE INDEX, ALTER INDEX, or UPDATE. The default is OFF.
ON
A warning message occurs when duplicate key values are inserted into a unique index. Only the rows violating the
uniqueness constraint fail.
OFF
An error message occurs when duplicate key values are inserted into a unique index. The entire INSERT operation
is rolled back.
IGNORE_DUP_KEY cannot be set to ON for indexes created on a view, non-unique indexes, XML indexes, spatial
indexes, and filtered indexes.
To view IGNORE_DUP_KEY, use sys.indexes.
In backward compatible syntax, WITH IGNORE_DUP_KEY is equivalent to WITH IGNORE_DUP_KEY = ON.
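The option can be sketched in an ALTER TABLE constraint definition as follows (hypothetical table and column names):

```sql
-- A UNIQUE constraint whose index raises a warning on duplicate
-- inserts and discards only the offending rows, instead of rolling
-- the entire INSERT back.
ALTER TABLE dbo.Widget
ADD CONSTRAINT UQ_Widget_Code UNIQUE (Code)
WITH (IGNORE_DUP_KEY = ON);
```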
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether statistics are recomputed. The default is OFF.
ON
Out-of-date statistics are not automatically recomputed.
OFF
Automatic statistics updating is enabled.
ALLOW_ROW_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when accessing the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
SORT_IN_TEMPDB = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies whether to store sort results in tempdb. The default is OFF.
ON
The intermediate sort results that are used to build the index are stored in tempdb. This may reduce the time
required to create an index if tempdb is on a different set of disks than the user database. However, this increases
the amount of disk space that is used during the index build.
OFF
The intermediate sort results are stored in the same database as the index.
ONLINE = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies whether underlying tables and associated indexes are available for queries and data modification during
the index operation. The default is OFF. REBUILD can be performed as an ONLINE operation.
NOTE
Unique nonclustered indexes cannot be created online. This includes indexes that are created due to a UNIQUE or PRIMARY
KEY constraint.
ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS) lock is held on the source table. This enables queries or updates to the
underlying table and indexes to proceed. At the start of the operation, a Shared (S) lock is held on the source
object for a very short period of time. At the end of the operation, for a short period of time, an S (Shared) lock is
acquired on the source if a nonclustered index is being created; or a Sch-M (Schema Modification) lock is
acquired when a clustered index is created or dropped online and when a clustered or nonclustered index is being
rebuilt. Although the online index locks are short metadata locks, the Sch-M lock in particular must wait for all
blocking transactions on the table to complete. While it waits, the Sch-M lock blocks all other transactions
queued behind it that try to access the same table. ONLINE cannot be set to ON when an
index is being created on a local temporary table.
NOTE
Online index rebuild can set the low_priority_lock_wait options described later in this section. low_priority_lock_wait
manages S and Sch-M lock priority during online index rebuild.
OFF
Table locks are applied for the duration of the index operation. This prevents all user access to the underlying table
for the duration of the operation. An offline index operation that creates, rebuilds, or drops a clustered index, or
rebuilds or drops a nonclustered index, acquires a Schema modification (Sch-M ) lock on the table. This prevents
all user access to the underlying table for the duration of the operation. An offline index operation that creates a
nonclustered index acquires a Shared (S ) lock on the table. This prevents updates to the underlying table but
allows read operations, such as SELECT statements.
For more information, see How Online Index Operations Work.
NOTE
Online index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.
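A minimal sketch of an online rebuild, assuming a hypothetical index IX_Orders_CustomerID on a table dbo.Orders:

```sql
-- Rebuild online: queries and DML against dbo.Orders can continue,
-- subject to the short S and Sch-M locks described above.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON);
```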
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server 2008 through SQL Server 2017.
Overrides the max degree of parallelism configuration option for the duration of the index operation. For more
information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to limit the
number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism can be:
1 - Suppresses parallel plan generation.
>1 - Restricts the maximum number of processors used in a parallel index operation to the specified number.
0 (default) - Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.
NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.
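For example, MAXDOP can be combined with other index options in a single REBUILD (table and index names are placeholders):

```sql
-- Limit the parallel rebuild to 4 processors and sort in tempdb.
ALTER INDEX ALL ON dbo.FactSales
REBUILD WITH (MAXDOP = 4, SORT_IN_TEMPDB = ON);
```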
DATA_COMPRESSION
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the data compression option for the specified table, partition number, or range of partitions. The options
are as follows:
NONE
Table or specified partitions are not compressed. Applies only to rowstore tables; does not apply to columnstore
tables.
ROW
Table or specified partitions are compressed by using row compression. Applies only to rowstore tables; does not
apply to columnstore tables.
PAGE
Table or specified partitions are compressed by using page compression. Applies only to rowstore tables; does not
apply to columnstore tables.
COLUMNSTORE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Applies only to columnstore tables. COLUMNSTORE specifies to decompress a partition that was compressed
with the COLUMNSTORE_ARCHIVE option. When the data is restored, the COLUMNSTORE index continues to
be compressed with the columnstore compression that is used for all columnstore tables.
COLUMNSTORE_ARCHIVE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Applies only to columnstore tables, which are tables stored with a clustered columnstore index.
COLUMNSTORE_ARCHIVE further compresses the specified partition to a smaller size. This can be used for
archival, or for other situations that require less storage and can afford more time for storage and retrieval.
For more information about compression, see Data Compression.
ON PARTITIONS ( { <partition_number_expression> | <range> } [ ,...n ] ) Applies to: SQL Server 2008 through
SQL Server 2017.
Specifies the partitions to which the DATA_COMPRESSION setting applies. If the table is not partitioned, the ON
PARTITIONS argument generates an error. If the ON PARTITIONS clause is not provided, the
DATA_COMPRESSION option applies to all partitions of a partitioned table.
<partition_number_expression> can be specified in the following ways:
Provide the number of a partition, for example: ON PARTITIONS (2).
Provide the partition numbers for several individual partitions separated by commas, for example: ON
PARTITIONS (1, 5).
Provide both ranges and individual partitions, for example: ON PARTITIONS (2, 4, 6 TO 8).
<range> can be specified as partition numbers separated by the word TO, for example: ON PARTITIONS (6 TO 8).
To set different types of data compression for different partitions, specify the DATA_COMPRESSION option more
than once.
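As a sketch, assuming a hypothetical partitioned table dbo.PartitionTable, different compression types can be applied to different partitions in one statement:

```sql
-- Page-compress partition 1, row-compress partitions 2 through 4.
ALTER TABLE dbo.PartitionTable
REBUILD PARTITION = ALL
WITH (
    DATA_COMPRESSION = PAGE ON PARTITIONS (1),
    DATA_COMPRESSION = ROW ON PARTITIONS (2 TO 4)
);
```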
<single_partition_rebuild_option>
In most cases, rebuilding an index rebuilds all partitions of a partitioned index. The following options, when
applied to a single partition, do not rebuild all partitions.
SORT_IN_TEMPDB
MAXDOP
DATA_COMPRESSION
low_priority_lock_wait
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
A SWITCH or online index rebuild completes as soon as there are no blocking operations for this table.
WAIT_AT_LOW_PRIORITY indicates that if the SWITCH or online index rebuild operation cannot be completed
immediately, it waits. The operation holds low priority locks, allowing other operations that hold locks conflicting
with the DDL statement to proceed. Omitting the WAIT AT LOW PRIORITY option is equivalent to
WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 minutes, ABORT_AFTER_WAIT = NONE).
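A sketch of the clause in use, with hypothetical object names:

```sql
-- Rebuild online; wait up to 5 minutes at low priority for the needed
-- locks, then kill the blocking transactions if the wait expires.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON (
    WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)
));
```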
Remarks
For a complete description of index options, see CREATE INDEX (Transact-SQL ).
See Also
ALTER TABLE (Transact-SQL )
column_constraint (Transact-SQL )
computed_column_definition (Transact-SQL )
table_constraint (Transact-SQL )
ALTER TABLE table_constraint (Transact-SQL)
5/3/2018 • 10 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the properties of a PRIMARY KEY, UNIQUE, FOREIGN KEY, a CHECK constraint, or a DEFAULT
definition added to a table by using ALTER TABLE.
Transact-SQL Syntax Conventions
Syntax
[ CONSTRAINT constraint_name ]
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
(column [ ASC | DESC ] [ ,...n ] )
[ WITH FILLFACTOR = fillfactor
| WITH ( <index_option> [ ,...n ] ) ]
[ ON { partition_scheme_name ( partition_column_name ... )
| filegroup | "default" } ]
| FOREIGN KEY
( column [ ,...n ] )
REFERENCES referenced_table_name [ ( ref_column [ ,...n ] ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ NOT FOR REPLICATION ]
| DEFAULT constant_expression FOR column [ WITH VALUES ]
| CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}
Arguments
CONSTRAINT
Specifies the start of a definition for a PRIMARY KEY, UNIQUE, FOREIGN KEY, or CHECK constraint, or a
DEFAULT.
constraint_name
Is the name of the constraint. Constraint names must follow the rules for identifiers, except that the name cannot
start with a number sign (#). If constraint_name is not supplied, a system-generated name is assigned to the
constraint.
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column or columns by using a unique index. Only one
PRIMARY KEY constraint can be created for each table.
UNIQUE
Is a constraint that provides entity integrity for a specified column or columns by using a unique index.
CLUSTERED | NONCLUSTERED
Specifies that a clustered or nonclustered index is created for the PRIMARY KEY or UNIQUE constraint.
PRIMARY KEY constraints default to CLUSTERED. UNIQUE constraints default to NONCLUSTERED.
If a clustered constraint or index already exists on a table, CLUSTERED cannot be specified. If a clustered
constraint or index already exists on a table, PRIMARY KEY constraints default to NONCLUSTERED.
Columns that are of the ntext, text, varchar(max), nvarchar(max), varbinary(max), xml, or image data types
cannot be specified as columns for an index.
column
Is a column or list of columns specified in parentheses that are used in a new constraint.
[ ASC | DESC ]
Specifies the order in which the column or columns participating in table constraints are sorted. The default is
ASC.
WITH FILLFACTOR = fillfactor
Specifies how full the Database Engine should make each index page used to store the index data. User-specified
fillfactor values can be from 1 through 100. If a value is not specified, the default is 0.
IMPORTANT
Specifying WITH FILLFACTOR = fillfactor as the only index option that applies to PRIMARY KEY or UNIQUE constraints is
maintained for backward compatibility, but will not be documented in this manner in future releases. Other index options can
be specified in the index_option clause of ALTER TABLE.
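For instance, a fill factor can be set through the index_option clause when adding a primary key (names are hypothetical):

```sql
-- Add a clustered primary key, leaving 20% free space on each index page.
ALTER TABLE dbo.OrderArchive
ADD CONSTRAINT PK_OrderArchive_OrderID PRIMARY KEY CLUSTERED (OrderID)
WITH (FILLFACTOR = 80);
```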
Remarks
When FOREIGN KEY or CHECK constraints are added, all existing data is verified for constraint violations unless
the WITH NOCHECK option is specified. If any violations occur, ALTER TABLE fails and an error is returned. When
a new PRIMARY KEY or UNIQUE constraint is added to an existing column, the data in the column or columns
must be unique. If duplicate values are found, ALTER TABLE fails. The WITH NOCHECK option has no effect when
PRIMARY KEY or UNIQUE constraints are added.
Each PRIMARY KEY and UNIQUE constraint generates an index. The number of UNIQUE and PRIMARY KEY
constraints cannot cause the number of indexes on the table to exceed 999 nonclustered indexes and 1 clustered
index. Foreign key constraints do not automatically generate an index. However, foreign key columns are
frequently used in join criteria in queries by matching the column or columns in the foreign key constraint of one
table with the primary or unique key column or columns in the other table. An index on the foreign key columns
enables the Database Engine to quickly find related data in the foreign key table.
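A sketch of that pattern, using hypothetical table names: the constraint enforces the relationship, and a separate index supports the joins.

```sql
-- Add the foreign key; this does not create an index automatically.
ALTER TABLE dbo.Orders
ADD CONSTRAINT FK_Orders_Customers FOREIGN KEY (CustomerID)
    REFERENCES dbo.Customers (CustomerID);

-- Index the foreign key column so joins to dbo.Customers are fast.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);
```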
Examples
For examples, see ALTER TABLE (Transact-SQL ).
See Also
ALTER TABLE (Transact-SQL )
ALTER TRIGGER (Transact-SQL)
5/3/2018 • 8 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the definition of a DML, DDL, or logon trigger that was previously created by the CREATE TRIGGER
statement. Triggers are created by using CREATE TRIGGER. They can be created directly from Transact-SQL
statements or from methods of assemblies that are created in the Microsoft .NET Framework common language
runtime (CLR ) and uploaded to an instance of SQL Server. For more information about the parameters that are
used in the ALTER TRIGGER statement, see CREATE TRIGGER (Transact-SQL ).
Transact-SQL Syntax Conventions
Syntax
-- SQL Server Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger)

ALTER TRIGGER schema_name.trigger_name
ON ( object_name )
[ WITH <dml_trigger_option> [ ,...n ] ]
( FOR | AFTER | INSTEAD OF )
{ [ DELETE ] [ , ] [ INSERT ] [ , ] [ UPDATE ] }
[ NOT FOR REPLICATION ]
AS { sql_statement [ ; ] [ ...n ] | EXTERNAL NAME <method_specifier> [ ; ] }

<dml_trigger_option> ::=
[ ENCRYPTION ]
[ <EXECUTE AS Clause> ]

<method_specifier> ::=
assembly_name.class_name.method_name

-- Trigger on an INSERT, UPDATE, or DELETE statement to a memory-optimized table
-- (Natively Compiled DML Trigger)

ALTER TRIGGER schema_name.trigger_name
ON ( object_name )
[ WITH <dml_trigger_option> [ ,...n ] ]
( FOR | AFTER )
{ [ DELETE ] [ , ] [ INSERT ] [ , ] [ UPDATE ] }
AS sql_statement

<dml_trigger_option> ::=
[ NATIVE_COMPILATION ]
[ SCHEMABINDING ]
[ <EXECUTE AS Clause> ]

-- Trigger on a LOGON event (Logon Trigger)

ALTER TRIGGER trigger_name
ON ALL SERVER
[ WITH <logon_trigger_option> [ ,...n ] ]
FOR LOGON
AS { sql_statement [ ; ] [ ...n ] | EXTERNAL NAME <method_specifier> [ ; ] }

<logon_trigger_option> ::=
[ ENCRYPTION ]
[ EXECUTE AS Clause ]

-- Trigger on a CREATE, ALTER, DROP, GRANT, DENY, REVOKE, or UPDATE STATISTICS
-- statement (DDL Trigger)

ALTER TRIGGER trigger_name
ON { DATABASE | ALL SERVER }
[ WITH <ddl_trigger_option> [ ,...n ] ]
{ FOR | AFTER } { event_type [ ,...n ] | event_group }
AS { sql_statement [ ; ] [ ...n ] | EXTERNAL NAME <method_specifier> [ ; ] }

<ddl_trigger_option> ::=
[ <EXECUTE AS Clause> ]
Arguments
schema_name
Is the name of the schema to which a DML trigger belongs. DML triggers are scoped to the schema of the table or
view on which they are created. schema_name is optional only if the DML trigger and its corresponding table or
view belong to the default schema. schema_name cannot be specified for DDL or logon triggers.
trigger_name
Is the existing trigger to modify.
table | view
Is the table or view on which the DML trigger is executed. Specifying the fully-qualified name of the table or view
is optional.
DATABASE
Applies the scope of a DDL trigger to the current database. If specified, the trigger fires whenever event_type or
event_group occurs in the current database.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
Applies the scope of a DDL or logon trigger to the current server. If specified, the trigger fires whenever
event_type or event_group occurs anywhere in the current server.
WITH ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017.
Encrypts the sys.syscomments and sys.sql_modules entries that contain the text of the ALTER TRIGGER statement.
Using WITH ENCRYPTION prevents the trigger from being published as part of SQL Server replication. WITH
ENCRYPTION cannot be specified for CLR triggers.
NOTE
If a trigger is created by using WITH ENCRYPTION, it must be specified again in the ALTER TRIGGER statement for this option
to remain enabled.
EXECUTE AS
Specifies the security context under which the trigger is executed. Enables you to control the user account the
instance of SQL Server uses to validate permissions on any database objects that are referenced by the trigger.
For more information, see EXECUTE AS Clause (Transact-SQL ).
NATIVE_COMPILATION
Indicates that the trigger is natively compiled.
This option is required for triggers on memory-optimized tables.
SCHEMABINDING
Ensures that tables that are referenced by a trigger cannot be dropped or altered.
This option is required for triggers on memory-optimized tables and is not supported for triggers on traditional
tables.
AFTER
Specifies that the trigger is fired only after the triggering SQL statement is executed successfully. All referential
cascade actions and constraint checks also must have been successful before this trigger fires.
AFTER is the default, if only the FOR keyword is specified.
DML AFTER triggers may be defined only on tables.
INSTEAD OF
Specifies that the DML trigger is executed instead of the triggering SQL statement, therefore, overriding the
actions of the triggering statements. INSTEAD OF cannot be specified for DDL or logon triggers.
At most, one INSTEAD OF trigger per INSERT, UPDATE, or DELETE statement can be defined on a table or view.
However, you can define views on views where each view has its own INSTEAD OF trigger.
INSTEAD OF triggers are not allowed on views created by using WITH CHECK OPTION. SQL Server raises an
error when an INSTEAD OF trigger is added to a view for which WITH CHECK OPTION was specified. The user
must remove that option using ALTER VIEW before defining the INSTEAD OF trigger.
{ [ DELETE ] [ , ] [ INSERT ] [ , ] [ UPDATE ] } | { [INSERT ] [ , ] [ UPDATE ] }
Specifies the data modification statements that, when tried against this table or view, activate the DML trigger. At least
one option must be specified. Any combination of these in any order is allowed in the trigger definition. If more
than one option is specified, separate the options with commas.
For INSTEAD OF triggers, the DELETE option is not allowed on tables that have a referential relationship
specifying a cascade action ON DELETE. Similarly, the UPDATE option is not allowed on tables that have a
referential relationship specifying a cascade action ON UPDATE. For more information, see ALTER TABLE
(Transact-SQL ).
event_type
Is the name of a Transact-SQL language event that, after execution, causes a DDL trigger to fire. Valid events for
DDL triggers are listed in DDL Events.
event_group
Is the name of a predefined grouping of Transact-SQL language events. The DDL trigger fires after execution of
any Transact-SQL language event that belongs to event_group. Valid event groups for DDL triggers are listed in
DDL Event Groups. After ALTER TRIGGER has finished running, event_group also acts as a macro by adding the
event types it covers to the sys.trigger_events catalog view.
NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates that the trigger should not be executed when a replication agent modifies the table involved in the
trigger.
sql_statement
Is the trigger conditions and actions.
For triggers on memory-optimized tables, the only sql_statement allowed at the top level is an ATOMIC block. The
T-SQL allowed inside the ATOMIC block is limited by the T-SQL allowed inside native procs.
EXTERNAL NAME <method_specifier>
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the method of an assembly to bind with the trigger. The method must take no arguments and return
void. class_name must be a valid SQL Server identifier and must exist as a class in the assembly with assembly
visibility. The class cannot be a nested class.
Remarks
For more information about ALTER TRIGGER, see Remarks in CREATE TRIGGER (Transact-SQL ).
NOTE
The EXTERNAL_NAME and ON_ALL_SERVER options are not available in a contained database.
DML Triggers
ALTER TRIGGER supports manually updatable views through INSTEAD OF triggers on tables and views. SQL
Server applies ALTER TRIGGER the same way for all kinds of triggers (AFTER, INSTEAD OF).
The first and last AFTER triggers to be executed on a table can be specified by using sp_settriggerorder. Only one
first and one last AFTER trigger can be specified on a table. If there are other AFTER triggers on the same table,
they are randomly executed.
If an ALTER TRIGGER statement changes a first or last trigger, the first or last attribute set on the modified trigger
is dropped, and the order value must be reset by using sp_settriggerorder.
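For example, the order can be reset with a call along these lines (the trigger name is a placeholder):

```sql
-- Make the hypothetical trigger dbo.trgAudit fire first among the
-- AFTER UPDATE triggers on its table.
EXEC sp_settriggerorder
    @triggername = N'dbo.trgAudit',
    @order = 'First',
    @stmttype = 'UPDATE';
```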
An AFTER trigger is executed only after the triggering SQL statement has executed successfully. This successful
execution includes all referential cascade actions and constraint checks associated with the object updated or
deleted. The AFTER trigger operation checks for the effects of the triggering statement and also all referential
cascade UPDATE and DELETE actions that are caused by the triggering statement.
When a DELETE action to a child or referencing table is the result of a CASCADE on a DELETE from the parent
table, and an INSTEAD OF trigger on DELETE is defined on that child table, the trigger is ignored and the DELETE
action is executed.
DDL Triggers
Unlike DML triggers, DDL triggers are not scoped to schemas. Therefore, the OBJECT_ID, OBJECT_NAME,
OBJECTPROPERTY, and OBJECTPROPERTYEX functions cannot be used when querying metadata about DDL triggers.
Use the catalog views instead. For more information, see Get Information About DDL Triggers.
Logon Triggers
Azure SQL Database does not support triggers on logon events.
Permissions
To alter a DML trigger requires ALTER permission on the table or view on which the trigger is defined.
To alter a DDL trigger defined with server scope (ON ALL SERVER ) or a logon trigger requires CONTROL
SERVER permission on the server. To alter a DDL trigger defined with database scope (ON DATABASE ) requires
ALTER ANY DATABASE DDL TRIGGER permission in the current database.
Examples
The following example creates a DML trigger in the AdventureWorks2012 database that prints a user-defined
message to the client when a user tries to add or change data in the SalesPersonQuotaHistory table. The trigger is
then modified by using ALTER TRIGGER to apply only to INSERT activities. This trigger is helpful
because it reminds the user who updates or inserts rows into this table to also notify the Compensation
department.
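The example code itself is missing here; reconstructed along the lines described (the exact statements in the published example may differ slightly):

```sql
USE AdventureWorks2012;
GO
-- Create the trigger: warn on both INSERT and UPDATE.
CREATE TRIGGER Sales.bonus_reminder
ON Sales.SalesPersonQuotaHistory
WITH ENCRYPTION
AFTER INSERT, UPDATE
AS RAISERROR ('Notify Compensation', 16, 10);
GO
-- Modify the trigger to apply only to INSERT activities.
ALTER TRIGGER Sales.bonus_reminder
ON Sales.SalesPersonQuotaHistory
AFTER INSERT
AS RAISERROR ('Notify Compensation', 16, 10);
GO
```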
See Also
DROP TRIGGER (Transact-SQL )
ENABLE TRIGGER (Transact-SQL )
DISABLE TRIGGER (Transact-SQL )
EVENTDATA (Transact-SQL )
sp_helptrigger (Transact-SQL )
Create a Stored Procedure
sp_addmessage (Transact-SQL )
Transactions
Get Information About DML Triggers
Get Information About DDL Triggers
sys.triggers (Transact-SQL )
sys.trigger_events (Transact-SQL )
sys.sql_modules (Transact-SQL )
sys.assembly_modules (Transact-SQL )
sys.server_triggers (Transact-SQL )
sys.server_trigger_events (Transact-SQL )
sys.server_sql_modules (Transact-SQL )
sys.server_assembly_modules (Transact-SQL )
Make Schema Changes on Publication Databases
ALTER USER (Transact-SQL)
5/3/2018 • 7 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Renames a database user or changes its default schema.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server

ALTER USER userName
WITH <set_item> [ ,...n ]
[;]

<set_item> ::=
NAME = newUserName
| DEFAULT_SCHEMA = { schemaName | NULL }
| LOGIN = loginName
| PASSWORD = 'password' [ OLD_PASSWORD = 'oldpassword' ]
| DEFAULT_LANGUAGE = { NONE | <lcid> | <language name> | <language alias> }
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]

-- Syntax for Azure SQL Database

ALTER USER userName
WITH <set_item> [ ,...n ]
[;]

<set_item> ::=
NAME = newUserName
| DEFAULT_SCHEMA = schemaName
| LOGIN = loginName
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]

-- Azure SQL Database update syntax

ALTER USER userName
WITH <set_item> [ ,...n ]

<set_item> ::=
NAME = newUserName
| DEFAULT_SCHEMA = { schemaName | NULL }
| LOGIN = loginName
| PASSWORD = 'password' [ OLD_PASSWORD = 'oldpassword' ]
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]

-- SQL Database syntax when connected to a federation member

ALTER USER userName
NAME = newUserName
[;]

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse

ALTER USER userName
WITH <set_item> [ ,...n ]

<set_item> ::=
NAME = newUserName
| LOGIN = loginName
| DEFAULT_SCHEMA = schema_name
[;]
Arguments
userName
Specifies the name by which the user is identified inside this database.
LOGIN =loginName
Re-maps a user to another login by changing the user's Security Identifier (SID ) to match the login's SID.
If the ALTER USER statement is the only statement in a SQL batch, Windows Azure SQL Database supports the
WITH LOGIN clause. If the ALTER USER statement is not the only statement in a SQL batch or is executed in
dynamic SQL, the WITH LOGIN clause is not supported.
NAME =newUserName
Specifies the new name for this user. newUserName must not already occur in the current database.
DEFAULT_SCHEMA = { schemaName | NULL }
Specifies the first schema that will be searched by the server when it resolves the names of objects for this user.
Setting the default schema to NULL removes a default schema from a Windows group. The NULL option cannot
be used with a Windows user.
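A minimal sketch, assuming a user named Mary5 and the Purchasing schema both exist:

```sql
-- Object names Mary5 references without a schema prefix will now be
-- resolved against the Purchasing schema first.
ALTER USER Mary5 WITH DEFAULT_SCHEMA = Purchasing;
```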
PASSWORD = 'password'
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Specifies the password for the user that is being changed. Passwords are case-sensitive.
NOTE
This option is available only for contained users. See Contained Databases and sp_migrate_user_to_contained (Transact-SQL)
for more information.
OLD_PASSWORD ='oldpassword'
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
The current user password that will be replaced by 'password'. Passwords are case-sensitive. OLD_PASSWORD is
required to change a password, unless you have ALTER ANY USER permission. Requiring OLD_PASSWORD
prevents users with IMPERSONATION permission from changing the password.
NOTE
This option is available only for contained users.
NOTE
This option may only be specified in a contained database and only for contained users.
ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.
Suppresses cryptographic metadata checks on the server in bulk copy operations. This enables the user to bulk
copy encrypted data between tables or databases, without decrypting the data. The default is OFF.
WARNING
Improper use of this option can lead to data corruption. For more information, see Migrate Sensitive Data Protected by
Always Encrypted.
Remarks
The default schema will be the first schema that will be searched by the server when it resolves the names of
objects for this database user. Unless otherwise specified, the default schema will be the owner of objects created
by this database user.
If the user has a default schema, that default schema will be used. If the user does not have a default schema, but the
user is a member of a group that has a default schema, the default schema of the group will be used. If the user
does not have a default schema, and is a member of more than one group, the default schema for the user will be
that of the Windows group with the lowest principal_id and an explicitly set default schema. If no default schema
can be determined for a user, the dbo schema will be used.
DEFAULT_SCHEMA can be set to a schema that does not currently occur in the database. Therefore, you can
assign a DEFAULT_SCHEMA to a user before that schema is created.
DEFAULT_SCHEMA cannot be specified for a user who is mapped to a certificate, or an asymmetric key.
IMPORTANT
The value of DEFAULT_SCHEMA is ignored if the user is a member of the sysadmin fixed server role. All members of the
sysadmin fixed server role have a default schema of dbo .
You can change the name of a user who is mapped to a Windows login or group only when the SID of the new
user name matches the SID that is recorded in the database. This check helps prevent spoofing of Windows logins
in the database.
The WITH LOGIN clause enables the remapping of a user to a different login. Users without a login, users mapped
to a certificate, or users mapped to an asymmetric key cannot be re-mapped with this clause. Only SQL users and
Windows users (or groups) can be remapped. The WITH LOGIN clause cannot be used to change the type of user,
such as changing a Windows account to a SQL Server login.
The name of the user will be automatically renamed to the login name if the following conditions are true.
The user is a Windows user.
The name is a Windows name (contains a backslash).
No new name was specified.
The current name differs from the login name.
Otherwise, the user will not be renamed unless the caller additionally invokes the NAME clause.
The name of a user mapped to a SQL Server login, a certificate, or an asymmetric key cannot contain the
backslash character (\).
Caution
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER AUTHORIZATION.
In such databases you must instead use the new catalog views. The new catalog views take into account the
separation of principals and schemas that was introduced in SQL Server 2005. For more information about
catalog views, see Catalog Views (Transact-SQL ).
Security
NOTE
A user who has ALTER ANY USER permission can change the default schema of any user. A user who has an altered schema
might unknowingly select data from the wrong table or execute code from the wrong schema.
Permissions
To change the name of a user requires the ALTER ANY USER permission.
To change the target login of a user requires the CONTROL permission on the database.
To change the user name of a user having CONTROL permission on the database requires the CONTROL
permission on the database.
To change the default schema or language requires ALTER permission on the user. Users can change their own
default schema or language.
Examples
All examples are executed in a user database.
A. Changing the name of a database user
The following example changes the name of the database user Mary5 to Mary51 .
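A sketch of that statement:

```sql
-- Rename the database user Mary5 to Mary51.
ALTER USER Mary5 WITH NAME = Mary51;
GO
```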
See Also
CREATE USER (Transact-SQL )
DROP USER (Transact-SQL )
Contained Databases
EVENTDATA (Transact-SQL )
sp_migrate_user_to_contained (Transact-SQL )
ALTER VIEW (Transact-SQL)
5/30/2018 • 4 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies a previously created view. This includes an indexed view. ALTER VIEW does not affect dependent stored
procedures or triggers and does not change permissions.
Transact-SQL Syntax Conventions
Syntax
ALTER VIEW [ schema_name . ] view_name [ ( column [ ,...n ] ) ]
[ WITH <view_attribute> [ ,...n ] ]
AS select_statement
[ WITH CHECK OPTION ] [ ; ]
<view_attribute> ::=
{
[ ENCRYPTION ]
[ SCHEMABINDING ]
[ VIEW_METADATA ]
}
Arguments
schema_name
Is the name of the schema to which the view belongs.
view_name
Is the view to change.
column
Is the name of one or more columns, separated by commas, that are to be part of the specified view.
IMPORTANT
Column permissions are maintained only when columns have the same name before and after ALTER VIEW is performed.
NOTE
In the columns for the view, the permissions for a column name apply across a CREATE VIEW or ALTER VIEW statement,
regardless of the source of the underlying data. For example, if permissions are granted on the SalesOrderID column in a
CREATE VIEW statement, an ALTER VIEW statement can rename the SalesOrderID column, such as to OrderRef, and still
have the permissions associated with the view using SalesOrderID.
ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Encrypts the entries in sys.syscomments that contain the text of the ALTER VIEW statement. WITH ENCRYPTION
prevents the view from being published as part of SQL Server replication.
SCHEMABINDING
Binds the view to the schema of the underlying table or tables. When SCHEMABINDING is specified, the base
tables cannot be modified in a way that would affect the view definition. The view definition itself must first be
modified or dropped to remove dependencies on the table to be modified. When you use SCHEMABINDING, the
select_statement must include the two-part names (schema.object) of tables, views, or user-defined functions that
are referenced. All referenced objects must be in the same database.
Views or tables that participate in a view created with the SCHEMABINDING clause cannot be dropped, unless
that view is dropped or changed so that it no longer has schema binding. Otherwise, the Database Engine raises an
error. Also, executing ALTER TABLE statements on tables that participate in views that have schema binding fail if
these statements affect the view definition.
VIEW_METADATA
Specifies that the instance of SQL Server will return to the DB -Library, ODBC, and OLE DB APIs the metadata
information about the view, instead of the base table or tables, when browse-mode metadata is being requested
for a query that references the view. Browse-mode metadata is additional metadata that the instance of Database
Engine returns to the client-side DB -Library, ODBC, and OLE DB APIs. This metadata enables the client-side APIs
to implement updatable client-side cursors. Browse-mode metadata includes information about the base table that
the columns in the result set belong to.
For views created with VIEW_METADATA, the browse-mode metadata returns the view name and not the base
table names when it describes columns from the view in the result set.
When a view is created by using WITH VIEW_METADATA, all its columns, except a timestamp column, are
updatable if the view has INSERT or UPDATE INSTEAD OF triggers. For more information, see the Remarks
section in CREATE VIEW (Transact-SQL).
AS
Specifies the actions the view is to take.
select_statement
Is the SELECT statement that defines the view.
WITH CHECK OPTION
Forces all data modification statements that are executed against the view to follow the criteria set within
select_statement.
Remarks
For more information about ALTER VIEW, see Remarks in CREATE VIEW (Transact-SQL).
NOTE
If the previous view definition was created by using WITH ENCRYPTION or CHECK OPTION, these options are enabled only if
they are included in ALTER VIEW.
If a view currently in use is modified by using ALTER VIEW, the Database Engine takes an exclusive schema lock on
the view. When the lock is granted, and there are no active users of the view, the Database Engine deletes all copies
of the view from the procedure cache. Existing plans referencing the view remain in the cache but are recompiled
when invoked.
ALTER VIEW can be applied to indexed views; however, ALTER VIEW unconditionally drops all indexes on the
view.
Permissions
To execute ALTER VIEW, at a minimum, ALTER permission on OBJECT is required.
Examples
The following example creates a view called EmployeeHireDate that contains all employees and their hire dates.
Permissions are granted to the view, but the requirements change to select only employees whose hire dates fall
before a certain date. Then, ALTER VIEW is used to replace the view.
USE AdventureWorks2012;
GO
CREATE VIEW HumanResources.EmployeeHireDate
AS
SELECT p.FirstName, p.LastName, e.HireDate
FROM HumanResources.Employee AS e JOIN Person.Person AS p
ON e.BusinessEntityID = p.BusinessEntityID;
GO
The view must be changed to include only the employees that were hired before 2002. If the view is dropped and
re-created instead of using ALTER VIEW, the previously used GRANT statement and any other statements that deal
with permissions pertaining to this view must be re-entered.
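The replacement described above can be sketched as follows; the cutoff date is illustrative, and because ALTER VIEW is used instead of DROP/CREATE, the existing permissions on the view are preserved:

```sql
-- Replace the view definition; existing GRANTs on the view remain in effect.
ALTER VIEW HumanResources.EmployeeHireDate
AS
SELECT p.FirstName, p.LastName, e.HireDate
FROM HumanResources.Employee AS e JOIN Person.Person AS p
ON e.BusinessEntityID = p.BusinessEntityID
WHERE e.HireDate < CONVERT(datetime, '20020101', 112);
GO
```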
See Also
CREATE TABLE (Transact-SQL)
CREATE VIEW (Transact-SQL)
DROP VIEW (Transact-SQL)
Create a Stored Procedure
SELECT (Transact-SQL)
EVENTDATA (Transact-SQL)
Make Schema Changes on Publication Databases
ALTER WORKLOAD GROUP (Transact-SQL)
5/4/2018 • 7 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Changes an existing Resource Governor workload group configuration, and optionally assigns it to a
Resource Governor resource pool.
Transact-SQL Syntax Conventions.
Syntax
ALTER WORKLOAD GROUP { group_name | "default" }
[ WITH
([ IMPORTANCE = { LOW | MEDIUM | HIGH } ]
[ [ , ] REQUEST_MAX_MEMORY_GRANT_PERCENT = value ]
[ [ , ] REQUEST_MAX_CPU_TIME_SEC = value ]
[ [ , ] REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value ]
[ [ , ] MAX_DOP = value ]
[ [ , ] GROUP_MAX_REQUESTS = value ] )
]
[ USING { pool_name | "default" } ]
[ ; ]
Arguments
group_name | "default"
Is the name of an existing user-defined workload group or the Resource Governor default workload group.
NOTE
Resource Governor creates the "default" and internal groups when SQL Server is installed.
The option "default" must be enclosed by quotation marks ("") or brackets ([]) when used with ALTER
WORKLOAD GROUP to avoid conflict with DEFAULT, which is a system reserved word. For more information,
see Database Identifiers.
NOTE
Predefined workload groups and resource pools all use lowercase names, such as "default". This should be taken into
account for servers that use case-sensitive collation. Servers with case-insensitive collation, such as
SQL_Latin1_General_CP1_CI_AS, will treat "default" and "Default" as the same.
IMPORTANCE = { LOW | MEDIUM | HIGH }
Specifies the relative importance of a request in the workload group. IMPORTANCE is local to the resource pool;
workload groups of different importance inside the same resource pool affect each other, but do not affect
workload groups in another resource pool.
REQUEST_MAX_MEMORY_GRANT_PERCENT = value
Specifies the maximum amount of memory that a single request can take from the pool. This percentage is
relative to the resource pool size specified by MAX_MEMORY_PERCENT.
NOTE
The amount specified only refers to query execution grant memory.
value must be 0 or a positive integer. The allowed range for value is from 0 through 100. The default setting for
value is 25.
Note the following:
Setting value to 0 prevents queries with SORT and HASH JOIN operations in user-defined workload
groups from running.
We do not recommend setting value greater than 70 because the server may be unable to set aside
enough free memory if other concurrent queries are running. This may eventually lead to query time-out
error 8645.
NOTE
If the query memory requirements exceed the limit that is specified by this parameter, the server does the following:
For user-defined workload groups, the server tries to reduce the query degree of parallelism until the memory requirement
falls under the limit, or until the degree of parallelism equals 1. If the query memory requirement is still greater than the
limit, error 8657 occurs.
For internal and default workload groups, the server permits the query to obtain the required memory.
Be aware that both cases are subject to time-out error 8645 if the server has insufficient physical memory.
REQUEST_MAX_CPU_TIME_SEC = value
Specifies the maximum amount of CPU time, in seconds, that a request can use. value must be 0 or a positive
integer. The default setting for value is 0, which means unlimited.
NOTE
By default, Resource Governor will not prevent a request from continuing if the maximum time is exceeded. However, an
event will be generated. For more information, see CPU Threshold Exceeded Event Class.
IMPORTANT
Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x) CU3, and using trace flag 2422, Resource Governor
will abort a request when the maximum time is exceeded.
REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value
Specifies the maximum time, in seconds, that a query can wait for memory grant (work buffer memory) to
become available.
NOTE
A query does not always fail when memory grant time-out is reached. A query will only fail if there are too many concurrent
queries running. Otherwise, the query may only get the minimum memory grant, resulting in reduced query performance.
value must be a positive integer. The default setting for value, 0, uses an internal calculation based on query cost
to determine the maximum time.
MAX_DOP = value
Specifies the maximum degree of parallelism (DOP) for parallel requests. value must be 0 or a positive integer, 1
through 255. When value is 0, the server chooses the max degree of parallelism. This is the default and
recommended setting.
NOTE
The actual value that the Database Engine sets for MAX_DOP might be less than the specified value. The final value is
determined by the formula min(255, number of CPUs).
Caution
Changing MAX_DOP can adversely affect a server's performance. If you must change MAX_DOP, we recommend
that it be set to a value that is less than or equal to the maximum number of hardware schedulers that are present
in a single NUMA node. We recommend that you do not set MAX_DOP to a value greater than 8.
MAX_DOP is handled as follows:
MAX_DOP as a query hint is honored as long as it does not exceed workload group MAX_DOP.
MAX_DOP as a query hint always overrides sp_configure 'max degree of parallelism'.
Workload group MAX_DOP overrides sp_configure 'max degree of parallelism'.
If the query is marked as serial (MAX_DOP = 1) at compile time, it cannot be changed back to parallel at
run time regardless of the workload group or sp_configure setting.
After DOP is configured, it can only be lowered on grant memory pressure. Workload group
reconfiguration is not visible while waiting in the grant memory queue.
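As a sketch under the rules above, capping parallelism for a hypothetical user-defined group named reporting might look like this (the group name and value are illustrative):

```sql
-- Limit requests in the reporting group to at most 4 parallel workers.
ALTER WORKLOAD GROUP reporting
WITH (MAX_DOP = 4);
GO
-- Changes take effect only after reconfiguration.
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```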
GROUP_MAX_REQUESTS = value
Specifies the maximum number of simultaneous requests that are allowed to execute in the workload
group. value must be 0 or a positive integer. The default setting for value, 0, allows unlimited requests.
When the maximum concurrent requests are reached, a user in that group can log in, but is placed in a wait
state until the number of concurrent requests drops below the value specified.
USING { pool_name | "default" }
Associates the workload group with the user-defined resource pool identified by pool_name, which in effect
puts the workload group in the resource pool. If pool_name is not provided or if the USING argument is
not used, the workload group is put in the predefined Resource Governor default pool.
The option "default" must be enclosed by quotation marks ("") or brackets ([]) when used with ALTER
WORKLOAD GROUP to avoid conflict with DEFAULT, which is a system reserved word. For more
information, see Database Identifiers.
NOTE
The option "default" is case-sensitive.
Remarks
ALTER WORKLOAD GROUP is allowed on the default group.
Changes to the workload group configuration do not take effect until after ALTER RESOURCE GOVERNOR
RECONFIGURE is executed. When changing a plan-affecting setting, the new setting takes effect in
previously cached plans only after executing DBCC FREEPROCCACHE (pool_name), where pool_name is the name of
the Resource Governor resource pool with which the workload group is associated.
If you are changing MAX_DOP to 1, executing DBCC FREEPROCCACHE is not required because parallel
plans can run in serial mode. However, a parallel plan run serially may not be as efficient as a plan compiled as a serial plan.
If you are changing MAX_DOP from 1 to 0 or a value greater than 1, executing DBCC FREEPROCCACHE
is not required. However, serial plans cannot run in parallel, so clearing the respective cache will allow new
plans to potentially be compiled using parallelism.
Caution
Clearing cached plans from a resource pool that is associated with more than one workload group will affect all
workload groups that use the resource pool identified by pool_name.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states. For
more information, see Resource Governor.
REQUEST_MEMORY_GRANT_PERCENT: In SQL Server 2005, index creation is allowed to use more workspace
memory than initially granted for improved performance. This special handling is supported by Resource
Governor in later versions; however, the initial grant and any additional memory grant are limited by resource
pool and workload group settings.
Index Creation on a Partitioned Table
The memory consumed by index creation on non-aligned partitioned table is proportional to the number of
partitions involved. If the total required memory exceeds the per-query limit
(REQUEST_MAX_MEMORY_GRANT_PERCENT) imposed by the Resource Governor workload group setting,
this index creation may fail to execute. Because the "default" workload group allows a query to exceed the per-
query limit with the minimum required memory to start, for SQL Server 2005 compatibility, the user may be able
to run the same index creation in the "default" workload group if the "default" resource pool has enough total
memory configured to run such a query.
Permissions
Requires CONTROL SERVER permission.
Examples
The following example shows how to change the importance of requests in the default group from MEDIUM to
LOW.
ALTER WORKLOAD GROUP "default"
WITH (IMPORTANCE = LOW);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
The following example shows how to move a workload group from the pool that it is in to the default pool.
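The original statement for that example is not shown here; a sketch, using a hypothetical group named adhoc, might look like this:

```sql
-- Move the adhoc workload group back to the predefined default pool.
ALTER WORKLOAD GROUP adhoc
USING "default";
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```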
See Also
Resource Governor
CREATE WORKLOAD GROUP (Transact-SQL)
DROP WORKLOAD GROUP (Transact-SQL)
CREATE RESOURCE POOL (Transact-SQL)
ALTER RESOURCE POOL (Transact-SQL)
DROP RESOURCE POOL (Transact-SQL)
ALTER RESOURCE GOVERNOR (Transact-SQL)
ALTER XML SCHEMA COLLECTION (Transact-SQL)
5/3/2018 • 5 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds new schema components to an existing XML schema collection.
Transact-SQL Syntax Conventions
Syntax
ALTER XML SCHEMA COLLECTION [ relational_schema. ]sql_identifier ADD 'Schema Component'
Arguments
relational_schema
Identifies the relational schema name. If not specified, the default relational schema is assumed.
sql_identifier
Is the SQL identifier for the XML schema collection.
' Schema Component '
Is the schema component to insert.
Remarks
Use the ALTER XML SCHEMA COLLECTION to add new XML schemas whose namespaces are not already in the
XML schema collection, or add new components to existing namespaces in the collection.
The following example adds a new <element> to the existing namespace http://MySchema/test_xml_schema in the
collection MyColl. ALTER XML SCHEMA COLLECTION adds element <anotherElement> to the previously defined
namespace http://MySchema/test_xml_schema.
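The statement for that example is not shown here; a minimal sketch follows. The element's type (string) is an assumption, since the original schema content is not shown:

```sql
-- Add a new global element to the existing namespace in the collection.
ALTER XML SCHEMA COLLECTION MyColl ADD
N'<schema xmlns="http://www.w3.org/2001/XMLSchema"
          targetNamespace="http://MySchema/test_xml_schema">
    <element name="anotherElement" type="string"/>
</schema>';
GO
```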
Note that if some of the components you want to add in the collection reference components that are already in
the same collection, you must use <import namespace="referenced_component_namespace" /> . However, it is not valid
to use the current schema namespace in <xsd:import> , and therefore components from the same target
namespace as the current schema namespace are automatically imported.
To remove collections, use DROP XML SCHEMA COLLECTION (Transact-SQL).
If the schema collection already contains a lax validation wildcard or an element of type xs:anyType, adding a new
global element, type, or attribute declaration to the schema collection will cause a revalidation of all the stored data
that is constrained by the schema collection.
Permissions
To alter an XML SCHEMA COLLECTION requires ALTER permission on the collection.
Examples
A. Creating XML schema collection in the database
The following example creates the XML schema collection ManuInstructionsSchemaCollection . The collection has
only one schema namespace.
<xsd:element name="root">
<xsd:complexType mixed="true">
<xsd:sequence>
<xsd:element name="Location" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType mixed="true">
<xsd:sequence>
<xsd:element name="step" type="StepType" minOccurs="1" maxOccurs="unbounded" />
</xsd:sequence>
<xsd:attribute name="LocationID" type="xsd:integer" use="required"/>
<xsd:attribute name="SetupHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="MachineHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="LaborHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="LotSize" type="xsd:decimal" use="optional"/>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>';
GO
-- Verify - list of collections in the database.
SELECT *
FROM sys.xml_schema_collections;
-- Verify - list of namespaces in the database.
SELECT name
FROM sys.xml_schema_namespaces;
-- Use it. Create a typed xml variable. Note the collection name
-- that is specified.
DECLARE @x xml (ManuInstructionsSchemaCollection);
GO
--Or create a typed xml column.
CREATE TABLE T (
i int primary key,
x xml (ManuInstructionsSchemaCollection));
GO
-- Clean up.
DROP TABLE T;
GO
DROP XML SCHEMA COLLECTION ManuInstructionsSchemaCollection;
GO
USE master;
GO
DROP DATABASE SampleDB;
Alternatively, you can assign the schema collection to a variable and specify the variable in the
CREATE XML SCHEMA COLLECTION statement as follows:
The variable in the example is of the nvarchar(max) type. The variable can also be of the xml data type, in which case
it is implicitly converted to a string.
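A minimal sketch of the variable-based form; the collection name and schema content below are illustrative, not from the original example:

```sql
-- Assign the schema text to a variable, then pass the variable to CREATE.
DECLARE @MySchemaCollection nvarchar(max);
SET @MySchemaCollection =
N'<schema xmlns="http://www.w3.org/2001/XMLSchema"
          targetNamespace="http://MySchema/sample">
    <element name="root" type="string"/>
</schema>';
CREATE XML SCHEMA COLLECTION SampleCollection AS @MySchemaCollection;
```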
For more information, see View a Stored XML Schema Collection.
You can store schema collections in an xml type column. In this case, to create XML schema collection, perform the
following steps:
1. Retrieve the schema collection from the column by using a SELECT statement and assign it to a variable of
xml type, or a varchar type.
2. Specify the variable name in the CREATE XML SCHEMA COLLECTION statement.
The CREATE XML SCHEMA COLLECTION statement stores only the schema components that SQL Server
understands; not everything in the XML schema is stored in the database. Therefore, if you want the XML
schema collection back exactly the way it was supplied, we recommend that you save your XML schemas in
a database column or some other folder on your computer.
B. Specifying multiple schema namespaces in a schema collection
You can specify multiple XML schemas when you create an XML schema collection. The following example creates
the XML schema collection ProductDescriptionSchemaCollection, which includes two XML schema namespaces.
CREATE XML SCHEMA COLLECTION ProductDescriptionSchemaCollection AS
'<xsd:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-
works/ProductModelWarrAndMain"
xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain"
elementFormDefault="qualified"
xmlns:xsd="http://www.w3.org/2001/XMLSchema" >
<xsd:element name="Warranty" >
<xsd:complexType>
<xsd:sequence>
<xsd:element name="WarrantyPeriod" type="xsd:string" />
<xsd:element name="Description" type="xsd:string" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
<xs:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-
works/ProductModelDescription"
xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription"
elementFormDefault="qualified"
xmlns:mstns="http://tempuri.org/XMLSchema.xsd"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:wm="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain" >
<xs:import
namespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain" />
<xs:element name="ProductDescription" type="ProductDescription" />
<xs:complexType name="ProductDescription">
<xs:sequence>
<xs:element name="Summary" type="Summary" minOccurs="0" />
</xs:sequence>
<xs:attribute name="ProductModelID" type="xs:string" />
<xs:attribute name="ProductModelName" type="xs:string" />
</xs:complexType>
<xs:complexType name="Summary" mixed="true" >
<xs:sequence>
<xs:any processContents="skip" namespace="http://www.w3.org/1999/xhtml" minOccurs="0"
maxOccurs="unbounded" />
</xs:sequence>
</xs:complexType>
</xs:schema>'
;
GO
-- Clean up
DROP XML SCHEMA COLLECTION ProductDescriptionSchemaCollection;
GO
See Also
CREATE XML SCHEMA COLLECTION (Transact-SQL)
DROP XML SCHEMA COLLECTION (Transact-SQL)
EVENTDATA (Transact-SQL)
Compare Typed XML to Untyped XML
Requirements and Limitations for XML Schema Collections on the Server
BACKUP (Transact-SQL)
5/16/2018 • 39 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Backs up a complete SQL Server database to create a database backup, or one or more files or filegroups of the
database to create a file backup (BACKUP DATABASE ). Also, under the full recovery model or bulk-logged
recovery model, backs up the transaction log of the database to create a log backup (BACKUP LOG ).
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details on all T-SQL behavior changes.
Syntax
Backing Up a Whole Database
BACKUP DATABASE { database_name | @database_name_var }
TO <backup_device> [ ,...n ]
[ <MIRROR TO clause> ] [ next-mirror-to ]
[ WITH { DIFFERENTIAL -- Not supported in SQL Database Managed Instance
| <general_WITH_options> [ ,...n ] } ]
[;]
<backup_device>::=
{
{ logical_device_name | @logical_device_name_var }
| { DISK -- Not supported in SQL Database Managed Instance
| TAPE -- Not supported in SQL Database Managed Instance
| URL } =
{ 'physical_device_name' | @physical_device_name_var | 'NUL' }
}
<MIRROR TO clause>::=
MIRROR TO <backup_device> [ ,...n ]
<file_or_filegroup>::=
{
FILE = { logical_file_name | @logical_file_name_var }
| FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
}
<read_only_filegroup>::=
FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
--Compatibility Options
RESTART
--Monitoring Options
STATS [ = percentage ]
--Tape Options. These are not supported in SQL Database Managed Instance
{ REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }
--Log-specific Options. These are not supported in SQL Database Managed Instance
{ NORECOVERY | STANDBY = undo_file_name }
| NO_TRUNCATE
--Encryption Options
ENCRYPTION (ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY } , encryptor_options )
<encryptor_options> ::=
SERVER CERTIFICATE = Encryptor_Name | SERVER ASYMMETRIC KEY = Encryptor_Name
Arguments
DATABASE
Specifies a complete database backup. If a list of files and filegroups is specified, only those files and filegroups are
backed up. During a full or differential database backup, SQL Server backs up enough of the transaction log to
produce a consistent database when the backup is restored.
When you restore a backup created by BACKUP DATABASE (a data backup), the entire backup is restored. Only a
log backup can be restored to a specific time or transaction within the backup.
NOTE
Only a full database backup can be performed on the master database.
NOTE
After a typical log backup, some transaction log records become inactive, unless you specify WITH NO_TRUNCATE or
COPY_ONLY. The log is truncated after all the records within one or more virtual log files become inactive. If the log is not
being truncated after routine log backups, something might be delaying log truncation. For more information, see Factors
that can delay log truncation.
{ database_name | @database_name_var }
Is the database from which the transaction log, partial database, or complete database is backed up. If supplied as
a variable (@database_name_var), this name can be specified either as a string constant
(@database_name_var=database name) or as a variable of character string data type, except for the ntext or text
data types.
NOTE
The mirror database in a database mirroring partnership cannot be backed up.
<file_or_filegroup> [ ,...n ]
Used only with BACKUP DATABASE, specifies a database file or filegroup to include in a file backup, or specifies a
read-only file or filegroup to include in a partial backup.
FILE = { logical_file_name | @logical_file_name_var }
Is the logical name of a file or a variable whose value equates to the logical name of a file that is to be included in
the backup.
FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
Is the logical name of a filegroup or a variable whose value equates to the logical name of a filegroup that is to be
included in the backup. Under the simple recovery model, a filegroup backup is allowed only for a read-only
filegroup.
NOTE
Consider using file backups when the database size and performance requirements make a database backup impractical. The
NUL device can be used to test the performance of backups, but should not be used in production environments.
n
Is a placeholder that indicates that multiple files and filegroups can be specified in a comma-separated list. The
number is unlimited.
For more information, see Full File Backups (SQL Server) and Back Up Files and Filegroups (SQL Server).
READ_WRITE_FILEGROUPS [ , FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var } [ ,...n ]
]
Specifies a partial backup. A partial backup includes all the read/write files in a database: the primary filegroup
and any read/write secondary filegroups, and also any specified read-only files or filegroups.
READ_WRITE_FILEGROUPS
Specifies that all read/write filegroups be backed up in the partial backup. If the database is read-only,
READ_WRITE_FILEGROUPS includes only the primary filegroup.
IMPORTANT
Explicitly listing the read/write filegroups by using FILEGROUP instead of READ_WRITE_FILEGROUPS creates a file backup.
NOTE
The NUL disk device will discard all information sent to it and should only be used for testing. This is not for production use.
IMPORTANT
Starting with SQL Server 2012 (11.x) SP1 CU2 through SQL Server 2014 (12.x), you can only back up to a single device when
backing up to URL. In order to back up to multiple devices when backing up to URL, you must use SQL Server 2016 (13.x)
through SQL Server 2017 and you must use Shared Access Signature (SAS) tokens. For examples creating a Shared Access
Signature, see SQL Server Backup to URL and Simplifying creation of SQL Credentials with Shared Access Signature (SAS)
tokens on Azure Storage with PowerShell.
URL applies to: SQL Server (SQL Server 2012 (11.x) SP1 CU2 through SQL Server 2017) and SQL Database
Managed Instance.
A disk device does not have to exist before it is specified in a BACKUP statement. If the physical device exists and
the INIT option is not specified in the BACKUP statement, the backup is appended to the device.
NOTE
The NUL device will discard all input sent to this file, however the backup will still mark all pages as backed up.
NOTE
The TAPE option will be removed in a future version of SQL Server. Avoid using this feature in new development work, and
plan to modify applications that currently use this feature.
n
Is a placeholder that indicates that up to 64 backup devices may be specified in a comma-separated list.
MIRROR TO <backup_device> [ ,...n ]
Specifies a set of up to three secondary backup devices, each of which mirrors the backup devices specified in the
TO clause. The MIRROR TO clause must specify the same type and number of backup devices as the TO clause.
The maximum number of MIRROR TO clauses is three.
This option is available only in the Enterprise edition of SQL Server.
NOTE
For MIRROR TO = DISK, BACKUP automatically determines the appropriate block size for disk devices. For more information
about block size, see "BLOCKSIZE" later in this table.
NOTE
By default, BACKUP DATABASE creates a full backup.
If you choose to encrypt, you must also specify the encryptor using the encryptor options:
SERVER CERTIFICATE = Encryptor_Name
SERVER ASYMMETRIC KEY = Encryptor_Name
WARNING
When encryption is used in conjunction with the FILE_SNAPSHOT argument, the metadata file itself is encrypted using the
specified encryption algorithm and the system verifies that Transparent Data Encryption (TDE) was completed for the
database. No additional encryption happens for the data itself. The backup fails if the database was not encrypted or if the
encryption was not completed before the backup statement was issued.
NOTE
To specify a backup set for a restore operation, use the FILE = <backup_set_file_number> option. For more information
about how to specify a backup set, see "Specifying a Backup Set" in RESTORE Arguments (Transact-SQL).
COPY_ONLY
Applies to: SQL Server and SQL Database Managed Instance.
Specifies that the backup is a copy-only backup, which does not affect the normal sequence of backups. A
copy-only backup is created independently of your regularly scheduled, conventional backups. A copy-only
backup does not affect your overall backup and restore procedures for the database.
Copy-only backups should be used in situations in which a backup is taken for a special purpose, such as backing
up the log before an online file restore. Typically, a copy-only log backup is used once and then deleted.
When used with BACKUP DATABASE, the COPY_ONLY option creates a full backup that cannot serve as a
differential base. The differential bitmap is not updated, and differential backups behave as if the copy-only
backup does not exist. Subsequent differential backups use the most recent conventional full backup as
their base.
IMPORTANT
If DIFFERENTIAL and COPY_ONLY are used together, COPY_ONLY is ignored, and a differential backup is created.
When used with BACKUP LOG, the COPY_ONLY option creates a copy-only log backup, which does not
truncate the transaction log. The copy-only log backup has no effect on the log chain, and other log
backups behave as if the copy-only backup does not exist.
For more information, see Copy-Only Backups (SQL Server).
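As a sketch, a copy-only full backup might look like this; the database name and path are illustrative:

```sql
-- Copy-only full backup: does not reset the differential base.
BACKUP DATABASE AdventureWorks2012
TO DISK = N'Z:\Backups\AdventureWorks2012_copy.bak'
WITH COPY_ONLY;
```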
{ COMPRESSION | NO_COMPRESSION }
In SQL Server 2008 Enterprise and later versions only, specifies whether backup compression is performed on
this backup, overriding the server-level default.
At installation, the default behavior is no backup compression. But this default can be changed by setting the
backup compression default server configuration option. For information about viewing the current value of this
option, see View or Change Server Properties (SQL Server).
For information about using backup compression with Transparent Data Encryption (TDE) enabled databases, see
the Remarks section.
COMPRESSION
Explicitly enables backup compression.
NO_COMPRESSION
Explicitly disables backup compression.
DESCRIPTION = { 'text' | @text_variable }
Specifies the free-form text describing the backup set. The string can have a maximum of 255 characters.
NAME = { backup_set_name | @backup_set_var }
Specifies the name of the backup set. Names can have a maximum of 128 characters. If NAME is not specified, it
is blank.
{ EXPIREDATE = 'date' | RETAINDAYS = days }
Specifies when the backup set for this backup can be overwritten. If these options are both used, RETAINDAYS
takes precedence over EXPIREDATE.
If neither option is specified, the expiration date is determined by the media retention configuration setting. For
more information, see Server Configuration Options (SQL Server).
IMPORTANT
These options only prevent SQL Server from overwriting a file. Tapes can be erased using other methods, and disk files can
be deleted through the operating system. For more information about expiration verification, see SKIP and FORMAT in this
topic.
For information about how to specify datetime values, see Date and Time Types.
NOTE
To ignore the expiration date, use the SKIP option.
NOTE
For information about the interactions between { NOINIT | INIT } and { NOSKIP | SKIP }, see Remarks later in this topic.
NOINIT
Indicates that the backup set is appended to the specified media set, preserving existing backup sets. If a media
password is defined for the media set, the password must be supplied. NOINIT is the default.
For more information, see Media Sets, Media Families, and Backup Sets (SQL Server).
INIT
Specifies that all backup sets should be overwritten, but preserves the media header. If INIT is specified, any
existing backup set on that device is overwritten, if conditions permit. By default, BACKUP checks for the
following conditions and does not overwrite the backup media if either condition exists:
Any backup set has not yet expired. For more information, see the EXPIREDATE and RETAINDAYS options.
The backup set name given in the BACKUP statement, if provided, does not match the name on the backup
media. For more information, see the NAME option, earlier in this section.
To override these checks, use the SKIP option.
For more information, see Media Sets, Media Families, and Backup Sets (SQL Server).
{ NOSKIP | SKIP }
Controls whether a backup operation checks the expiration date and time of the backup sets on the media before
overwriting them.
NOTE
For information about the interactions between { NOINIT | INIT } and { NOSKIP | SKIP }, see "Remarks," later in this topic.
NOSKIP
Instructs the BACKUP statement to check the expiration date of all backup sets on the media before allowing
them to be overwritten. This is the default behavior.
SKIP
Disables the checking of backup set expiration and name that is usually performed by the BACKUP statement to
prevent overwrites of backup sets. For information about the interactions between { INIT | NOINIT } and {
NOSKIP | SKIP }, see "Remarks," later in this topic.
To view the expiration dates of backup sets, query the expiration_date column of the backupset history table.
{ NOFORMAT | FORMAT }
Specifies whether the media header should be written on the volumes used for this backup operation, overwriting
any existing media header and backup sets.
NOFORMAT
Specifies that the backup operation preserves the existing media header and backup sets on the media volumes
used for this backup operation. This is the default behavior.
FORMAT
Specifies that a new media set be created. FORMAT causes the backup operation to write a new media header on
all media volumes used for the backup operation. The existing contents of the volume become invalid, because
any existing media header and backup sets are overwritten.
IMPORTANT
Use FORMAT carefully. Formatting any volume of a media set renders the entire media set unusable. For example, if you
initialize a single tape belonging to an existing striped media set, the entire media set is rendered useless.
Specifying FORMAT implies SKIP; SKIP does not need to be explicitly stated.
MEDIADESCRIPTION = { text | @text_variable }
Specifies the free-form text description, maximum of 255 characters, of the media set.
MEDIANAME = { media_name | @media_name_variable }
Specifies the media name for the entire backup media set. The media name must be no longer than 128
characters. If MEDIANAME is specified, it must match the media name already recorded on the backup
volumes. If it is not specified, or if the SKIP option is specified, there is no verification check of the media
name.
BLOCKSIZE = { blocksize | @blocksize_variable }
Specifies the physical block size, in bytes. The supported sizes are 512, 1024, 2048, 4096, 8192, 16384, 32768,
and 65536 (64 KB) bytes. The default is 65536 for tape devices and 512 otherwise. Typically, this option is
unnecessary because BACKUP automatically selects a block size that is appropriate to the device. Explicitly stating
a block size overrides the automatic selection of block size.
If you are taking a backup that you plan to copy onto and restore from a CD-ROM, specify BLOCKSIZE=2048.
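For example, a backup intended for CD-ROM media might be taken as follows (the database name and path are illustrative):

```sql
-- Use a 2048-byte block size for media destined for CD-ROM.
BACKUP DATABASE AdventureWorks2012
TO DISK = 'Z:\SQLServerBackups\AdvWorksData.bak'
WITH BLOCKSIZE = 2048;
```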
NOTE
This option typically affects performance only when writing to tape devices.
NOTE
For important information about using the BUFFERCOUNT option, see the Incorrect BufferCount data transfer option can
lead to OOM condition blog.
NOTE
When creating backups by using the SQL Writer Service, if the database has configured FILESTREAM, or includes memory
optimized filegroups, then the MAXTRANSFERSIZE at the time of a restore should be greater than or equal to the
MAXTRANSFERSIZE that was used when the backup was created.
NOTE
For Transparent Data Encryption (TDE) enabled databases with a single data file, the default MAXTRANSFERSIZE is 65536 (64 KB). For databases not encrypted with TDE, the default MAXTRANSFERSIZE is 1048576 (1 MB) when using backup to DISK, and 65536 (64 KB) when using VDI or TAPE. For more information about using backup compression with TDE-encrypted databases, see the Remarks section.
{ UNLOAD | NOUNLOAD }
Applies to: SQL Server
NOTE
UNLOAD and NOUNLOAD are session settings that persist for the life of the session or until it is reset by specifying the
alternative.
NOTE
For a backup to a tape backup device, the BLOCKSIZE option can affect the performance of the backup operation. This
option typically affects performance only when writing to tape devices.
Log-specific options
Applies to: SQL Server
These options are only used with BACKUP LOG .
NOTE
If you do not want to take log backups, use the simple recovery model. For more information, see Recovery Models (SQL
Server).
NOTE
For an introduction to backup in SQL Server, see Backup Overview (SQL Server).
Backup types
The supported backup types depend on the recovery model of the database, as follows:
All recovery models support full and differential backups of data.
File or filegroup: File backups cover one or more files or filegroups, and are relevant only for databases
that contain multiple filegroups. Under the simple recovery model, file backups are essentially restricted to
read-only secondary filegroups.
Optionally, each file backup can serve as the base of a series of one or more differential file backups.
Under the full recovery model or bulk-logged recovery model, conventional backups also include
sequential transaction log backups (or log backups), which are required. Each log backup covers the
portion of the transaction log that was active when the backup was created, and it includes all log records
not backed up in a previous log backup.
To minimize work-loss exposure, at the cost of administrative overhead, you should schedule frequent log
backups. Scheduling differential backups between full backups can reduce restore time by reducing the
number of log backups you have to restore after restoring the data.
We recommend that you put log backups on a separate volume than the database backups.
NOTE
Before you can create the first log backup, you must create a full backup.
A copy-only backup is a special-purpose full backup or log backup that is independent of the normal
sequence of conventional backups. To create a copy-only backup, specify the COPY_ONLY option in your
BACKUP statement. For more information, see Copy-Only Backups (SQL Server).
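A minimal sketch of a copy-only backup (the database name and path are illustrative):

```sql
-- A copy-only full backup; it does not affect the differential base
-- or the log backup chain of the conventional backup sequence.
BACKUP DATABASE AdventureWorks2012
TO DISK = 'Z:\SQLServerBackups\AdvWorksCopyOnly.bak'
WITH COPY_ONLY;
```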
Transaction Log Truncation
To avoid filling up the transaction log of a database, routine backups are essential. Under the simple recovery
model, log truncation occurs automatically after you back up the database, and under the full recovery model,
after you back up the transaction log. However, sometimes the truncation process can be delayed. For information
about factors that can delay log truncation, see The Transaction Log (SQL Server).
NOTE
The BACKUP LOG WITH NO_LOG and WITH TRUNCATE_ONLY options have been discontinued. If you are using the full or bulk-
logged recovery model recovery and you must remove the log backup chain from a database, switch to the simple recovery
model. For more information, see View or Change the Recovery Model of a Database (SQL Server).
After a backup device is defined as part of a stripe set, it cannot be used for a single-device backup unless
FORMAT is specified. Similarly, a backup device that contains nonstriped backups cannot be used in a stripe set
unless FORMAT is specified. To split a striped backup set, use FORMAT.
If neither MEDIANAME nor MEDIADESCRIPTION is specified when a media header is written, the media header
field corresponding to the blank item is empty.
Working with a mirrored media set
Typically, backups are unmirrored, and BACKUP statements simply include a TO clause. However, a total of four
mirrors is possible per media set. For a mirrored media set, the backup operation writes to multiple groups of
backup devices. Each group of backup devices comprises a single mirror within the mirrored media set. Every
mirror must use the same quantity and type of physical backup devices, which must all have the same properties.
To back up to a mirrored media set, all of the mirrors must be present. To back up to a mirrored media set, specify
the TO clause to specify the first mirror, and specify a MIRROR TO clause for each additional mirror.
For a mirrored media set, each MIRROR TO clause must list the same number and type of devices as the TO clause.
The following example writes to a mirrored media set that contains two mirrors and uses three devices per mirror:
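A sketch of such a statement, consistent with the description above (two mirrors, three devices per mirror; file names are illustrative):

```sql
-- Write one backup to a mirrored media set: the TO clause lists the
-- first mirror's devices, MIRROR TO lists the second mirror's devices.
BACKUP DATABASE AdventureWorks2012
TO DISK = 'C:\AdventureWorks1a.bak',
   DISK = 'C:\AdventureWorks2a.bak',
   DISK = 'C:\AdventureWorks3a.bak'
MIRROR TO DISK = 'C:\AdventureWorks1b.bak',
   DISK = 'C:\AdventureWorks2b.bak',
   DISK = 'C:\AdventureWorks3b.bak'
WITH FORMAT,
   MEDIANAME = 'AdventureWorksSet1';
```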
IMPORTANT
This example is designed to allow you to test it on your local system. In practice, backing up to multiple devices on the same
drive would hurt performance and would eliminate the redundancy for which mirrored media sets are designed.
Media families in mirrored media sets
Each backup device specified in the TO clause of a BACKUP statement corresponds to a media family. For
example, if the TO clause lists three devices, BACKUP writes data to three media families. In a mirrored media
set, every mirror must contain a copy of every media family. This is why the number of devices must be identical
in every mirror.
When multiple devices are listed for each mirror, the order of the devices determines which media family is
written to a particular device. For example, in each of the device lists, the second device corresponds to the second
media family. For the devices in the above example, the correspondence between devices and media families is
shown in the following table.
A media family must always be backed up onto the same device within a specific mirror. Therefore, each time you
use an existing media set, list the devices of each mirror in the same order as they were specified when the media
set was created.
For more information about mirrored media sets, see Mirrored Backup Media Sets (SQL Server). For more
information about media sets and media families in general, see Media Sets, Media Families, and Backup Sets
(SQL Server).
Restoring SQL Server backups
To restore a database and, optionally, recover it to bring it online, or to restore a file or filegroup, use either the
Transact-SQL RESTORE statement or the SQL Server Management Studio Restore tasks. For more information,
see Restore and Recovery Overview (SQL Server).
NOTE
If the tape media is empty or the disk backup file does not exist, all these interactions write a media header and proceed. If
the media is not empty and lacks a valid media header, these operations give feedback stating that this is not valid MTF
media, and they terminate the backup operation.
NOSKIP with NOINIT: If the volume contains a valid media header, verifies that the media name matches the
given MEDIANAME, if any. If it matches, appends the backup set, preserving all existing backup sets. If the
volume does not contain a valid media header, an error occurs.
NOSKIP with INIT: If the volume contains a valid media header, performs the following checks: if MEDIANAME
was specified, verifies that the given media name matches the media header's media name;1 verifies that there
are no unexpired backup sets already on the media. If there are, terminates the backup.
SKIP with NOINIT: If the volume contains a valid media header, appends the backup set, preserving all existing
backup sets.
SKIP with INIT: If the volume contains a valid2 media header, overwrites any backup sets on the media,
preserving only the media header. If the media is empty, generates a media header using the specified
MEDIANAME and MEDIADESCRIPTION, if any.
1 The user must belong to the appropriate fixed database or server roles to perform a backup operation.
2 Validity includes the MTF version number and other header information. If the version specified is unsupported
or an unexpected value, an error occurs.
Compatibility
Caution
Backups that are created by a more recent version of SQL Server cannot be restored in earlier versions of SQL
Server.
BACKUP supports the RESTART option to provide backward compatibility with earlier versions of SQL Server. But
RESTART has no effect.
General remarks
Database or log backups can be appended to any disk or tape device, allowing a database and its transaction logs
to be kept within one physical location.
The BACKUP statement is not allowed in an explicit or implicit transaction.
Cross-platform backup operations, even between different processor types, can be performed as long as the
collation of the database is supported by the operating system.
When using backup compression with Transparent Data Encryption (TDE) enabled databases with a single data
file, it is recommended to use a MAXTRANSFERSIZE setting larger than 65536 (64 KB).
Starting with SQL Server 2016 (13.x), this enables an optimized compression algorithm for TDE-encrypted
databases that first decrypts a page, compresses it, and then encrypts it again. If using MAXTRANSFERSIZE = 65536
(64 KB), backup compression with TDE-encrypted databases directly compresses the encrypted pages, and may
not yield good compression ratios. For more information, see Backup Compression for TDE-enabled Databases.
NOTE
There are some cases where the default MAXTRANSFERSIZE is greater than 64K:
When the database has multiple data files created, it uses MAXTRANSFERSIZE > 64K
When performing backup to URL, the default MAXTRANSFERSIZE = 1048576 (1MB)
Even if one of these conditions applies, you must explicitly set MAXTRANSFERSIZE greater than 64K in your backup
command in order to get the new backup compression algorithm.
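A sketch of such a command (the database name and path are illustrative):

```sql
-- Explicitly request a transfer size above 64 KB so the optimized
-- TDE compression algorithm is used.
BACKUP DATABASE TDE_DB
TO DISK = 'Z:\SQLServerBackups\TDE_DB.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 131072;
```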
By default, every successful backup operation adds an entry in the SQL Server error log and in the system event
log. If you back up the log very frequently, these success messages accumulate quickly, resulting in huge error logs
that can make finding other messages difficult. In such cases you can suppress these log entries by using trace flag
3226, if none of your scripts depend on those entries. For more information, see Trace Flags (Transact-SQL).
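For example, the trace flag can be enabled globally for the running instance:

```sql
-- Suppress successful-backup messages in the error log for all
-- sessions (-1 applies the trace flag globally).
DBCC TRACEON (3226, -1);
```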
Interoperability
SQL Server uses an online backup process to allow a database backup while the database is still in use. During a
backup, most operations are possible; for example, INSERT, UPDATE, or DELETE statements are allowed during a
backup operation.
Operations that cannot run during a database or transaction log backup include:
File management operations such as the ALTER DATABASE statement with either the ADD FILE or
REMOVE FILE options.
NOTE
To work around this limitation on-premises, back up to DISK instead of to URL, upload the backup file to blob storage, and then
restore. Restore supports bigger files because a different blob type is used.
Metadata
SQL Server includes the following backup history tables that track backup activity:
backupfile (Transact-SQL )
backupfilegroup (Transact-SQL )
backupmediafamily (Transact-SQL )
backupmediaset (Transact-SQL )
backupset (Transact-SQL )
When a restore is performed, if the backup set was not already recorded in the msdb database, the backup history
tables might be modified.
Security
Beginning with SQL Server 2012 (11.x), the PASSWORD and MEDIAPASSWORD options are discontinued for creating
backups. It is still possible to restore backups created with passwords.
Permissions
BACKUP DATABASE and BACKUP LOG permissions default to members of the sysadmin fixed server role and
the db_owner and db_backupoperator fixed database roles.
Ownership and permission problems on the backup device's physical file can interfere with a backup operation.
SQL Server must be able to read and write to the device; the account under which the SQL Server service runs
must have write permissions. However, sp_addumpdevice, which adds an entry for a backup device in the system
tables, does not check file access permissions. Such problems on the backup device's physical file may not appear
until the physical resource is accessed when the backup or restore is attempted.
Examples
This section contains the following examples:
A. Backing up a complete database
B. Backing up the database and log
C. Creating a full file backup of the secondary filegroups
D. Creating a differential file backup of the secondary filegroups
E. Creating and backing up to a single-family mirrored media set
F. Creating and backing up to a multifamily mirrored media set
G. Backing up to an existing mirrored media set
H. Creating a compressed backup in a new media set
I. Backing up to the Microsoft Azure Blob storage service
NOTE
The backup how-to topics contain additional examples. For more information, see Backup Overview (SQL Server).
NOTE
For a production database, back up the log regularly. Log backups should be frequent enough to provide sufficient
protection against data loss.
NOTE
NOINIT, which is the default, is shown here for clarity.
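A sketch of a database-and-log backup using NOINIT (database name and paths are illustrative):

```sql
-- Back up the database, appending to the existing media set.
BACKUP DATABASE AdventureWorks2012
TO DISK = 'Z:\SQLServerBackups\AdvWorksData.bak'
WITH NOINIT;
GO
-- Back up the log to a separate device, also appending.
BACKUP LOG AdventureWorks2012
TO DISK = 'X:\SQLServerBackups\AdvWorksLog.bak'
WITH NOINIT;
GO
```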
See Also
Backup Devices (SQL Server)
Media Sets, Media Families, and Backup Sets (SQL Server)
Tail-Log Backups (SQL Server)
ALTER DATABASE (Transact-SQL )
DBCC SQLPERF (Transact-SQL )
RESTORE (Transact-SQL )
RESTORE FILELISTONLY (Transact-SQL )
RESTORE HEADERONLY (Transact-SQL )
RESTORE L ABELONLY (Transact-SQL )
RESTORE VERIFYONLY (Transact-SQL )
sp_addumpdevice (Transact-SQL )
sp_configure (Transact-SQL )
sp_helpfile (Transact-SQL )
sp_helpfilegroup (Transact-SQL )
Server Configuration Options (SQL Server)
Piecemeal Restore of Databases With Memory-Optimized Tables
BACKUP CERTIFICATE (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Exports a certificate to a file.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
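In sketch form, the SQL Server syntax, consistent with the arguments described below, is:

```sql
BACKUP CERTIFICATE certname TO FILE = 'path_to_file'
    [ WITH PRIVATE KEY
      (
        FILE = 'path_to_private_key_file',
        ENCRYPTION BY PASSWORD = 'encryption_password'
        [ , DECRYPTION BY PASSWORD = 'decryption_password' ]
      )
    ];
```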
Arguments
path_to_file
Specifies the complete path, including file name, of the file in which the certificate is to be saved. This can be a
local path or a UNC path to a network location. The default is the path of the SQL Server DATA folder.
path_to_private_key_file
Specifies the complete path, including file name, of the file in which the private key is to be saved. This can be a
local path or a UNC path to a network location. The default is the path of the SQL Server DATA folder.
encryption_password
Is the password that is used to encrypt the private key before writing the key to the backup file. The password is
subject to complexity checks.
decryption_password
Is the password that is used to decrypt the private key before backing up the key.
Remarks
If the private key is encrypted with a password in the database, the decryption password must be specified.
When you back up the private key to a file, encryption is required. The password used to protect the backed up
certificate is not the same password that is used to encrypt the private key of the certificate.
To restore a backed up certificate, use the CREATE CERTIFICATE statement.
Permissions
Requires CONTROL permission on the certificate and knowledge of the password that is used to encrypt the
private key. If only the public part of the certificate is backed up, requires some permission on the certificate and
that the caller has not been denied VIEW permission on the certificate.
Examples
A. Exporting a certificate to a file
The following example exports a certificate to a file.
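A sketch of such an export (certificate name, paths, and password are illustrative):

```sql
-- Export the certificate and its private key to files; the private
-- key is encrypted with the supplied password before it is written.
BACKUP CERTIFICATE Sales05Cert
TO FILE = 'c:\storedcerts\sales05cert'
WITH PRIVATE KEY
(
    FILE = 'c:\storedkeys\sales05key',
    ENCRYPTION BY PASSWORD = '997jkhUbhk$w4ez0876hKHJH5gh'
);
GO
```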
See Also
CREATE CERTIFICATE (Transact-SQL )
ALTER CERTIFICATE (Transact-SQL )
DROP CERTIFICATE (Transact-SQL )
BACKUP DATABASE (Parallel Data Warehouse)
5/4/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates a backup of a Parallel Data Warehouse database and stores the backup off the appliance in a user-specified
network location. Use this statement with RESTORE DATABASE (Parallel Data Warehouse) for disaster recovery,
or to copy a database from one appliance to another.
Before you begin, see "Acquire and Configure a Backup Server" in the Parallel Data Warehouse product
documentation.
There are two types of backups in Parallel Data Warehouse. A full database backup is a backup of an entire Parallel
Data Warehouse database. A differential database backup only includes changes made since the last full backup. A
backup of a user database includes database users, and database roles. A backup of the master database includes
logins.
For more information about Parallel Data Warehouse database backups, see "Backup and Restore" in the Parallel
Data Warehouse product documentation.
Transact-SQL Syntax Conventions (Transact-SQL )
Syntax
Create a full backup of a user database or the master database.
BACKUP DATABASE database_name
TO DISK = '\\UNC_path\backup_directory'
[ WITH [ ( ] <with_options> [ ,...n ] [ ) ] ]
[;]
<with_options> ::=
DESCRIPTION = 'text'
| NAME = 'backup_name'
Arguments
database_name
The name of the database on which to create a backup. The database can be the master database or a user
database.
TO DISK = '\\UNC_path\backup_directory'
The network path and directory to which Parallel Data Warehouse will write the backup files. For example,
'\\xxx.xxx.xxx.xxx\backups\2012\Monthly\08.2012.Mybackup'.
The path up to the backup directory must already exist and must be specified as a fully qualified
universal naming convention (UNC) path.
The backup directory, backup_directory, must not exist before running the backup command. Parallel Data
Warehouse will create the backup directory.
The path to the backup directory cannot be a local path and it cannot be a location on any of the Parallel
Data Warehouse appliance nodes.
The maximum length of the UNC path and backup directory name is 200 characters.
The server or host must be specified as an IP address. You cannot specify it as the host or server name.
DESCRIPTION = 'text'
Specifies a textual description of the backup. The maximum length of the text is 255 characters.
The description is stored in the metadata, and will be displayed when the backup header is restored with
RESTORE HEADERONLY.
NAME = 'backup_name'
Specifies the name of the backup. The backup name can be different from the database name.
Names can have a maximum of 128 characters.
Cannot include a path.
Must begin with a letter or number character or an underscore (_). Special characters permitted are the
underscore (_), hyphen (-), or space ( ). Backup names cannot end with a space character.
The statement will fail if backup_name already exists in the specified location.
This name is stored in the metadata, and will be displayed when the backup header is restored with
RESTORE HEADERONLY.
DIFFERENTIAL
Specifies to perform a differential backup of a user database. If omitted, the default is a full database backup.
The name of the differential backup does not need to match the name of the full backup. For keeping track
of the differential and its corresponding full backup, consider using the same name with 'full' or 'diff'
appended.
For example:
BACKUP DATABASE Customer TO DISK = '\\xxx.xxx.xxx.xxx\backups\CustomerFull';
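A differential backup based on that full backup might look like the following; the 'Diff' suffix is a naming convention, not a requirement:

```sql
BACKUP DATABASE Customer
TO DISK = '\\xxx.xxx.xxx.xxx\backups\CustomerDiff'
WITH DIFFERENTIAL,
    DESCRIPTION = 'Differential based on CustomerFull';
```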
Permissions
Requires the BACKUP DATABASE permission or membership in the db_backupoperator fixed database role.
The master database cannot be backed up by a regular user that was added to the db_backupoperator fixed
database role. The master database can only be backed up by sa, the fabric administrator, or members of the
sysadmin fixed server role.
Requires a Windows account that has permission to access, create, and write to the backup directory. You must
also store the Windows account name and password in Parallel Data Warehouse. To add these network credentials
to Parallel Data Warehouse, use the sp_pdw_add_network_credentials (SQL Data Warehouse) stored procedure.
For more information about managing credentials in Parallel Data Warehouse, see the Security section.
Error Handling
BACKUP DATABASE errors under the following conditions:
User permissions are not sufficient to perform a backup.
Parallel Data Warehouse does not have the correct permissions to the network location where the backup
will be stored.
The database does not exist.
The target directory already exists on the network share.
The target network share is not available.
The target network share does not have enough space for the backup. The BACKUP DATABASE command
does not confirm that sufficient disk space exists prior to initiating the backup, making it possible to
generate an out-of-disk-space error while running BACKUP DATABASE. When insufficient disk space
occurs, Parallel Data Warehouse rolls back the BACKUP DATABASE command. To decrease the size of your
database, run DBCC SHRINKLOG (Azure SQL Data Warehouse)
Attempt to start a backup within a transaction.
General Remarks
Before you perform a database backup, use DBCC SHRINKLOG (Azure SQL Data Warehouse) to decrease the size
of your database.
A Parallel Data Warehouse backup is stored as a set of multiple files within the same directory.
A differential backup usually takes less time than a full backup and can be performed more frequently. When
multiple differential backups are based on the same full backup, each differential includes all of the changes in the
previous differential backups.
If you cancel a BACKUP command, Parallel Data Warehouse will remove the target directory and any files created
for the backup. If Parallel Data Warehouse loses network connectivity to the share, the rollback cannot complete.
Full backups and differential backups are stored in separate directories. Naming conventions are not enforced for
specifying that a full backup and differential backup belong together. You can track this through your own naming
conventions. Alternatively, you can track this by using the WITH DESCRIPTION option to add a description, and
then by using the RESTORE HEADERONLY statement to retrieve the description.
Metadata
These dynamic management views contain information about all backup, restore, and load operations. The
information persists across system restarts.
sys.pdw_loader_backup_runs (Transact-SQL )
sys.pdw_loader_backup_run_details (Transact-SQL )
sys.pdw_loader_run_stages (Transact-SQL )
Performance
To perform a backup, Parallel Data Warehouse first backs up the metadata, and then it performs a parallel backup
of the database data stored on the Compute nodes. Data is copied directly from each Compute node to the
backup directory. To achieve the best performance for moving data from the Compute nodes to the backup
directory, Parallel Data Warehouse controls the number of Compute nodes that are copying data concurrently.
Locking
Takes an ExclusiveUpdate lock on the DATABASE object.
Security
Parallel Data Warehouse backups are not stored on the appliance. Therefore, your IT team is responsible for
managing all aspects of the backup security. For example, this includes managing the security of the backup data,
the security of the server used to store backups, and the security of the networking infrastructure that connects the
backup server to the Parallel Data Warehouse appliance.
Manage Network Credentials
Network access to the backup directory is based on standard Windows file sharing security. Before performing a
backup, you need to create or designate a Windows account that will be used for authenticating Parallel Data
Warehouse to the backup directory. This Windows account must have permission to access, create, and write to the
backup directory.
IMPORTANT
To reduce security risks with your data, we advise that you designate one Windows account solely for the purpose of
performing backup and restore operations. Allow this account to have permissions to the backup location and nowhere else.
You need to store the user name and password in Parallel Data Warehouse by running the
sp_pdw_add_network_credentials (SQL Data Warehouse) stored procedure. Parallel Data Warehouse uses
Windows Credential Manager to store and encrypt user names and passwords on the Control node and Compute
nodes. The credentials are not backed up with the BACKUP DATABASE command.
To remove network credentials from Parallel Data Warehouse, see sp_pdw_remove_network_credentials (SQL
Data Warehouse).
To list all of the network credentials stored in Parallel Data Warehouse, use the sys.dm_pdw_network_credentials
(Transact-SQL ) dynamic management view.
Examples
A. Add network credentials for the backup location
To create a backup, Parallel Data Warehouse must have read/write permission to the backup directory. The
following example shows how to add the credentials for a user. Parallel Data Warehouse will store these credentials
and use them for backup and restore operations.
IMPORTANT
For security reasons, we recommend creating one domain account solely for the purpose of performing backups.
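A sketch of the call (IP address, account, and password are illustrative placeholders):

```sql
-- Store the Windows account that Parallel Data Warehouse uses to
-- authenticate to the backup server.
EXEC sp_pdw_add_network_credentials
    'xxx.xxx.xxx.xxx',
    'domain1\backupuser',
    '*****';
```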
See Also
RESTORE DATABASE (Parallel Data Warehouse)
BACKUP MASTER KEY (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Exports the database master key.
Transact-SQL Syntax Conventions
Syntax
BACKUP MASTER KEY TO FILE = 'path_to_file'
ENCRYPTION BY PASSWORD = 'password'
Arguments
FILE ='path_to_file'
Specifies the complete path, including file name, to the file to which the master key will be exported. This may be a
local path or a UNC path to a network location.
PASSWORD ='password'
Is the password used to encrypt the master key in the file. This password is subject to complexity checks. For more
information, see Password Policy.
Remarks
The master key must be open and, therefore, decrypted before it is backed up. If it is encrypted with the service
master key, the master key does not have to be explicitly opened. But if the master key is encrypted only with a
password, it must be explicitly opened.
We recommend that you back up the master key as soon as it is created, and store the backup in a secure, off-site
location.
Permissions
Requires CONTROL permission on the database.
Examples
The following example creates a backup of the AdventureWorks2012 master key. Because this master key is not
encrypted by the service master key, a password must be specified when it is opened.
USE AdventureWorks2012;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'sfj5300osdVdgwdfkli7';
BACKUP MASTER KEY TO FILE = 'c:\temp\exportedmasterkey'
ENCRYPTION BY PASSWORD = 'sd092735kjn$&adsg';
GO
See Also
CREATE MASTER KEY (Transact-SQL )
OPEN MASTER KEY (Transact-SQL )
CLOSE MASTER KEY (Transact-SQL )
RESTORE MASTER KEY (Transact-SQL )
ALTER MASTER KEY (Transact-SQL )
DROP MASTER KEY (Transact-SQL )
Encryption Hierarchy
BACKUP SERVICE MASTER KEY (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Exports the service master key.
Transact-SQL Syntax Conventions
Syntax
BACKUP SERVICE MASTER KEY TO FILE = 'path_to_file'
ENCRYPTION BY PASSWORD = 'password'
Arguments
FILE ='path_to_file'
Specifies the complete path, including file name, to the file to which the service master key will be exported. This
may be a local path or a UNC path to a network location.
PASSWORD ='password'
Is the password used to encrypt the service master key in the backup file. This password is subject to complexity
checks. For more information, see Password Policy.
Remarks
The service master key should be backed up and stored in a secure, off-site location. Creating this backup should
be one of the first administrative actions performed on the server.
Permissions
Requires CONTROL SERVER permission on the server.
Examples
In the following example, the service master key is backed up to a file.
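A sketch of the statement (path and password are illustrative):

```sql
-- Back up the service master key; the password encrypts the key
-- in the backup file and is subject to complexity checks.
BACKUP SERVICE MASTER KEY
    TO FILE = 'c:\temp_backups\keys\service_master_key'
    ENCRYPTION BY PASSWORD = '3dH85Hhk003GHk2597gheij4';
```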
See Also
ALTER SERVICE MASTER KEY (Transact-SQL )
RESTORE SERVICE MASTER KEY (Transact-SQL )
RESTORE Statements (Transact-SQL)
5/4/2018 • 24 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Restores backups taken using the BACKUP command. This command enables you to perform the following
restore scenarios:
Restore an entire database from a full database backup (a complete restore).
Restore part of a database (a partial restore).
Restore specific files or filegroups to a database (a file restore).
Restore specific pages to a database (a page restore).
Restore a transaction log onto a database (a transaction log restore).
Revert a database to the point in time captured by a database snapshot.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL
Database Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
For more information about SQL Server restore scenarios, see Restore and Recovery Overview (SQL
Server). For more information about descriptions of the arguments, see RESTORE Arguments (Transact-
SQL ). When restoring a database from another instance, consider the information from Manage Metadata
When Making a Database Available on Another Server Instance (SQL Server).
NOTE: For more information about restoring from the Windows Azure Blob storage service, see SQL
Server Backup and Restore with Microsoft Azure Blob Storage Service.
Syntax
--To Restore an Entire Database from a Full database backup (a Complete Restore):
RESTORE DATABASE { database_name | @database_name_var }
[ FROM <backup_device> [ ,...n ] ]
[ WITH
{
[ RECOVERY | NORECOVERY | STANDBY =
{standby_file_name | @standby_file_name_var }
]
| , <general_WITH_options> [ ,...n ]
| , <replication_WITH_option>
| , <change_data_capture_WITH_option>
| , <FILESTREAM_WITH_option>
| , <service_broker_WITH_options>
| , <point_in_time_WITH_options-RESTORE_DATABASE>
} [ ,...n ]
]
[;]
<backup_device>::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK -- Does not apply to SQL Database Managed Instance
| TAPE -- Does not apply to SQL Database Managed Instance
| URL -- Applies to SQL Server and SQL Database Managed Instance
} = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}
Note: URL is the format used to specify the location and the file name for the Windows Azure Blob.
Although Windows Azure storage is a service, the implementation is similar to disk and tape to allow for
a consistent and seamless restore experience for all three devices.
<files_or_filegroups>::=
{
{
FILE = { logical_file_name_in_backup | @logical_file_name_in_backup_var }
| FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
| READ_WRITE_FILEGROUPS
}
--Monitoring Options
| STATS [ = percentage ]
<replication_WITH_option>::=
| KEEP_REPLICATION
<change_data_capture_WITH_option>::=
| KEEP_CDC
<FILESTREAM_WITH_option>::=
| FILESTREAM ( DIRECTORY_NAME = directory_name )
<service_broker_WITH_options>::=
| ENABLE_BROKER
| ERROR_BROKER_CONVERSATIONS
| NEW_BROKER
<point_in_time_WITH_options-RESTORE_DATABASE>::=
| {
STOPAT = { 'datetime'| @datetime_var }
| STOPATMARK = 'lsn:lsn_number'
[ AFTER 'datetime']
| STOPBEFOREMARK = 'lsn:lsn_number'
[ AFTER 'datetime']
}
<point_in_time_WITH_options-RESTORE_LOG>::=
| {
STOPAT = { 'datetime'| @datetime_var }
| STOPATMARK = { 'mark_name' | 'lsn:lsn_number' }
[ AFTER 'datetime']
| STOPBEFOREMARK = { 'mark_name' | 'lsn:lsn_number' }
[ AFTER 'datetime']
}
Arguments
For descriptions of the arguments, see RESTORE Arguments (Transact-SQL).
Where online restore is supported, if the database is online, file restores and page restores are
automatically performed online, as are restores of a secondary filegroup after the initial stage of a
piecemeal restore.
RESTORE LOG
RESTORE LOG can include a file list to allow for creation of files during roll forward. This is used when the
log backup contains log records written when a file was added to the database.
NOTE: For a database using the full or bulk-logged recovery model, in most cases you must back up
the tail of the log before restoring the database. Restoring a database without first backing up the tail of
the log results in an error, unless the RESTORE DATABASE statement contains either the WITH
REPLACE or the WITH STOPAT clause, which must specify a time or transaction that occurred after the
end of the data backup. For more information about tail-log backups, see Tail-Log Backups (SQL
Server).
Compatibility Support
Backups of master, model and msdb that were created by using an earlier version of SQL Server cannot
be restored by SQL Server 2017.
NOTE: No SQL Server backup can be restored to an earlier version of SQL Server than the version on
which the backup was created.
Each version of SQL Server uses a different default path than earlier versions. Therefore, to restore a
database that was created in the default location for earlier version backups, you must use the MOVE
option. For information about the new default path, see File Locations for Default and Named Instances of
SQL Server.
After you restore an earlier version database to SQL Server 2017, the database is automatically upgraded.
Typically, the database becomes available immediately. However, if a SQL Server 2005 database has full-
text indexes, the upgrade process either imports, resets, or rebuilds them, depending on the setting of the
upgrade_option server property. If the upgrade option is set to import (upgrade_option = 2) or rebuild
(upgrade_option = 0), the full-text indexes will be unavailable during the upgrade. Depending on the amount
of data being indexed, importing can take several hours, and rebuilding can take up to ten times longer.
Note also that when the upgrade option is set to import, the associated full-text indexes are rebuilt if a full-
text catalog is not available. To change the setting of the upgrade_option server property, use
sp_fulltext_service.
When a database is first attached or restored to a new instance of SQL Server, a copy of the database
master key (encrypted by the service master key) is not yet stored in the server. You must use the OPEN
MASTER KEY statement to decrypt the database master key (DMK). Once the DMK has been decrypted,
you have the option of enabling automatic decryption in the future by using the ALTER MASTER KEY
REGENERATE statement to provision the server with a copy of the DMK, encrypted with the service
master key (SMK). When a database has been upgraded from an earlier version, the DMK should be
regenerated to use the newer AES algorithm. For more information about regenerating the DMK, see
ALTER MASTER KEY (Transact-SQL). The time required to regenerate the DMK to upgrade to AES
depends upon the number of objects protected by the DMK. Regenerating the DMK to upgrade to AES
is only necessary once, and has no impact on future regenerations as part of a key rotation strategy.
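The sequence described above can be sketched as follows; the database name follows the other examples in this topic, and the passwords are illustrative placeholders:

```sql
USE AdventureWorks2012;
GO
-- Decrypt the database master key with the password it was encrypted by.
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'OldDmkPassword!1';   -- placeholder
-- Provision this server with a copy of the DMK encrypted by its service
-- master key, enabling automatic decryption in the future.
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
GO
-- For a database upgraded from an earlier version, regenerate the DMK so
-- that it uses the newer AES algorithm.
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'NewDmkPassword!1';  -- placeholder
GO
```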
General Remarks
During an offline restore, if the specified database is in use, RESTORE forces the users off after a short
delay. For online restore of a non-primary filegroup, the database can stay in use except when the filegroup
being restored is being taken offline. Any data in the specified database is replaced by the restored data.
For more information about database recovery, see Restore and Recovery Overview (SQL Server).
Cross-platform restore operations, even between different processor types, can be performed as long as the
collation of the database is supported by the operating system.
RESTORE can be restarted after an error. In addition, you can instruct RESTORE to continue despite errors,
and it restores as much data as possible (see the CONTINUE_AFTER_ERROR option).
RESTORE is not allowed in an explicit or implicit transaction.
Restoring a damaged master database is performed using a special procedure. For more information, see
Back Up and Restore of System Databases (SQL Server).
Restoring a database clears the plan cache for the instance of SQL Server. Clearing the plan cache causes a
recompilation of all subsequent execution plans and can cause a sudden, temporary decrease in query
performance. For each cleared cachestore in the plan cache, the SQL Server error log contains the following
informational message: " SQL Server has encountered %d occurrence(s) of cachestore flush for the '%s'
cachestore (part of plan cache) due to some database maintenance or reconfigure operations". This
message is logged every five minutes as long as the cache is flushed within that time interval.
To restore an availability database, first restore the database to the instance of SQL Server, and then add the
database to the availability group.
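As a sketch on the primary replica (the database name, backup path, and availability group name MyAG are assumptions):

```sql
-- Restore and recover the database on the instance hosting the primary replica.
RESTORE DATABASE SalesDb
    FROM DISK = 'Z:\SQLServerBackups\SalesDb.bak'
    WITH RECOVERY;
-- Then add the recovered database to the availability group.
ALTER AVAILABILITY GROUP MyAG ADD DATABASE SalesDb;
```

On a secondary replica the database would instead be restored WITH NORECOVERY before joining it to the group.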
Interoperability
Database Settings and Restoring
During a restore, most of the database options that are settable using ALTER DATABASE are reset to the
values in force at the time of the end of backup.
Using the WITH RESTRICTED_USER option, however, overrides this behavior for the user access option
setting. This setting is always set following a RESTORE statement, which includes the WITH
RESTRICTED_USER option.
Restoring an Encrypted Database
To restore a database that is encrypted, you must have access to the certificate or asymmetric key that was
used to encrypt the database. Without the certificate or asymmetric key, the database cannot be restored. As
a result, the certificate that is used to encrypt the database encryption key must be retained as long as the
backup is needed. For more information, see SQL Server Certificates and Asymmetric Keys.
Restoring a Database Enabled for vardecimal Storage
Backup and restore work correctly with the vardecimal storage format. For more information about
vardecimal storage format, see sp_db_vardecimal_storage_format (Transact-SQL ).
Restore Full-Text Data
Full-text data is restored together with other database data during a complete restore. Using the regular
RESTORE DATABASE database_name FROM backup_device syntax, the full-text files are restored as part of the
database file restore.
The RESTORE statement also can be used to perform restores to alternate locations, differential restores,
file and filegroup restores, and differential file and filegroup restores of full-text data. In addition, RESTORE
can restore full-text files alone or together with database data.
NOTE: Full-text catalogs imported from SQL Server 2005 are still treated as database files. For these,
the SQL Server 2005 procedure for backing up full-text catalogs remains applicable, except that
pausing and resuming during the backup operation are no longer necessary. For more information, see
Backing Up and Restoring Full-Text Catalogs.
Metadata
SQL Server includes backup and restore history tables that track the backup and restore activity for each
server instance. When a restore is performed, the backup history tables are also modified. For information
on these tables, see Backup History and Header Information (SQL Server).
Redoing a Restore
Undoing the effects of a restore is not possible; however, you can negate the effects of the data copy and roll
forward by starting over on a per-file basis. To start over, restore the desired file and perform the roll
forward again. For example, if you accidentally restored too many log backups and overshot your intended
stopping point, you would have to restart the sequence.
A restore sequence can be aborted and restarted by restoring the entire contents of the affected files.
Reverting a Database to a Database Snapshot
A revert database operation (specified using the DATABASE_SNAPSHOT option) takes a full source
database back in time by reverting it to the time of a database snapshot, that is, overwriting the source
database with data from the point in time maintained in the specified database snapshot. Only the snapshot
to which you are reverting can currently exist. The revert operation then rebuilds the log (therefore, you
cannot later roll forward a reverted database to the point of user error).
Data loss is confined to updates to the database since the snapshot's creation. The metadata of a reverted
database is the same as the metadata at the time of snapshot creation. However, reverting to a snapshot
drops all the full-text catalogs.
Reverting from a database snapshot is not intended for media recovery. Unlike a regular backup set, the
database snapshot is an incomplete copy of the database files. If either the database or the database
snapshot is corrupted, reverting from a snapshot is likely to be impossible. Furthermore, even when
possible, reverting in the event of corruption is unlikely to correct the problem.
Restrictions on Reverting
Reverting is unsupported under the following conditions:
The source database contains any read-only or compressed filegroups.
Any files are offline that were online when the snapshot was created.
More than one snapshot of the database currently exists.
For more information, see Revert a Database to a Database Snapshot.
Security
A backup operation may optionally specify passwords for a media set, a backup set, or both. When a
password has been defined on a media set or backup set, you must specify the correct password or
passwords in the RESTORE statement. These passwords prevent unauthorized restore operations and
unauthorized appends of backup sets to media using SQL Server tools. However, password-protected
media can be overwritten by the BACKUP statement's FORMAT option.
IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server
tools by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using
this feature in new development work, and plan to modify applications that currently use this feature.The best
practice for protecting backups is to store backup tapes in a secure location or back up to disk files that are protected
by adequate access control lists (ACLs). The ACLs should be set on the directory root under which backups are
created.
For information specific to SQL Server backup and restore with the Windows Azure Blob storage, see SQL Server
Backup and Restore with Microsoft Azure Blob Storage Service.
Permissions
If the database being restored does not exist, the user must have CREATE DATABASE permissions to be
able to execute RESTORE. If the database exists, RESTORE permissions default to members of the
sysadmin and dbcreator fixed server roles and the owner (dbo) of the database (for the FROM
DATABASE_SNAPSHOT option, the database always exists).
RESTORE permissions are given to roles in which membership information is always readily available to
the server. Because fixed database role membership can be checked only when the database is accessible
and undamaged, which is not always the case when RESTORE is executed, members of the db_owner fixed
database role do not have RESTORE permissions.
Examples
All the examples assume that a full database backup has been performed.
The RESTORE examples include the following:
A. Restoring a full database
B. Restoring full and differential database backups
C. Restoring a database using RESTART syntax
D. Restoring a database and move files
E. Copying a database using BACKUP and RESTORE
F. Restoring to a point-in-time using STOPAT
G. Restoring the transaction log to a mark
H. Restoring using TAPE syntax
I. Restoring using FILE and FILEGROUP syntax
J. Reverting from a database snapshot
K. Restoring from the Microsoft Azure Blob storage service
NOTE: For additional examples, see the restore how-to topics that are listed in Restore and Recovery
Overview (SQL Server).
NOTE: For a database using the full or bulk-logged recovery model, SQL Server requires in most cases
that you back up the tail of the log before restoring the database. For more information, see Tail-Log
Backups (SQL Server).
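A. Restoring a full database
The following example restores a full database backup. As a sketch, it assumes the logical backup device AdventureWorksBackups used in the other examples in this topic:

```sql
RESTORE DATABASE AdventureWorks2012
    FROM AdventureWorksBackups;
```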
[Top of examples]
B. Restoring full and differential database backups
The following example restores a full database backup followed by a differential backup from the
Z:\SQLServerBackups\AdventureWorks2012.bak backup device, which contains both backups. The full database
backup to be restored is the sixth backup set on the device ( FILE = 6 ), and the differential database backup
is the ninth backup set on the device ( FILE = 9 ). As soon as the differential backup is recovered, the
database is recovered.
RESTORE DATABASE AdventureWorks2012
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks2012.bak'
WITH FILE = 6
NORECOVERY;
RESTORE DATABASE AdventureWorks2012
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks2012.bak'
WITH FILE = 9
RECOVERY;
[Top of examples]
C. Restoring a database using RESTART syntax
The following example uses the RESTART option to restart a RESTORE operation interrupted by a server
power failure.
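A sketch of the interrupted-and-restarted sequence (the backup device name is an assumption):

```sql
-- This restore was interrupted by a power failure before it completed.
RESTORE DATABASE AdventureWorks2012
    FROM AdventureWorksBackups;
-- After the server restarts, reissue the statement with RESTART to resume
-- the operation from the point of interruption.
RESTORE DATABASE AdventureWorks2012
    FROM AdventureWorksBackups
    WITH RESTART;
```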
[Top of examples]
D. Restoring a database and move files
The following example restores a full database and transaction log and moves the restored database into
the C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data directory.
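A sketch of the restore; the logical file names, target file names, and log backup device here are assumptions:

```sql
RESTORE DATABASE AdventureWorks2012
    FROM AdventureWorksBackups
    WITH NORECOVERY,
         MOVE 'AdventureWorks2012_Data'
             TO 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data\NewAdvWorks.mdf',
         MOVE 'AdventureWorks2012_Log'
             TO 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data\NewAdvWorks.ldf';
-- Apply the transaction log backup and recover the database.
RESTORE LOG AdventureWorks2012
    FROM AdventureWorksLog
    WITH RECOVERY;
```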
[Top of examples]
E. Copying a database using BACKUP and RESTORE
The following example uses both the BACKUP and RESTORE statements to make a copy of the
AdventureWorks2012 database. The MOVE statement causes the data and log file to be restored to the
specified locations. The RESTORE FILELISTONLY statement is used to determine the number and names of the
files in the database being restored. The new copy of the database is named TestDB . For more information,
see RESTORE FILELISTONLY (Transact-SQL ).
BACKUP DATABASE AdventureWorks2012
TO AdventureWorksBackups ;
RESTORE FILELISTONLY
FROM AdventureWorksBackups ;
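The copy itself is then restored under the new name, moving the files to new physical locations. The logical file names and target paths below are assumptions; substitute the names reported by RESTORE FILELISTONLY:

```sql
RESTORE DATABASE TestDB
    FROM AdventureWorksBackups
    WITH MOVE 'AdventureWorks2012_Data' TO 'C:\MySQLServer\testdb.mdf',
         MOVE 'AdventureWorks2012_Log'  TO 'C:\MySQLServer\testdb.ldf';
GO
```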
[Top of examples]
F. Restoring to a point-in-time using STOPAT
The following example restores a database to its state as of 12:00 AM on April 15, 2020 and shows a
restore operation that involves multiple log backups. On the backup device, AdventureWorksBackups , the full
database backup to be restored is the third backup set on the device ( FILE = 3 ), the first log backup is the
fourth backup set ( FILE = 4 ), and the second log backup is the fifth backup set ( FILE = 5 ).
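A sketch of the sequence (the backup device name is an assumption):

```sql
-- Restore the full database backup, leaving the database unrecovered.
RESTORE DATABASE AdventureWorks2012
    FROM AdventureWorksBackups
    WITH FILE = 3, NORECOVERY;
-- Apply the two log backups, stopping at the target point in time.
RESTORE LOG AdventureWorks2012
    FROM AdventureWorksBackups
    WITH FILE = 4, NORECOVERY, STOPAT = 'Apr 15, 2020 12:00 AM';
RESTORE LOG AdventureWorks2012
    FROM AdventureWorksBackups
    WITH FILE = 5, NORECOVERY, STOPAT = 'Apr 15, 2020 12:00 AM';
-- Recover the database.
RESTORE DATABASE AdventureWorks2012 WITH RECOVERY;
```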
[Top of examples]
G. Restoring the transaction log to a mark
The following example restores the transaction log to the mark in the marked transaction named
ListPriceUpdate .
USE AdventureWorks2012
GO
BEGIN TRANSACTION ListPriceUpdate
WITH MARK 'UPDATE Product list prices';
GO
UPDATE Production.Product
SET ListPrice = ListPrice * 1.10
WHERE ProductNumber LIKE 'BK-%';
GO
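After a failure, the database and log can be restored to that mark. In this sketch, the backup device name and backup-set positions are assumptions:

```sql
RESTORE DATABASE AdventureWorks2012
    FROM AdventureWorksBackups
    WITH FILE = 3, NORECOVERY;
GO
-- Roll forward to the named mark and recover.
RESTORE LOG AdventureWorks2012
    FROM AdventureWorksBackups
    WITH FILE = 4,
         RECOVERY,
         STOPATMARK = 'ListPriceUpdate';
```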
[Top of examples]
H. Restoring using TAPE syntax
The following example restores a full database backup from a TAPE backup device.
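A sketch, assuming the tape device \\.\tape0:

```sql
RESTORE DATABASE AdventureWorks2012
    FROM TAPE = '\\.\tape0';
```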
[Top of examples]
I. Restoring using FILE and FILEGROUP syntax
The following example restores a database named MyDatabase that has two files, one secondary filegroup,
and one transaction log. The database uses the full recovery model.
The database backup is the ninth backup set in the media set on a logical backup device named
MyDatabaseBackups . Next, three log backups, which are in the next three backup sets ( 10 , 11 , and 12 ) on
the MyDatabaseBackups device, are restored by using WITH NORECOVERY . After restoring the last log backup,
the database is recovered.
NOTE: Recovery is performed as a separate step to reduce the possibility of recovering too early,
before all of the log backups have been restored.
In the RESTORE DATABASE , notice that there are two types of FILE options. The FILE options preceding the
backup device name specify the logical file names of the database files that are to be restored from the
backup set; for example, FILE = 'MyDatabase_data_1' . This backup set is not the first database backup in the
media set; therefore, its position in the media set is indicated by using the FILE option in the WITH clause,
FILE=9 .
RESTORE DATABASE MyDatabase
FILE = 'MyDatabase_data_1',
FILE = 'MyDatabase_data_2',
FILEGROUP = 'new_customers'
FROM MyDatabaseBackups
WITH
FILE = 9,
NORECOVERY;
GO
-- Restore the log backups.
RESTORE LOG MyDatabase
FROM MyDatabaseBackups
WITH FILE = 10,
NORECOVERY;
GO
RESTORE LOG MyDatabase
FROM MyDatabaseBackups
WITH FILE = 11,
NORECOVERY;
GO
RESTORE LOG MyDatabase
FROM MyDatabaseBackups
WITH FILE = 12,
NORECOVERY;
GO
--Recover the database:
RESTORE DATABASE MyDatabase WITH RECOVERY;
GO
[Top of examples]
J. Reverting from a database snapshot
The following example reverts a database to a database snapshot. The example assumes that only one
snapshot currently exists on the database. For an example of how to create this database snapshot, see
Create a Database Snapshot (Transact-SQL ).
USE master;
RESTORE DATABASE AdventureWorks2012 FROM DATABASE_SNAPSHOT = 'AdventureWorks_dbss1800';
GO
K2. Restore a full database backup from the Microsoft Azure storage service to local storage
A full database backup, located at mysecondcontainer , of Sales will be restored to local storage. Sales
does not currently exist on the server.
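A sketch of the restore. The storage account name, blob name, logical file names, and target paths are assumptions, and a SQL Server credential matching the container URL (or a SHARED ACCESS SIGNATURE credential) is assumed to already exist:

```sql
RESTORE DATABASE Sales
    FROM URL = 'https://mystorageaccount.blob.core.windows.net/mysecondcontainer/Sales.bak'
    WITH MOVE 'Sales_Data' TO 'H:\DATA\Sales.mdf',
         MOVE 'Sales_Log'  TO 'H:\LOG\Sales.ldf',
         STATS = 10;
```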
K3. Restore a full database backup from local storage to the Microsoft Azure storage service
[Top of examples]
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
This section describes the RESTORE statements for backups. In addition to the main RESTORE {DATABASE | LOG }
statement for restoring and recovering backups, a number of auxiliary RESTORE statements help you manage
your backups and plan your restore sequences. The auxiliary RESTORE commands include: RESTORE
FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, RESTORE REWINDONLY, and RESTORE
VERIFYONLY.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
IMPORTANT
In previous versions of SQL Server, any user could obtain information about backup sets and backup devices by using the
RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and RESTORE VERIFYONLY Transact-SQL statements.
Because they reveal information about the content of the backup files, in SQL Server 2008 and later versions these
statements require CREATE DATABASE permission. This requirement secures your backup files and protects your backup
information more fully than in previous versions. For information about this permission, see GRANT Database Permissions
(Transact-SQL).
In This Section
STATEMENT DESCRIPTION
RESTORE (Transact-SQL) Describes the RESTORE DATABASE and RESTORE LOG Transact-
SQL statements used to restore and recover a database from
backups taken using the BACKUP command. RESTORE
DATABASE is used for databases under all recovery models.
RESTORE LOG is used only under the full and bulk-logged
recovery models. RESTORE DATABASE can also be used to
revert a database to a database snapshot.
RESTORE Arguments (Transact-SQL) Documents the arguments described in the "Syntax" sections
of the RESTORE statement and of the associated set of
auxiliary statements: RESTORE FILELISTONLY, RESTORE
HEADERONLY, RESTORE LABELONLY, RESTORE REWINDONLY,
and RESTORE VERIFYONLY. Most of the arguments are
supported by only a subset of these six statements. The
support for each argument is indicated in the description of
the argument.
STATEMENT DESCRIPTION
See Also
Back Up and Restore of SQL Server Databases
RESTORE DATABASE (Parallel Data Warehouse)
5/4/2018 • 7 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Restores a Parallel Data Warehouse user database from a database backup to a Parallel Data Warehouse
appliance. The database is restored from a backup that was previously created by the Parallel Data
Warehouse BACKUP DATABASE (Parallel Data Warehouse) command. Use the backup and restore operations to
build a disaster recovery plan, or to move databases from one appliance to another.
NOTE
Restoring master includes restoring appliance login information. To restore master, use the Restore the master Database
(Transact-SQL) page in the Configuration Manager tool. An administrator with access to the Control node can perform this
operation.
For more information about Parallel Data Warehouse database backups, see "Backup and Restore" in the Parallel
Data Warehouse product documentation.
Transact-SQL Syntax Conventions (Transact-SQL)
Syntax
Restore the master database
-- Use the Configuration Manager tool.
Arguments
RESTORE DATABASE database_name
Specifies to restore a user database to a database called database_name. The restored database can have a
different name than the source database that was backed up. database_name cannot already exist as a database
on the destination appliance. For more details on permitted database names, see "Object Naming Rules" in the
Parallel Data Warehouse product documentation.
Restoring a user database restores a full database backup and then optionally restores a differential backup to the
appliance. A restore of a user database includes restoring database users, and database roles.
FROM DISK = '\\UNC_path\backup_directory'
The network path and directory from which Parallel Data Warehouse will restore the backup files. For example,
FROM DISK = '\\xxx.xxx.xxx.xxx\backups\2012\Monthly\08.2012.Mybackup'.
backup_directory
Specifies the name of a directory that contains the full or differential backup. For example, you can perform a
RESTORE HEADERONLY operation on a full or differential backup.
full_backup_directory
Specifies the name of a directory that contains the full backup.
differential_backup_directory
Specifies the name of the directory that contains the differential backup.
The path and backup directory must already exist and must be specified as a fully qualified universal
naming convention (UNC ) path.
The path to the backup directory cannot be a local path and it cannot be a location on any of the Parallel
Data Warehouse appliance nodes.
The maximum length of the UNC path and backup directory name is 200 characters.
The server or host must be specified as an IP address.
RESTORE HEADERONLY
Specifies to return only the header information for one user database backup. Among other fields, the
header includes the text description of the backup, and the backup name. The backup name does not need
to be the same as the name of the directory that stores the backup files.
RESTORE HEADERONLY results are patterned after the SQL Server RESTORE HEADERONLY results.
The result has over 50 columns, which are not all used by Parallel Data Warehouse. For a description of the
columns in the SQL Server RESTORE HEADERONLY results, see RESTORE HEADERONLY (Transact-
SQL ).
Permissions
Requires the CREATE ANY DATABASE permission.
Requires a Windows account that has permission to access and read from the backup directory. You must also
store the Windows account name and password in Parallel Data Warehouse.
1. To verify the credentials are already there, use sys.dm_pdw_network_credentials (Transact-SQL ).
2. To add or update the credentials, use sp_pdw_add_network_credentials (SQL Data Warehouse).
3. To remove credentials from Parallel Data Warehouse, use sp_pdw_remove_network_credentials (SQL Data
Warehouse).
Error Handling
The RESTORE DATABASE command results in errors under the following conditions:
The name of the database to restore already exists on the target appliance. To avoid this, choose a unique
database name, or drop the existing database before running the restore.
There is an invalid set of backup files in the backup directory.
The login permissions are not sufficient to restore a database.
Parallel Data Warehouse does not have the correct permissions to the network location where the backup
files are located.
The network location for the backup directory does not exist, or is not available.
There is insufficient disk space on the Compute nodes or Control node. Parallel Data Warehouse does not
confirm that sufficient disk space exists on the appliance before initiating the restore. Therefore, it is
possible to generate an out-of-disk-space error while running the RESTORE DATABASE statement. When
insufficient disk space occurs, Parallel Data Warehouse rolls back the restore.
The target appliance to which the database is being restored has fewer Compute nodes than the source
appliance from which the database was backed up.
The database restore is attempted from within a transaction.
General Remarks
Parallel Data Warehouse tracks the success of database restores. Before restoring a differential database backup,
Parallel Data Warehouse verifies the full database restore finished successfully.
After a restore, the user database will have database compatibility level 120. This is true for all databases
regardless of their original compatibility level.
Restoring to an Appliance With a Larger Number of Compute Nodes
Run DBCC SHRINKLOG (Azure SQL Data Warehouse) after restoring a database from a smaller to a larger
appliance, because redistribution increases the size of the transaction log.
Restoring a backup to an appliance with a larger number of Compute nodes grows the allocated database size in
proportion to the number of Compute nodes.
For example, when restoring a 60 GB database from a 2-node appliance (30 GB per node) to a 6-node appliance,
Parallel Data Warehouse creates a 180 GB database (6 nodes with 30 GB per node) on the 6-node appliance.
Parallel Data Warehouse initially restores the database to 2 nodes to match the source configuration, and then
redistributes the data to all 6 nodes.
After the redistribution each Compute node will contain less actual data and more free space than each Compute
node on the smaller source appliance. Use the additional space to add more data to the database. If the restored
database size is larger than you need, you can use ALTER DATABASE (Parallel Data Warehouse) to shrink the
database file sizes.
Examples
A. Simple RESTORE examples
The following example restores a full backup to the SalesInvoices2013 database. The backup files are stored in the
\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full directory. The SalesInvoices2013 database cannot already exist
on the target appliance or this command will fail with an error.
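The restore itself might look like this (a sketch consistent with the arguments above):

```sql
RESTORE DATABASE SalesInvoices2013
    FROM DISK = '\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full';
```

The RESTORE HEADERONLY statement that follows inspects the same backup directory before or instead of restoring it.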
RESTORE HEADERONLY
FROM DISK = '\\xxx.xxx.xxx.xxx\backups\yearly\Invoices2013Full'
;
You can use the header information to check the contents of a backup, or to make sure the target restoration
appliance is compatible with the source backup appliance before attempting to restore the backup.
See Also
BACKUP DATABASE (Parallel Data Warehouse)
RESTORE Statements - Arguments (Transact-SQL)
5/4/2018 • 32 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
This topic documents the arguments that are described in the Syntax sections of the RESTORE
{DATABASE|LOG } statement and of the associated set of auxiliary statements: RESTORE FILELISTONLY,
RESTORE HEADERONLY, RESTORE LABELONLY, RESTORE REWINDONLY, and RESTORE VERIFYONLY.
Most of the arguments are supported by only a subset of these six statements. The support for each argument is
indicated in the description of the argument.
Transact-SQL Syntax Conventions
Syntax
For syntax, see the following topics:
RESTORE (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
Arguments
DATABASE
Supported by: RESTORE
Specifies the target database. If a list of files and filegroups is specified, only those files and filegroups are
restored.
For a database using the full or bulk-logged recovery model, SQL Server requires in most cases that you back
up the tail of the log before restoring the database. Restoring a database without first backing up the tail of the
log results in an error, unless the RESTORE DATABASE statement contains either the WITH REPLACE or the
WITH STOPAT clause, which must specify a time or transaction that occurred after the end of the data backup.
For more information about tail-log backups, see Tail-Log Backups (SQL Server).
LOG
Supported by: RESTORE
Specifies that a transaction log backup is to be applied to this database. Transaction logs must be applied in
sequential order. SQL Server checks the backed up transaction log to ensure that the transactions are being
loaded into the correct database and in the correct sequence. To apply multiple transaction logs, use the
NORECOVERY option on all restore operations except the last.
NOTE
Typically, the last log restored is the tail-log backup. A tail-log backup is a log backup taken right before restoring a
database, typically after a failure on the database. Taking a tail-log backup from the possibly damaged database prevents
work loss by capturing the log that has not yet been backed up (the tail of the log). For more information, see Tail-Log
Backups (SQL Server).
For more information, see Apply Transaction Log Backups (SQL Server).
{ database_name | @database_name_var}
Supported by: RESTORE
Is the database that the log or complete database is restored into. If supplied as a variable
(@database_name_var), this name can be specified either as a string constant (@database_name_var =
database_name) or as a variable of character string data type, except for the ntext or text data types.
<file_or_filegroup_or_page> [ ,...n ]
Supported by: RESTORE
Specifies the name of a logical file or filegroup or page to include in a RESTORE DATABASE or RESTORE LOG
statement. You can specify a list of files or filegroups.
For a database that uses the simple recovery model, the FILE and FILEGROUP options are allowed only if the
target files or filegroups are read only, or if this is a PARTIAL restore (which results in a defunct filegroup).
For a database that uses the full or bulk-logged recovery model, after using RESTORE DATABASE to restore one
or more files, filegroups, and/or pages, typically, you must apply the transaction log to the files containing the
restored data; applying the log makes those files consistent with the rest of the database. The exceptions to this
are as follows:
If the files being restored were read-only before they were last backed up—then a transaction log does
not have to be applied, and the RESTORE statement informs you of this situation.
If the backup contains the primary filegroup and a partial restore is being performed. In this case, the
restore log is not needed because the log is restored automatically from the backup set.
FILE = { logical_file_name_in_backup| @logical_file_name_in_backup_var}
Names a file to include in the database restore.
FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
Names a filegroup to include in the database restore.
NOTE
FILEGROUP is allowed in simple recovery model only if the specified filegroup is read-only and this is a
partial restore (that is, if WITH PARTIAL is used). Any unrestored read-write filegroups are marked as defunct
and cannot subsequently be restored into the resulting database.
READ_WRITE_FILEGROUPS
Selects all read-write filegroups. This option is particularly useful when you want to restore the read-write
filegroups before any read-only filegroups.
PAGE = 'file:page [ ,...n ]'
Specifies a list of one or more pages for a page restore (which is supported only for databases using the full or
bulk-logged recovery models). The values are as follows:
PAGE
Indicates a list of one or more files and pages.
file
Is the file ID of the file containing a specific page to be restored.
page
Is the page ID of the page to be restored in the file.
n
Is a placeholder indicating that multiple pages can be specified.
The maximum number of pages that can be restored into any single file in a restore sequence is 1000. However,
if you have more than a small number of damaged pages in a file, consider restoring the whole file instead of the
pages.
NOTE
Page restores are never recovered.
For more information about page restore, see Restore Pages (SQL Server).
[ ,...n ]
Is a placeholder indicating that multiple files and filegroups and pages can be specified in a comma-separated
list. The number is unlimited.
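A page restore, sketched here with hypothetical file and page IDs, restores the listed pages from a full backup and then applies subsequent log backups to bring the pages up to date:

```sql
-- Restore two damaged pages (file ID 1, pages 57 and 202) from a full backup.
RESTORE DATABASE AdventureWorks
    PAGE = '1:57, 1:202'
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH NORECOVERY;

-- Apply the log backups taken since the full backup.
RESTORE LOG AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks_Log1.trn'
    WITH NORECOVERY;

-- After backing up and restoring the tail of the log, recover the database.
RESTORE DATABASE AdventureWorks WITH RECOVERY;
```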
FROM { <backup_device> [ ,...n ] | <database_snapshot> }
Typically, specifies the backup devices from which to restore the backup. Alternatively, in a RESTORE
DATABASE statement, the FROM clause can specify the name of a database snapshot to which you are reverting
the database, in which case no WITH clause is permitted.
If the FROM clause is omitted, the restore of a backup does not take place. Instead, the database is recovered.
This allows you to recover a database that has been restored with the NORECOVERY option or to switch over to
a standby server. If the FROM clause is omitted, NORECOVERY, RECOVERY, or STANDBY must be specified in
the WITH clause.
<backup_device> [ ,...n ]
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY,
RESTORE REWINDONLY, and RESTORE VERIFYONLY.
Specifies the logical or physical backup devices to use for the restore operation.
<backup_device>::=
Specifies a logical or physical backup device to use for the restore operation, as follows:
{ logical_backup_device_name | @logical_backup_device_name_var }
Is the logical name, which must follow the rules for identifiers, of the backup device(s) created by
sp_addumpdevice from which the database is restored. If supplied as a variable
(@logical_backup_device_name_var), the backup device name can be specified either as a string constant
(@logical_backup_device_name_var = logical_backup_device_name) or as a variable of character string data
type, except for the ntext or text data types.
{DISK | TAPE } = { 'physical_backup_device_name' | @physical_backup_device_name_var }
Allows backups to be restored from the named disk or tape device. The device types of disk and tape should be
specified with the actual name (for example, complete path and file name) of the device:
DISK = 'Z:\SQLServerBackups\AdventureWorks.bak' or TAPE = '\\.\TAPE0'. If specified as a variable
(@physical_backup_device_name_var), the device name can be specified either as a string constant
(@physical_backup_device_name_var = 'physical_backup_device_name') or as a variable of character string data
type, except for the ntext or text data types.
If using a network server with a UNC name (which must contain machine name), specify a device type of disk.
For more information about how to use UNC names, see Backup Devices (SQL Server).
The account under which you are running SQL Server must have READ access to the remote computer or
network server in order to perform a RESTORE operation.
n
Is a placeholder indicating that up to 64 backup devices may be specified in a comma-separated list.
Whether a restore sequence requires as many backup devices as were used to create the media set to which the
backups belong, depends on whether the restore is offline or online, as follows:
Offline restore allows a backup to be restored using fewer devices than were used to create the backup.
Online restore requires all the backup devices of the backup. An attempt to restore with fewer devices
fails.
For example, consider a case in which a database was backed up to four tape drives connected to the
server. An online restore requires that you have four drives connected to the server; an offline restore
allows you to restore the backup if there are fewer than four drives on the machine.
NOTE
When you are restoring a backup from a mirrored media set, you can specify only a single mirror for each media family. In
the presence of errors, however, having the other mirrors enables some restore problems to be resolved quickly. You can
substitute a damaged media volume with the corresponding volume from another mirror. Be aware that for offline restores
you can restore from fewer devices than media families, but each family is processed only once.
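Both device forms can be sketched as follows; the logical device and file path here are hypothetical:

```sql
-- Restore from a logical backup device previously created with sp_addumpdevice.
RESTORE DATABASE AdventureWorks
    FROM AdventureWorksBackupDevice
    WITH RECOVERY;

-- Restore from a physical file on a UNC path; UNC names use device type DISK.
RESTORE DATABASE AdventureWorks
    FROM DISK = '\\BackupServer\Backups\AdventureWorks.bak'
    WITH RECOVERY;
```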
<database_snapshot>::=
Supported by: RESTORE DATABASE
DATABASE_SNAPSHOT =database_snapshot_name
Reverts the database to the database snapshot specified by database_snapshot_name. The
DATABASE_SNAPSHOT option is available only for a full database restore. In a revert operation, the database
snapshot takes the place of a full database backup.
A revert operation requires that the specified database snapshot is the only one on the database. During the
revert operation, the database snapshot and the destination database are both marked as In restore. For
more information, see the "Remarks" section in RESTORE DATABASE.
WITH Options
Specifies the options to be used by a restore operation. For a summary of which statements use each option, see
"Summary of Support for WITH Options," later in this topic.
NOTE
WITH options are organized here in the same order as in the "Syntax" section in RESTORE {DATABASE|LOG}.
PARTIAL
Supported by: RESTORE DATABASE
Specifies a partial restore operation that restores the primary filegroup and any specified secondary filegroup(s).
The PARTIAL option implicitly selects the primary filegroup; specifying FILEGROUP = 'PRIMARY' is
unnecessary. To restore a secondary filegroup, you must explicitly specify the filegroup using the FILE option or
FILEGROUP option.
The PARTIAL option is not allowed on RESTORE LOG statements.
The PARTIAL option starts the initial stage of a piecemeal restore, which allows remaining filegroups to be
restored at a later time. For more information, see Piecemeal Restores (SQL Server).
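The initial stage of a piecemeal restore might look like the following sketch; the secondary filegroup name is hypothetical:

```sql
-- Restore the primary filegroup (implicit with PARTIAL) plus one secondary filegroup.
RESTORE DATABASE AdventureWorks
    FILEGROUP = 'SalesGroup'
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH PARTIAL, NORECOVERY;
-- Remaining filegroups can be restored later with additional RESTORE statements.
```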
[ RECOVERY | NORECOVERY | STANDBY ]
Supported by: RESTORE
RECOVERY
Instructs the restore operation to roll back any uncommitted transactions. After the recovery process, the
database is ready for use. If neither NORECOVERY, RECOVERY, nor STANDBY is specified, RECOVERY is the
default.
If subsequent RESTORE operations (RESTORE LOG, or RESTORE DATABASE from differential) are planned,
NORECOVERY or STANDBY should be specified instead.
When restoring backup sets from an earlier version of SQL Server, a database upgrade might be required. This
upgrade is performed automatically when WITH RECOVERY is specified. For more information, see Apply
Transaction Log Backups (SQL Server).
NOTE
If the FROM clause is omitted, NORECOVERY, RECOVERY, or STANDBY must be specified in the WITH clause.
NORECOVERY
Instructs the restore operation to not roll back any uncommitted transactions. If another transaction log has to
be applied later, specify either the NORECOVERY or STANDBY option. If neither NORECOVERY, RECOVERY,
nor STANDBY is specified, RECOVERY is the default. During an offline restore operation using the
NORECOVERY option, the database is not usable.
For restoring a database backup and one or more transaction logs or whenever multiple RESTORE statements
are necessary (for example, when restoring a full database backup followed by a differential database backup),
RESTORE requires the WITH NORECOVERY option on all but the final RESTORE statement. A best practice is
to use WITH NORECOVERY on ALL statements in a multi-step restore sequence until the desired recovery
point is reached, and then to use a separate RESTORE WITH RECOVERY statement for recovery only.
When used with a file or filegroup restore operation, NORECOVERY forces the database to remain in the
restoring state after the restore operation. This is useful in either of these situations:
A restore script is being run and the log is always being applied.
A sequence of file restores is used and the database is not intended to be usable between two of the
restore operations.
In some cases RESTORE WITH NORECOVERY rolls the roll-forward set far enough forward that it is
consistent with the database. In such cases, roll back does not occur and the data remains offline, as
expected with this option. However, the Database Engine issues an informational message that states that
the roll-forward set can now be recovered by using the RECOVERY option.
STANDBY =standby_file_name
Specifies a standby file that allows the recovery effects to be undone. The STANDBY option is allowed for offline
restore (including partial restore). The option is disallowed for online restore. Attempting to specify the
STANDBY option for an online restore operation causes the restore operation to fail. STANDBY is also not
allowed when a database upgrade is necessary.
The standby file is used to keep a "copy-on-write" pre-image for pages modified during the undo pass of a
RESTORE WITH STANDBY. The standby file allows a database to be brought up for read-only access between
transaction log restores and can be used with either warm standby server situations or special recovery
situations in which it is useful to inspect the database between log restores. After a RESTORE WITH STANDBY
operation, the undo file is automatically deleted by the next RESTORE operation. If this standby file is manually
deleted before the next RESTORE operation, then the entire database must be re-restored. While the database is
in the STANDBY state, you should treat this standby file with the same care as any other database file. Unlike
other database files, this file is only kept open by the Database Engine during active restore operations.
The standby_file_name specifies a standby file whose location is stored in the log of the database. If an existing
file is using the specified name, the file is overwritten; otherwise, the Database Engine creates the file.
The size requirement of a given standby file depends on the volume of undo actions resulting from uncommitted
transactions during the restore operation.
IMPORTANT
If free disk space is exhausted on the drive containing the specified standby file name, the restore operation stops.
For a comparison of RECOVERY and NORECOVERY, see the "Remarks" section in RESTORE.
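A standby restore, sketched with a hypothetical undo file path, leaves the database readable between log restores:

```sql
-- Restore the database but keep it readable; undo actions are kept in the standby file.
RESTORE DATABASE AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH STANDBY = 'Z:\SQLServerBackups\AdventureWorks_Undo.dat';
```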
LOADHISTORY
Supported by: RESTORE VERIFYONLY
Specifies that the restore operation loads the information into the msdb history tables. The LOADHISTORY
option loads information, for the single backup set being verified, about SQL Server backups stored on the
media set to the backup and restore history tables in the msdb database. For more information about history
tables, see System Tables (Transact-SQL).
<general_WITH_options> [ ,...n ]
The general WITH options are all supported in RESTORE DATABASE and RESTORE LOG statements. Some of
these options are also supported by one or more auxiliary statements, as noted below.
Restore Operation Options
MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name' [ ...n ]
Supported by: RESTORE and RESTORE VERIFYONLY
Specifies that the data or log file whose logical name is specified by logical_file_name_in_backup should be
moved by restoring it to the location specified by operating_system_file_name.
NOTE
To obtain a list of the logical files from the backup set, use RESTORE FILELISTONLY.
If a RESTORE statement is used to relocate a database on the same server or copy it to a different server, the
MOVE option might be necessary to relocate the database files and to avoid collisions with existing files.
When used with RESTORE LOG, the MOVE option can be used only to relocate files that were added during the
interval covered by the log being restored. For example, if the log backup contains an add file operation for file
file23 , this file may be relocated using the MOVE option on RESTORE LOG.
When used with SQL Server Snapshot Backup, the MOVE option can be used only to relocate files to an Azure
blob within the same storage account as the original blob. The MOVE option cannot be used to restore the
snapshot backup to a local file or to a different storage account.
If a RESTORE VERIFYONLY statement is used when you plan to relocate a database on the same server or copy
it to a different server, the MOVE option might be necessary to verify that sufficient space is available in the
target and to identify potential collisions with existing files.
For more information, see Copy Databases with Backup and Restore.
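A relocation, sketched with hypothetical logical file names (obtainable from RESTORE FILELISTONLY) and target paths:

```sql
-- Restore the database, moving the data and log files to new locations.
RESTORE DATABASE AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH MOVE 'AdventureWorks_Data' TO 'D:\Data\AdventureWorks.mdf',
         MOVE 'AdventureWorks_Log'  TO 'E:\Logs\AdventureWorks.ldf',
         RECOVERY;
```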
CREDENTIAL
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
Applies to: SQL Server 2012 (11.x) SP1 CU2 through SQL Server 2017
Used only when restoring a backup from the Microsoft Azure Blob storage service.
NOTE
With SQL Server 2012 (11.x) SP1 CU2 until SQL Server 2016 (13.x), you can only restore from a single device when
restoring from URL. To restore from multiple devices when restoring from URL, you must use SQL Server 2016
(13.x) or later, and you must use Shared Access Signature (SAS) tokens. For more information, see Enable
SQL Server Managed Backup to Microsoft Azure and Simplifying creation of SQL Credentials with Shared Access Signature
(SAS) tokens on Azure Storage with PowerShell.
REPLACE
Supported by: RESTORE
Specifies that SQL Server should create the specified database and its related files even if another database
already exists with the same name. In such a case, the existing database is deleted. When the REPLACE option is
not specified, a safety check occurs. This prevents overwriting a different database by accident. The safety check
ensures that the RESTORE DATABASE statement does not restore the database to the current server if the
following conditions both exist:
The database named in the RESTORE statement already exists on the current server, and
The database name is different from the database name recorded in the backup set.
REPLACE also allows RESTORE to overwrite an existing file that cannot be verified as belonging to the
database being restored. Normally, RESTORE refuses to overwrite pre-existing files. WITH REPLACE can
also be used in the same way for the RESTORE LOG option.
REPLACE also overrides the requirement that you back up the tail of the log before restoring the
database.
For information about the impact of using the REPLACE option, see RESTORE (Transact-SQL).
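An overwrite restore, as a sketch with hypothetical names, skips both the safety check and the tail-log backup requirement:

```sql
-- Overwrite the existing database; use with care, since this discards the tail of the log.
RESTORE DATABASE AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH REPLACE, RECOVERY;
```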
RESTART
Supported by: RESTORE
Specifies that SQL Server should restart a restore operation that has been interrupted. RESTART restarts the
restore operation at the point it was interrupted.
RESTRICTED_USER
Supported by: RESTORE.
Restricts access for the newly restored database to members of the db_owner, dbcreator, or sysadmin roles.
RESTRICTED_USER replaces the DBO_ONLY option. DBO_ONLY has been discontinued with SQL Server
2008.
Use with the RECOVERY option.
Backup Set Options
These options operate on the backup set containing the backup to be restored.
FILE ={ backup_set_file_number | @backup_set_file_number }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, and RESTORE VERIFYONLY.
Identifies the backup set to be restored. For example, a backup_set_file_number of 1 indicates the first backup set
on the backup medium and a backup_set_file_number of 2 indicates the second backup set. You can obtain the
backup_set_file_number of a backup set by using the RESTORE HEADERONLY statement.
When not specified, the default is 1, except for RESTORE HEADERONLY in which case all backup sets in the
media set are processed. For more information, see "Specifying a Backup Set," later in this topic.
IMPORTANT
This FILE option is unrelated to the FILE option for specifying a database file, FILE = { logical_file_name_in_backup |
@logical_file_name_in_backup_var }.
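Selecting a backup set can be sketched as follows, with a hypothetical device holding multiple backup sets:

```sql
-- List the backup sets on the device; the Position column gives each set's number.
RESTORE HEADERONLY
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak';

-- Restore the second backup set on the device.
RESTORE DATABASE AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH FILE = 2, RECOVERY;
```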
PASSWORD = { password | @password_variable }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, and RESTORE VERIFYONLY.
NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development
work, and plan to modify applications that currently use this feature.
If a password was specified when the backup set was created, that password is required to perform any restore
operation from the backup set. It is an error to specify the wrong password or to specify a password if the
backup set does not have one.
IMPORTANT
This password provides only weak protection for the media set. For more information, see the Permissions section for the
relevant statement.
Media Set Options
MEDIANAME = { media_name | @media_name_variable }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
Specifies the name for the media. If provided, the media name must match the media name on the backup
volumes; otherwise, the restore operation terminates.
IMPORTANT
Consistently using media names in backup and restore operations provides an extra safety check for the media selected for
the restore operation.
MEDIAPASSWORD = { mediapassword | @mediapassword_variable }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
If a password was provided when the media set was formatted, that password is required to access any backup
set on the media set. It is an error to specify the wrong password or to specify a password if the media set does
not have any.
IMPORTANT
This password provides only weak protection for the media set. For more information, see the "Permissions" section for the
relevant statement.
BLOCKSIZE = { blocksize | @blocksize_variable }
Supported by: RESTORE
Specifies the physical block size, in bytes.
NOTE
This option typically affects performance only when reading from tape devices.
Data Transfer Options
These options enable you to optimize data transfer from the backup device.
BUFFERCOUNT = { buffercount | @buffercount_variable }
Supported by: RESTORE
Specifies the total number of I/O buffers to be used for the restore operation. You can specify any positive
integer; however, large numbers of buffers might cause "out of memory" errors because of inadequate virtual
address space in the Sqlservr.exe process.
The total space used by the buffers is determined by: buffercount * maxtransfersize.
MAXTRANSFERSIZE = { maxtransfersize | @maxtransfersize_variable }
Supported by: RESTORE
Specifies the largest unit of transfer in bytes to be used between the backup media and SQL Server. The possible
values are multiples of 65536 bytes (64 KB) ranging up to 4194304 bytes (4 MB).
NOTE
When the database has FILESTREAM configured, or includes In-Memory OLTP filegroups, MAXTRANSFERSIZE at the
time of restore should be greater than or equal to what was used when the backup was created.
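As a sketch with hypothetical values, twelve 1-MB buffers give the restore 12 * 1048576 = 12 MB of buffer space:

```sql
-- 1048576 bytes (1 MB) is a multiple of 65536, as MAXTRANSFERSIZE requires.
RESTORE DATABASE AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH BUFFERCOUNT = 12, MAXTRANSFERSIZE = 1048576, RECOVERY;
```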
Error Management Options
These options allow you to determine whether backup checksums are enabled for the restore operation and
whether the operation stops on encountering an error.
{ CHECKSUM | NO_CHECKSUM }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
The default behavior is to verify checksums if they are present and proceed without verification if they are not
present.
CHECKSUM
Specifies that backup checksums must be verified and, if the backup lacks backup checksums, causes the restore
operation to fail with a message indicating that checksums are not present.
NOTE
Page checksums are relevant to backup operations only if backup checksums are used.
By default, on encountering an invalid checksum, RESTORE reports a checksum error and stops. However, if you
specify CONTINUE_AFTER_ERROR, RESTORE proceeds after returning a checksum error and the number of
the page containing the invalid checksum, if the corruption permits.
For more information about working with backup checksums, see Possible Media Errors During Backup and
Restore (SQL Server).
NO_CHECKSUM
Explicitly disables the validation of checksums by the restore operation.
{ STOP_ON_ERROR | CONTINUE_AFTER_ERROR }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
STOP_ON_ERROR
Specifies that the restore operation stops with the first error encountered. This is the default behavior for
RESTORE, except for VERIFYONLY which has CONTINUE_AFTER_ERROR as the default.
CONTINUE_AFTER_ERROR
Specifies that the restore operation is to continue after an error is encountered.
If a backup contains damaged pages, it is best to repeat the restore operation using an alternative backup that
does not contain the errors—for example, a backup taken before the pages were damaged. As a last resort,
however, you can restore a damaged backup using the CONTINUE_AFTER_ERROR option of the restore
statement and try to salvage the data.
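Both behaviors can be sketched as follows, with hypothetical backup files:

```sql
-- Require checksum verification; fails if the backup was taken without checksums.
RESTORE VERIFYONLY
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH CHECKSUM;

-- Last resort: salvage what is readable from a damaged backup.
RESTORE DATABASE AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks_Damaged.bak'
    WITH CHECKSUM, CONTINUE_AFTER_ERROR;
```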
Monitoring Options
These options enable you to monitor the transfer of data from the backup device.
STATS [ = percentage ]
Supported by: RESTORE and RESTORE VERIFYONLY
Displays a message each time another percentage completes, and is used to gauge progress. If percentage is
omitted, SQL Server displays a message after each 10 percent is completed (approximately).
The STATS option reports the percentage complete as of the threshold for reporting the next interval. This is at
approximately the specified percentage; for example, with STATS=10, the Database Engine reports at
approximately that interval; for instance, instead of displaying precisely 40%, the option might display 43%. For
large backup sets, this is not a problem because the percentage complete moves very slowly between completed
I/O calls.
Tape Options
These options are used only for TAPE devices. If a nontape device is being used, these options are ignored.
{ REWIND | NOREWIND }
REWIND
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY, and
RESTORE VERIFYONLY.
Specifies that SQL Server will release and rewind the tape. REWIND is the default.
NOREWIND
Supported by: RESTORE and RESTORE VERIFYONLY
Specifies that SQL Server will keep the tape open after the backup operation. Specifying NOREWIND in any
other restore statement generates an error. You can use this option to improve performance when performing
multiple backup operations to a tape.
NOREWIND implies NOUNLOAD, and these options are incompatible within a single RESTORE statement.
NOTE
If you use NOREWIND, the instance of SQL Server retains ownership of the tape drive until a BACKUP or RESTORE
statement running in the same process uses either the REWIND or UNLOAD option, or the server instance is shut down.
Keeping the tape open prevents other processes from accessing the tape. For information about how to display a list of
open tapes and to close an open tape, see Backup Devices (SQL Server).
{ UNLOAD | NOUNLOAD }
Supported by: RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, RESTORE LABELONLY,
RESTORE REWINDONLY, and RESTORE VERIFYONLY.
These options are used only for TAPE devices. If a non-tape device is being used, these options are ignored.
NOTE
UNLOAD/NOUNLOAD is a session setting that persists for the life of the session or until it is reset by specifying the
alternative.
UNLOAD
Specifies that the tape is automatically rewound and unloaded when the backup is finished. UNLOAD is the
default when a session begins.
NOUNLOAD
Specifies that after the RESTORE operation the tape remains loaded on the tape drive.
<replication_WITH_option>
This option is relevant only if the database was replicated when the backup was created.
KEEP_REPLICATION
Supported by: RESTORE
Use KEEP_REPLICATION when setting up replication to work with log shipping. It prevents replication settings
from being removed when a database backup or log backup is restored on a warm standby server and the
database is recovered. Specifying this option when restoring a backup with the NORECOVERY option is not
permitted. To ensure replication functions properly after restore:
The msdb and master databases at the warm standby server must be in sync with the msdb and master
databases at the primary server.
The warm standby server must be renamed to use the same name as the primary server.
<change_data_capture_WITH_option>
This option is relevant only if the database was enabled for change data capture when the backup was created.
KEEP_CDC
Supported by: RESTORE
KEEP_CDC should be used to prevent change data capture settings from being removed when a database
backup or log backup is restored on another server and the database is recovered. Specifying this option when
restoring a backup with the NORECOVERY option is not permitted.
Restoring the database with KEEP_CDC does not create the change data capture jobs. To extract changes from
the log after restoring the database, recreate the capture process job and the cleanup job for the restored
database. For information, see sys.sp_cdc_add_job (Transact-SQL).
For information about using change data capture with database mirroring, see Change Data Capture and Other
SQL Server Features.
<service_broker_WITH_options>
Turns Service Broker message delivery on or off or sets a new Service Broker identifier. This option is relevant
only if Service Broker was enabled (activated) for the database when the backup was created.
{ ENABLE_BROKER | ERROR_BROKER_CONVERSATIONS | NEW_BROKER }
Supported by: RESTORE DATABASE
ENABLE_BROKER
Specifies that Service Broker message delivery is enabled at the end of the restore so that messages can be sent
immediately. By default Service Broker message delivery is disabled during a restore. The database retains the
existing Service Broker identifier.
ERROR_BROKER_CONVERSATIONS
Ends all conversations with an error stating that the database is attached or restored. This enables your
applications to perform regular clean up for existing conversations. Service Broker message delivery is disabled
until this operation is completed, and then it is enabled. The database retains the existing Service Broker
identifier.
NEW_BROKER
Specifies that the database be assigned a new Service Broker identifier. Because the database is considered to be
a new Service Broker, existing conversations in the database are immediately removed without producing end
dialog messages. Any route referencing the old Service Broker identifier must be recreated with the new
identifier.
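Restoring a copy of a Service Broker-enabled database with a fresh identifier can be sketched as follows; the database names are hypothetical:

```sql
-- Assign a new Service Broker identifier; existing conversations are removed
-- without end dialog messages.
RESTORE DATABASE AdventureWorksCopy
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH NEW_BROKER, RECOVERY;
```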
<point_in_time_WITH_options>
Supported by: RESTORE {DATABASE|LOG } and only for the full or bulk-logged recovery models.
You can restore a database to a specific point in time or transaction, by specifying the target recovery point in a
STOPAT, STOPATMARK, or STOPBEFOREMARK clause. A specified time or transaction is always restored from
a log backup. In every RESTORE LOG statement of the restore sequence, you must specify your target time or
transaction in an identical STOPAT, STOPATMARK, or STOPBEFOREMARK clause.
As a prerequisite to a point-in-time restore, you must first restore a full database backup whose end point is
earlier than your target recovery point. To help you identify which database backup to restore, you can optionally
specify your WITH STOPAT, STOPATMARK, or STOPBEFOREMARK clause in a RESTORE DATABASE
statement to raise an error if a data backup is too recent for the specified target time. But the complete data
backup is always restored, even if it contains the target time.
NOTE
The RESTORE_DATABASE and RESTORE_LOG point-in-time WITH options are similar, but only RESTORE LOG supports the
mark_name argument.
NOTE
If the specified STOPAT time is after the last LOG backup, the database is left in the unrecovered state, just as if RESTORE
LOG ran with NORECOVERY.
For more information, see Restore a SQL Server Database to a Point in Time (Full Recovery Model).
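A point-in-time restore, sketched with hypothetical files and a hypothetical target time, repeats the identical STOPAT clause in each statement of the restore sequence:

```sql
-- The RESTORE DATABASE STOPAT is optional; it raises an error if this data
-- backup is too recent for the target time.
RESTORE DATABASE AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    WITH NORECOVERY, STOPAT = '2018-05-04 12:00:00';

-- Each RESTORE LOG must specify the same target time.
RESTORE LOG AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks_Log1.trn'
    WITH RECOVERY, STOPAT = '2018-05-04 12:00:00';
```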
STOPATMARK = { 'mark_name' | 'lsn:lsn_number' } [ AFTER 'datetime' ]
Specifies recovery to a specified recovery point. The specified transaction is included in the recovery, but it is
committed only if it was originally committed when the transaction was actually generated.
Both RESTORE DATABASE and RESTORE LOG support the lsn_number parameter. This parameter specifies a
log sequence number.
The mark_name parameter is supported only by the RESTORE LOG statement. This parameter identifies a
transaction mark in the log backup.
In a RESTORE LOG statement, if AFTER datetime is omitted, recovery stops at the first mark with the specified
name. If AFTER datetime is specified, recovery stops at the first mark having the specified name exactly at or
after datetime.
NOTE
If the specified mark, LSN, or time is after the last LOG backup, the database is left in the unrecovered state, just as if
RESTORE LOG ran with NORECOVERY.
For more information, see Use Marked Transactions to Recover Related Databases Consistently (Full Recovery
Model) and Recover to a Log Sequence Number (SQL Server).
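Recovery to a marked transaction can be sketched as follows; the mark name and log backup are hypothetical, and the mark_name form is valid only in RESTORE LOG:

```sql
-- Recover up to and including the marked transaction named 'RoyaltyUpdate'.
RESTORE LOG AdventureWorks
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks_Log1.trn'
    WITH RECOVERY, STOPATMARK = 'RoyaltyUpdate';
```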
STOPBEFOREMARK = { 'mark_name' | 'lsn:lsn_number' } [ AFTER 'datetime' ]
Specifies recovery up to a specified recovery point. The specified transaction is not included in the recovery, and
is rolled back when WITH RECOVERY is used.
Both RESTORE DATABASE and RESTORE LOG support the lsn_number parameter. This parameter specifies a
log sequence number.
The mark_name parameter is supported only by the RESTORE LOG statement. This parameter identifies a
transaction mark in the log backup.
In a RESTORE LOG statement, if AFTER datetime is omitted, recovery stops just before the first mark with the
specified name. If AFTER datetime is specified, recovery stops just before the first mark having the specified
name exactly at or after datetime.
IMPORTANT
If a partial restore sequence excludes any FILESTREAM filegroup, point-in-time restore is not supported. You can force the
restore sequence to continue. However, the FILESTREAM filegroups that are omitted from the RESTORE statement can
never be restored. To force a point-in-time restore, specify the CONTINUE_AFTER_ERROR option together with the STOPAT,
STOPATMARK, or STOPBEFOREMARK option. If you specify CONTINUE_AFTER_ERROR, the partial restore sequence
succeeds and the FILESTREAM filegroup becomes unrecoverable.
Result Sets
For result sets, see the following topics:
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
Remarks
For additional remarks, see the following topics:
RESTORE (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
Specifying a Backup Set
The behavior of the FILE option for specifying a backup set depends on the statement:
RESTORE
The default backup set file number is 1. Only one backup-set FILE option is allowed in a RESTORE statement. It
is important to specify backup sets in order.
RESTORE HEADERONLY
By default, all backup sets in the media set are processed. The RESTORE HEADERONLY result set returns
information about each backup set, including its Position in the media set. To return information on a given
backup set, use its position number as the backup_set_file_number value in the FILE option.
NOTE
The FILE option for specifying a backup set is unrelated to the FILE option for specifying a database file, FILE = {
logical_file_name_in_backup | @logical_file_name_in_backup_var }.
NOTE
The PARTIAL option is supported only by RESTORE DATABASE.
The following table lists the WITH options that are used by one or more statements and indicates which
statements support each option. A check mark (√) indicates that an option is supported; a dash (—) indicates that
an option is not supported.

                                            RESTORE       RESTORE     RESTORE     RESTORE     RESTORE
WITH OPTION                      RESTORE FILELISTONLY  HEADERONLY   LABELONLY  REWINDONLY  VERIFYONLY
{ CHECKSUM | NO_CHECKSUM }          √         √            √           √           —           √
{ CONTINUE_AFTER_ERROR |
  STOP_ON_ERROR }                   √         √            √           √           —           √
FILE                                √         √            √           —           —           √
LOADHISTORY                         —         —            —           —           —           √
MEDIANAME                           √         √            √           √           —           √
MEDIAPASSWORD                       √         √            √           √           —           √
MOVE                                √         —            —           —           —           √
PASSWORD                            √         √            √           —           —           √
STATS                               √         —            —           —           —           √
{ UNLOAD | NOUNLOAD }               √         √            √           √           √           √
Permissions
For permissions, see the following topics:
RESTORE (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
Examples
For examples, see the following topics:
RESTORE (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
See Also
BACKUP (Transact-SQL)
RESTORE (Transact-SQL)
RESTORE FILELISTONLY (Transact-SQL)
RESTORE HEADERONLY (Transact-SQL)
RESTORE LABELONLY (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
Back Up and Restore of SQL Server Databases
FILESTREAM (SQL Server)
RESTORE Statements - FILELISTONLY (Transact-
SQL)
5/4/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Returns a result set containing a list of the database and log files contained in the backup set in SQL Server.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
NOTE
For the descriptions of the arguments, see RESTORE Arguments (Transact-SQL).
Syntax
RESTORE FILELISTONLY
FROM <backup_device>
[ WITH
{
--Backup Set Options
FILE = { backup_set_file_number | @backup_set_file_number }
| PASSWORD = { password | @password_variable }
--Tape Options
| { REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }
} [ ,...n ]
]
[;]
<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK | TAPE } = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}
Arguments
For descriptions of the RESTORE FILELISTONLY arguments, see RESTORE Arguments (Transact-SQL).
Result Sets
A client can use RESTORE FILELISTONLY to obtain a list of the files contained in a backup set. This
information is returned as a result set containing one row for each file.
Security
A backup operation may optionally specify passwords for a media set, a backup set, or both. When a
password has been defined on a media set or backup set, you must specify the correct password or passwords
in the RESTORE statement. These passwords prevent unauthorized restore operations and unauthorized
appends of backup sets to media using Microsoft SQL Server tools. However, a password does not prevent
overwrite of media using the BACKUP statement's FORMAT option.
IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server tools
by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. The best practice for
protecting backups is to store backup tapes in a secure location or back up to disk files that are protected by adequate
access control lists (ACLs). The ACLs should be set on the directory root under which backups are created.
Permissions
Beginning in SQL Server 2008, obtaining information about a backup set or backup device requires CREATE
DATABASE permission. For more information, see GRANT Database Permissions (Transact-SQL).
Examples
The following example returns the information from a backup device named AdventureWorksBackups . The
example uses the FILE option to specify the second backup set on the device.
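The code listing for this example did not survive conversion. A sketch consistent with the description (the device name is taken from the text and is assumed to be a logical backup device created with sp_addumpdevice) would be:

```sql
RESTORE FILELISTONLY
    FROM AdventureWorksBackups
    WITH FILE = 2;
GO
```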
See Also
BACKUP (Transact-SQL)
Media Sets, Media Families, and Backup Sets (SQL Server)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
RESTORE Statements - HEADERONLY (Transact-
SQL)
5/4/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Returns a result set containing all the backup header information for all backup sets on a particular backup
device in SQL Server.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
NOTE
For the descriptions of the arguments, see RESTORE Arguments (Transact-SQL).
Syntax
RESTORE HEADERONLY
FROM <backup_device>
[ WITH
{
--Backup Set Options
FILE = { backup_set_file_number | @backup_set_file_number }
| PASSWORD = { password | @password_variable }
--Tape Options
| { REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }
} [ ,...n ]
]
[;]
<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK | TAPE } = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}
Arguments
For descriptions of the RESTORE HEADERONLY arguments, see RESTORE Arguments (Transact-SQL).
Result Sets
For each backup on a given device, the server sends a row of header information with the following columns:
NOTE
RESTORE HEADERONLY looks at all backup sets on the media. Therefore, producing this result set when using high-
capacity tape drives can take some time. To get a quick look at the media without getting information about every
backup set, use RESTORE LABELONLY or specify FILE = backup_set_file_number.
NOTE
Due to the nature of Microsoft Tape Format, it is possible for backup sets from other software programs to occupy
space on the same media as Microsoft SQL Server backup sets. The result set returned by RESTORE HEADERONLY
includes a row for each of these other backup sets.
[The full result-set column table (column name, data type, and description for each column) did not survive conversion; the fragments below are the value lists and the one complete row that remain.]

Backup type values: 1 = Database; 2 = Transaction log; 4 = File; 5 = Differential database; 6 = Differential file; 7 = Partial; 8 = Differential partial
0 = No; 1 = Yes
Device type values — Disk: 2 = Logical, 102 = Physical; Tape: 5 = Logical, 105 = Physical; Virtual Device: 7 = Logical, 107 = Physical
2 = Snapshot backup
Recovery model values: FULL; BULK-LOGGED; SIMPLE
Backup type descriptions: DATABASE; TRANSACTION LOG; FILE OR FILEGROUP; DATABASE DIFFERENTIAL; PARTIAL DIFFERENTIAL

COLUMN NAME    DATA TYPE           DESCRIPTION FOR SQL SERVER BACKUP SETS
containment    tinyint not NULL    Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
General Remarks
A client can use RESTORE HEADERONLY to retrieve all the backup header information for all backups on a
particular backup device. For each backup on the backup device, the server sends the header information as a
row.
Security
A backup operation may optionally specify passwords for a media set, a backup set, or both. When a
password has been defined on a media set or backup set, you must specify the correct password or
passwords in the RESTORE statement. These passwords prevent unauthorized restore operations and
unauthorized appends of backup sets to media using Microsoft SQL Server tools. However, a password does
not prevent overwrite of media using the BACKUP statement's FORMAT option.
IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server tools
by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. The best practice for
protecting backups is to store backup tapes in a secure location or back up to disk files that are protected by adequate
access control lists (ACLs). The ACLs should be set on the directory root under which backups are created.
Permissions
Obtaining information about a backup set or backup device requires CREATE DATABASE permission. For
more information, see GRANT Database Permissions (Transact-SQL).
Examples
The following example returns the information in the header for the disk file
C:\AdventureWorks-FullBackup.bak .
RESTORE HEADERONLY
FROM DISK = N'C:\AdventureWorks-FullBackup.bak'
WITH NOUNLOAD;
GO
See Also
BACKUP (Transact-SQL)
backupset (Transact-SQL)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
Enable or Disable Backup Checksums During Backup or Restore (SQL Server)
Media Sets, Media Families, and Backup Sets (SQL Server)
Recovery Models (SQL Server)
RESTORE Statements - LABELONLY (Transact-SQL)
5/4/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Returns a result set containing information about the backup media identified by the given backup device.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
NOTE
For the descriptions of the arguments, see RESTORE Arguments (Transact-SQL).
Syntax
RESTORE LABELONLY
FROM <backup_device>
[ WITH
{
--Media Set Options
MEDIANAME = { media_name | @media_name_variable }
| MEDIAPASSWORD = { mediapassword | @mediapassword_variable }
--Tape Options
| { REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }
} [ ,...n ]
]
[;]
<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK | TAPE } = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}
Arguments
For descriptions of the RESTORE LABELONLY arguments, see RESTORE Arguments (Transact-SQL).
Result Sets
The result set from RESTORE LABELONLY consists of a single row with this information.
[The result-set column table did not survive conversion; the surviving fragments are:]
0 = Media description
0 = not compressed; 1 = compressed
NOTE
If passwords are defined for the media set, RESTORE LABELONLY returns information only if the correct media password
is specified in the MEDIAPASSWORD option of the command.
General Remarks
Executing RESTORE LABELONLY is a quick way to find out what the backup media contains. Because
RESTORE LABELONLY reads only the media header, this statement finishes quickly even when using
high-capacity tape devices.
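A minimal sketch of reading just the media header (the disk path is hypothetical):

```sql
RESTORE LABELONLY
    FROM DISK = N'Z:\SQLServerBackups\AdventureWorks_full.bak'
    WITH NOUNLOAD;
```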
Security
A backup operation may optionally specify passwords for a media set. When a password has been defined on a
media set, you must specify the correct password in the RESTORE statement. The password prevents
unauthorized restore operations and unauthorized appends of backup sets to media using Microsoft SQL
Server tools. However, a password does not prevent overwrite of media using the BACKUP statement's
FORMAT option.
IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server tools
by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. The best
protecting backups is to store backup tapes in a secure location or back up to disk files that are protected by adequate
access control lists (ACLs). The ACLs should be set on the directory root under which backups are created.
Permissions
In SQL Server 2008 and later versions, obtaining information about a backup set or backup device requires
CREATE DATABASE permission. For more information, see GRANT Database Permissions (Transact-SQL).
See Also
BACKUP (Transact-SQL)
Media Sets, Media Families, and Backup Sets (SQL Server)
RESTORE REWINDONLY (Transact-SQL)
RESTORE VERIFYONLY (Transact-SQL)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
RESTORE MASTER KEY (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Imports a database master key from a backup file.
Transact-SQL Syntax Conventions
Syntax
RESTORE MASTER KEY FROM FILE = 'path_to_file'
DECRYPTION BY PASSWORD = 'password'
ENCRYPTION BY PASSWORD = 'password'
[ FORCE ]
Arguments
FILE ='path_to_file'
Specifies the complete path, including file name, to the stored database master key. path_to_file can be a local path
or a UNC path to a network location.
DECRYPTION BY PASSWORD ='password'
Specifies the password that is required to decrypt the database master key that is being imported from a file.
ENCRYPTION BY PASSWORD ='password'
Specifies the password that is used to encrypt the database master key after it has been loaded into the database.
FORCE
Specifies that the RESTORE process should continue, even if the current database master key is not open, or if
SQL Server cannot decrypt some of the private keys that are encrypted with it.
Remarks
When the master key is restored, SQL Server decrypts all the keys that are encrypted with the currently active
master key, and then encrypts these keys with the restored master key. This resource-intensive operation should
be scheduled during a period of low demand. If the current database master key is not open or cannot be opened,
or if any of the keys that are encrypted by it cannot be decrypted, the restore operation fails.
Use the FORCE option only if the master key is irretrievable or if decryption fails. Information that is encrypted
only by an irretrievable key will be lost.
If the master key was encrypted by the service master key, the restored master key will also be encrypted by the
service master key.
If there is no master key in the current database, RESTORE MASTER KEY creates a master key. The new master
key will not be automatically encrypted with the service master key.
Permissions
Requires CONTROL permission on the database.
Examples
The following example restores the database master key of the AdventureWorks2012 database.
USE AdventureWorks2012;
RESTORE MASTER KEY
FROM FILE = 'c:\backups\keys\AdventureWorks2012_master_key'
DECRYPTION BY PASSWORD = '3dH85Hhk003#GHkf02597gheij04'
ENCRYPTION BY PASSWORD = '259087M#MyjkFkjhywiyedfgGDFD';
GO
See Also
CREATE MASTER KEY (Transact-SQL)
ALTER MASTER KEY (Transact-SQL)
Encryption Hierarchy
RESTORE Statements - REWINDONLY (Transact-
SQL)
5/4/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Rewinds and closes specified tape devices that were left open by BACKUP or RESTORE statements executed
with the NOREWIND option. This command is supported only for tape devices.
Transact-SQL Syntax Conventions
Syntax
RESTORE REWINDONLY
FROM <backup_device> [ ,...n ]
[ WITH { UNLOAD | NOUNLOAD } ]
[;]
<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| TAPE = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}
Arguments
<backup_device> ::=
Specifies the logical or physical backup devices to use for the restore operation.
{ logical_backup_device_name | @logical_backup_device_name_var }
Is the logical name, which must follow the rules for identifiers, of the backup devices created by
sp_addumpdevice from which the database is restored. If supplied as a variable
(@logical_backup_device_name_var), the backup device name can be specified either as a string constant
(@logical_backup_device_name_var = logical_backup_device_name) or as a variable of character string data
type, except for the ntext or text data types.
{DISK | TAPE } = { 'physical_backup_device_name' | @physical_backup_device_name_var }
Allows backups to be restored from the named disk or tape device. The device types of disk and tape should be
specified with the actual name (for example, complete path and file name) of the device: DISK = 'C:\Program
Files\Microsoft SQL Server\MSSQL\BACKUP\Mybackup.bak' or TAPE = '\\.\TAPE0'. If specified as a variable
(@physical_backup_device_name_var), the device name can be specified either as a string constant
(@physical_backup_device_name_var = 'physical_backup_device_name') or as a variable of character string
data type, except for the ntext or text data types.
If using a network server with a UNC name (which must contain machine name), specify a device type of disk.
For more information about using UNC names, see Backup Devices (SQL Server).
The account under which you are running Microsoft SQL Server must have READ access to the remote
computer or network server in order to perform a RESTORE operation.
n
Is a placeholder that indicates multiple backup devices and logical backup devices can be specified. The
maximum number of backup devices or logical backup devices is 64.
Whether a restore sequence requires as many backup devices as were used to create the media set to which the
backups belong, depends on whether the restore is offline or online. Offline restore allows a backup to be
restored using fewer devices than were used to create the backup. Online restore requires all the backup devices
of the backup. An attempt to restore with fewer devices fails.
For more information, see Backup Devices (SQL Server).
NOTE
When restoring a backup from a mirrored media set, you can specify only a single mirror for each media family. In the
presence of errors, however, having the other mirror(s) enables some restore problems to be resolved quickly. You can
substitute a damaged media volume with the corresponding volume from another mirror. Note that for offline restores
you can restore from fewer devices than media families, but each family is processed only once.
WITH Options
UNLOAD
Specifies that the tape is automatically rewound and unloaded when the RESTORE is finished. UNLOAD is set
by default when a new user session is started. It remains set until NOUNLOAD is specified. This option is used
only for tape devices. If a non-tape device is being used for RESTORE, this option is ignored.
NOUNLOAD
Specifies that the tape is not unloaded automatically from the tape drive after a RESTORE. NOUNLOAD
remains set until UNLOAD is specified.
General Remarks
RESTORE REWINDONLY is an alternative to RESTORE LABELONLY FROM TAPE = <name> WITH
REWIND. You can get a list of opened tape drives from the sys.dm_io_backup_tapes dynamic management view.
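A sketch of that workflow (the tape device name is hypothetical, and the column list queried from sys.dm_io_backup_tapes is an assumption):

```sql
-- See which tape drives were left open by BACKUP/RESTORE ... WITH NOREWIND.
SELECT physical_device_name, status
FROM sys.dm_io_backup_tapes;

-- Rewind, unload, and close one of them.
RESTORE REWINDONLY
    FROM TAPE = '\\.\tape0'
    WITH UNLOAD;
```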
Security
Permissions
Any user may use RESTORE REWINDONLY.
See Also
BACKUP (Transact-SQL)
Media Sets, Media Families, and Backup Sets (SQL Server)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
RESTORE Statements - VERIFYONLY (Transact-
SQL)
5/4/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance
only) Azure SQL Data Warehouse Parallel Data Warehouse
Verifies the backup but does not restore it, and checks to see that the backup set is complete and the entire
backup is readable. However, RESTORE VERIFYONLY does not attempt to verify the structure of the data
contained in the backup volumes. In Microsoft SQL Server, RESTORE VERIFYONLY has been enhanced to
do additional checking on the data to increase the probability of detecting errors. The goal is to be as close to
an actual restore operation as practical. For more information, see the Remarks.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL
Database Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
If the backup is valid, the SQL Server Database Engine returns a success message.
NOTE
For the descriptions of the arguments, see RESTORE Arguments (Transact-SQL).
Syntax
RESTORE VERIFYONLY
FROM <backup_device> [ ,...n ]
[ WITH
{
LOADHISTORY
--Monitoring Options
| STATS [ = percentage ]
--Tape Options
| { REWIND | NOREWIND }
| { UNLOAD | NOUNLOAD }
} [ ,...n ]
]
[;]
<backup_device> ::=
{
{ logical_backup_device_name |
@logical_backup_device_name_var }
| { DISK | TAPE } = { 'physical_backup_device_name' |
@physical_backup_device_name_var }
}
Arguments
For descriptions of the RESTORE VERIFYONLY arguments, see RESTORE Arguments (Transact-SQL).
General Remarks
The media set or the backup set must contain minimal correct information to enable it to be interpreted as
Microsoft Tape Format. If not, RESTORE VERIFYONLY stops and indicates that the format of the backup is
invalid.
Checks performed by RESTORE VERIFYONLY include:
That the backup set is complete and all volumes are readable.
Some header fields of database pages, such as the page ID (as if it were about to write the data).
Checksum (if present on the media).
Checking for sufficient space on destination devices.
NOTE
RESTORE VERIFYONLY does not work on a database snapshot. To verify a database snapshot before a revert
operation, you can run DBCC CHECKDB.
NOTE
With snapshot backups, RESTORE VERIFYONLY confirms the existence of the snapshots in the locations specified in
the backup file. Snapshot backups are a new feature in SQL Server 2016 (13.x). For more information about Snapshot
Backups, see File-Snapshot Backups for Database Files in Azure.
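As a minimal sketch (the backup path is hypothetical) combining the checks above with checksum validation and progress reporting:

```sql
RESTORE VERIFYONLY
    FROM DISK = N'Z:\SQLServerBackups\AdventureWorks_full.bak'
    WITH CHECKSUM, STATS = 10;
```

CHECKSUM validates page checksums if they are present on the media, and STATS = 10 reports progress in 10-percent increments; both options appear in the WITH-options support table earlier in this topic.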
Security
A backup operation may optionally specify passwords for a media set, a backup set, or both. When a
password has been defined on a media set or backup set, you must specify the correct password or
passwords in the RESTORE statement. These passwords prevent unauthorized restore operations and
unauthorized appends of backup sets to media using SQL Server tools. However, a password does not
prevent overwrite of media using the BACKUP statement's FORMAT option.
IMPORTANT
The protection provided by this password is weak. It is intended to prevent an incorrect restore using SQL Server
tools by authorized or unauthorized users. It does not prevent the reading of the backup data by other means or the
replacement of the password. This feature will be removed in a future version of Microsoft SQL Server. Avoid using
this feature in new development work, and plan to modify applications that currently use this feature. The best
practice for protecting backups is to store backup tapes in a secure location or back up to disk files that are protected
by adequate access control lists (ACLs). The ACLs should be set on the directory root under which backups are
created.
Permissions
Beginning in SQL Server 2008, obtaining information about a backup set or backup device requires
CREATE DATABASE permission. For more information, see GRANT Database Permissions (Transact-SQL).
See Also
BACKUP (Transact-SQL)
Media Sets, Media Families, and Backup Sets (SQL Server)
RESTORE REWINDONLY (Transact-SQL)
RESTORE (Transact-SQL)
Backup History and Header Information (SQL Server)
BULK INSERT (Transact-SQL)
5/3/2018 • 19 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Imports a data file into a database table or view in a user-specified format in SQL Server
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
BULK INSERT
[ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ]
FROM 'data_file'
[ WITH
(
[ [ , ] BATCHSIZE = batch_size ]
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
[ [ , ] DATAFILETYPE =
{ 'char' | 'native'| 'widechar' | 'widenative' } ]
[ [ , ] DATASOURCE = 'data_source_name' ]
[ [ , ] ERRORFILE = 'file_name' ]
[ [ , ] ERRORFILE_DATASOURCE = 'data_source_name' ]
[ [ , ] FIRSTROW = first_row ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] FORMATFILE_DATASOURCE = 'data_source_name' ]
[ [ , ] KEEPIDENTITY ]
[ [ , ] KEEPNULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] LASTROW = last_row ]
[ [ , ] MAXERRORS = max_errors ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
[ [ , ] TABLOCK ]
)]
[;]
Arguments
database_name
Is the database name in which the specified table or view resides. If not specified, this is the current database.
schema_name
Is the name of the table or view schema. schema_name is optional if the default schema for the user performing
the bulk-import operation is schema of the specified table or view. If schema is not specified and the default
schema of the user performing the bulk-import operation is different from the specified table or view, SQL Server
returns an error message, and the bulk-import operation is canceled.
table_name
Is the name of the table or view to bulk import data into. Only views in which all columns refer to the same base
table can be used. For more information about the restrictions for loading data into views, see INSERT (Transact-SQL).
' data_file '
Is the full path of the data file that contains data to import into the specified table or view. BULK INSERT can
import data from a disk (including network, floppy disk, hard disk, and so on).
data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file,
specify the Universal Naming Convention (UNC ) name. A UNC name has the form
\\Systemname\ShareName\Path\FileName. For example, \\SystemX\DiskZ\Sales\update.txt .
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Beginning with SQL Server 2017 (14.x) CTP1.1, the data_file can be in Azure blob storage.
' data_source_name '
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Is a named external data source pointing to the Azure Blob storage location of the file that will be imported. The
external data source must be created using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP
1.1. For more information, see CREATE EXTERNAL DATA SOURCE.
BATCHSIZE =batch_size
Specifies the number of rows in a batch. Each batch is copied to the server as one transaction. If this fails, SQL
Server commits or rolls back the transaction for every batch. By default, all data in the specified data file is one
batch. For information about performance considerations, see "Remarks," later in this topic.
CHECK_CONSTRAINTS
Specifies that all constraints on the target table or view must be checked during the bulk-import operation.
Without the CHECK_CONSTRAINTS option, any CHECK and FOREIGN KEY constraints are ignored, and after
the operation, the constraint on the table is marked as not-trusted.
NOTE
UNIQUE, and PRIMARY KEY constraints are always enforced. When importing into a character column that is defined with a
NOT NULL constraint, BULK INSERT inserts a blank string when there is no value in the text file.
At some point, you must examine the constraints on the whole table. If the table was non-empty before the bulk-
import operation, the cost of revalidating the constraint may exceed the cost of applying CHECK constraints to
the incremental data.
A situation in which you might want constraints disabled (the default behavior) is if the input data contains rows
that violate constraints. With CHECK constraints disabled, you can import the data and then use Transact-SQL
statements to remove the invalid data.
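A sketch of both approaches (the table, file, column, and cleanup predicate are hypothetical):

```sql
-- Enforce CHECK and FOREIGN KEY constraints during the load:
BULK INSERT dbo.LoadTarget
    FROM 'C:\data\feed.dat'
    WITH (CHECK_CONSTRAINTS);

-- Or take the default (constraints ignored), clean up afterward,
-- and revalidate so the constraints are marked trusted again:
BULK INSERT dbo.LoadTarget
    FROM 'C:\data\feed.dat';
DELETE FROM dbo.LoadTarget WHERE Amount < 0;  -- hypothetical cleanup
ALTER TABLE dbo.LoadTarget WITH CHECK CHECK CONSTRAINT ALL;
```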
NOTE
The MAXERRORS option does not apply to constraint checking.
IMPORTANT
CODEPAGE is not a supported option on Linux.
NOTE
Microsoft recommends that you specify a collation name for each column in a format file.
CODEPAGE value 'OEM' (the default): columns of char, varchar, or text data type are converted from the
system OEM code page to the SQL Server code page.
DATAFILETYPE value 'native': native (database) data types. Create the native data file by bulk importing
data from SQL Server using the bcp utility.
ERRORFILE ='file_name'
Specifies the file used to collect rows that have formatting errors and cannot be converted to an OLE DB rowset.
These rows are copied into this error file from the data file "as is."
The error file is created when the command is executed. An error occurs if the file already exists. Additionally, a
control file that has the extension .ERROR.txt is created. This references each row in the error file and provides
error diagnostics. As soon as the errors have been corrected, the data can be loaded.
Applies to: SQL Server 2017 (14.x) CTP 1.1. Beginning with SQL Server 2017 (14.x), the error_file_path can
be in Azure blob storage.
'errorfile_data_source_name'
Applies to: SQL Server 2017 (14.x) CTP 1.1. Is a named external data source pointing to the Azure Blob storage
location of the error file that will contain errors found during the import. The external data source must be created
using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP 1.1. For more information, see
CREATE EXTERNAL DATA SOURCE.
FIRSTROW =first_row
Specifies the number of the first row to load. The default is the first row in the specified data file. FIRSTROW is 1-
based.
NOTE
The FIRSTROW attribute is not intended to skip column headers. Skipping headers is not supported by the BULK INSERT
statement. When skipping rows, the SQL Server Database Engine looks only at the field terminators, and does not validate
the data in the fields of skipped rows.
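For a character-format file whose first line is a header, a common pattern is to start the load at row 2 (the table name, file path, and terminators here are assumptions about the file's layout):

```sql
BULK INSERT Sales.Orders
    FROM 'C:\data\orders.csv'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        FIRSTROW = 2   -- skip the header line; its fields are not validated
    );
```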
FIRE_TRIGGERS
Specifies that any insert triggers defined on the destination table execute during the bulk-import operation. If
triggers are defined for INSERT operations on the target table, they are fired for every completed batch.
If FIRE_TRIGGERS is not specified, no insert triggers execute.
FORMATFILE_DATASOURCE = 'data_source_name'
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Is a named external data source pointing to the Azure Blob storage location of the format file that will define the
schema of imported data. The external data source must be created using the TYPE = BLOB_STORAGE option added
in SQL Server 2017 (14.x) CTP 1.1. For more information, see CREATE EXTERNAL DATA SOURCE.
KEEPIDENTITY
Specifies that identity value or values in the imported data file are to be used for the identity column. If
KEEPIDENTITY is not specified, the identity values for this column are verified but not imported and SQL Server
automatically assigns unique values based on the seed and increment values specified during table creation. If the
data file does not contain values for the identity column in the table or view, use a format file to specify that the
identity column in the table or view is to be skipped when importing data; SQL Server automatically assigns
unique values for the column. For more information, see DBCC CHECKIDENT (Transact-SQL).
For more information about keeping identity values, see Keep Identity Values When Bulk Importing Data
(SQL Server).
KEEPNULLS
Specifies that empty columns should retain a null value during the bulk-import operation, instead of having any
default values for the columns inserted. For more information, see Keep Nulls or Use Default Values During Bulk
Import (SQL Server).
KILOBYTES_PER_BATCH = kilobytes_per_batch
Specifies the approximate number of kilobytes (KB) of data per batch as kilobytes_per_batch. By default,
KILOBYTES_PER_BATCH is unknown. For information about performance considerations, see "Remarks," later in
this topic.
LASTROW = last_row
Specifies the number of the last row to load. The default is 0, which indicates the last row in the specified data file.
MAXERRORS = max_errors
Specifies the maximum number of syntax errors allowed in the data before the bulk-import operation is canceled.
Each row that cannot be imported by the bulk-import operation is ignored and counted as one error. If
max_errors is not specified, the default is 10.
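The row-range and error-tolerance options above can be combined in one statement; the following is a sketch with hypothetical object and file names:

```sql
-- Loads rows 2 through 1000 of the file (row skipping is by field/row
-- terminators only, not by header content) and cancels the import only
-- after 50 rows fail to parse.
BULK INSERT dbo.Staging
FROM 'C:\data\feed.dat'
WITH (
    FIRSTROW = 2,
    LASTROW = 1000,
    MAXERRORS = 50,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);
```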
NOTE
The MAXERRORS option does not apply to constraint checks or to converting money and bigint data types.
Compatibility
BULK INSERT enforces strict data validation and data checks of data read from a file that could cause existing
scripts to fail when they are executed on invalid data. For example, BULK INSERT verifies that:
The native representations of float or real data types are valid.
Unicode data has an even-byte length.
Data Types
String-to-Decimal Data Type Conversions
The string-to-decimal data type conversions used in BULK INSERT follow the same rules as the Transact-SQL
CONVERT function, which rejects strings representing numeric values that use scientific notation. Therefore,
BULK INSERT treats such strings as invalid values and reports conversion errors.
To work around this behavior, use a format file to bulk import scientific notation float data into a decimal column.
In the format file, explicitly describe the column as real or float data. For more information about these data
types, see float and real (Transact-SQL ).
NOTE
Format files represent real data as the SQLFLT4 data type and float data as the SQLFLT8 data type. For information about
non-XML format files, see Specify File Storage Type by Using bcp (SQL Server).
The user wants to bulk import data into the t_float table. The data file, C:\t_float-c.dat, contains scientific
notation float data; for example:
8.0000000000000002E-2
However, BULK INSERT cannot import this data directly into t_float , because its second column, c2 , uses the
decimal data type. Therefore, a format file is necessary. The format file must map the scientific notation float
data to the decimal format of column c2 .
The following format file uses the SQLFLT8 data type to map the second data field to the second column:
<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="\t" MAX_LENGTH="30"/>
    <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="30"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="c1" xsi:type="SQLFLT8"/>
    <COLUMN SOURCE="2" NAME="c2" xsi:type="SQLFLT8"/>
  </ROW>
</BCPFORMAT>
To use this format file (using the file name C:\t_floatformat-c-xml.xml ) to import the test data into the test table,
issue the following Transact-SQL statement:
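The statement referred to here is not shown in this excerpt; it would look something like the following (the database name bulktest is an assumption for illustration):

```sql
-- Import the scientific-notation float data using the XML format file above,
-- which maps the second field to the decimal column c2.
BULK INSERT bulktest..t_float
FROM 'C:\t_float-c.dat'
WITH (FORMATFILE = 'C:\t_floatformat-c-xml.xml');
```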
SQLCHAR or SQLVARCHAR: The data is sent in the client code page or in the code page implied by the
collation. The effect is the same as specifying DATAFILETYPE = 'char' without specifying a format file.
SQLNCHAR or SQLNVARCHAR: The data is sent as Unicode. The effect is the same as specifying
DATAFILETYPE = 'widechar' without specifying a format file.
Interoperability
Importing Data from a CSV file
Beginning with SQL Server 2017 (14.x) CTP 1.1, BULK INSERT supports the CSV format.
Before SQL Server 2017 (14.x) CTP 1.1, comma-separated value (CSV) files were not supported by SQL Server
bulk-import operations. However, in some cases, a CSV file can be used as the data file for a bulk import of data
into SQL Server. For information about the requirements for importing data from a CSV data file, see Prepare
Data for Bulk Export or Import (SQL Server).
Logging Behavior
For information about when row-insert operations that are performed by bulk import are logged in the
transaction log, see Prerequisites for Minimal Logging in Bulk Import.
Restrictions
When using a format file with BULK INSERT, you can specify up to 1024 fields only. This is the same as the
maximum number of columns allowed in a table. If you use BULK INSERT with a data file that contains more
than 1024 fields, BULK INSERT generates the 4822 error. The bcp utility does not have this limitation, so for data
files that contain more than 1024 fields, use the bcp command.
Performance Considerations
If the number of pages to be flushed in a single batch exceeds an internal threshold, a full scan of the buffer pool
might occur to identify which pages to flush when the batch commits. This full scan can hurt bulk-import
performance. A likely case of exceeding the internal threshold occurs when a large buffer pool is combined with a
slow I/O subsystem. To avoid buffer overflows on large machines, either do not use the TABLOCK hint (which will
remove the bulk optimizations) or use a smaller batch size (which preserves the bulk optimizations).
Because computers vary, we recommend that you test various batch sizes with your data load to find out what
works best for you.
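One way to apply the smaller-batch-size advice is the BATCHSIZE option (not otherwise described in this excerpt; the object and file names are hypothetical):

```sql
-- Commits every 50,000 rows as a separate transaction, which keeps the
-- number of pages flushed per batch below the internal threshold
-- discussed above.
BULK INSERT dbo.BigTable
FROM 'C:\data\big.dat'
WITH (BATCHSIZE = 50000);
```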
Security
Security Account Delegation (Impersonation)
If a user uses a SQL Server login, the security profile of the SQL Server process account is used. A login using
SQL Server authentication cannot be authenticated outside of the Database Engine. Therefore, when a BULK
INSERT command is initiated by a login using SQL Server authentication, the connection to the data is made
using the security context of the SQL Server process account (the account used by the SQL Server Database
Engine service). To successfully read the source data, you must grant the account used by the SQL Server
Database Engine access to the source data. In contrast, if a SQL Server user logs on by using Windows
Authentication, the user can read only those files that can be accessed by the user account, regardless of the
security profile of the SQL Server process.
When you execute the BULK INSERT statement by using sqlcmd or osql from one computer, insert data into
SQL Server on a second computer, and specify a data_file on a third computer by using a UNC path, you may
receive a 4861 error.
To resolve this error, use SQL Server Authentication and specify a SQL Server login that uses the security profile
of the SQL Server process account, or configure Windows to enable security account delegation. For information
about how to enable a user account to be trusted for delegation, see Windows Help.
For more information about this and other security considerations for using BULK INSERT, see Import Bulk Data
by Using BULK INSERT or OPENROWSET(BULK...) (SQL Server).
Permissions
Requires INSERT and ADMINISTER BULK OPERATIONS permissions. In Azure SQL Database, INSERT and
ADMINISTER DATABASE BULK OPERATIONS permissions are required. Additionally, ALTER TABLE
permission is required if one or more of the following is true:
Constraints exist and the CHECK_CONSTRAINTS option is not specified.
NOTE
Disabling constraints is the default behavior. To check constraints explicitly, use the CHECK_CONSTRAINTS option.
Triggers exist and the FIRE_TRIGGERS option is not specified.
NOTE
By default, triggers are not fired. To fire triggers explicitly, use the FIRE_TRIGGERS option.
You use the KEEPIDENTITY option to import identity values from the data file.
Examples
A. Using pipes to import data from a file
The following example imports order detail information into the AdventureWorks2012.Sales.SalesOrderDetail table
from the specified data file by using a pipe ( | ) as the field terminator and |\n as the row terminator.
NOTE
Due to how Microsoft Windows treats text files, \n is automatically replaced with \r\n.
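The example statement itself is missing from this excerpt; based on the description, it would look roughly like the following (the file path is an assumption):

```sql
-- Pipe (|) as the field terminator and |\n as the row terminator.
BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail
FROM 'f:\orders\lineitem.tbl'
WITH (
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '|\n'
);
```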
For complete BULK INSERT examples including configuring the credential and external data source, see Examples
of Bulk Access to Data in Azure Blob Storage.
Additional Examples
Other BULK INSERT examples are provided in the following topics:
Examples of Bulk Import and Export of XML Documents (SQL Server)
Keep Identity Values When Bulk Importing Data (SQL Server)
Keep Nulls or Use Default Values During Bulk Import (SQL Server)
Specify Field and Row Terminators (SQL Server)
Use a Format File to Bulk Import Data (SQL Server)
Use Character Format to Import or Export Data (SQL Server)
Use Native Format to Import or Export Data (SQL Server)
Use Unicode Character Format to Import or Export Data (SQL Server)
Use Unicode Native Format to Import or Export Data (SQL Server)
Use a Format File to Skip a Table Column (SQL Server)
Use a Format File to Map Table Columns to Data-File Fields (SQL Server)
See Also
Bulk Import and Export of Data (SQL Server)
bcp Utility
Format Files for Importing or Exporting Data (SQL Server)
INSERT (Transact-SQL )
OPENROWSET (Transact-SQL )
Prepare Data for Bulk Export or Import (SQL Server)
sp_tableoption (Transact-SQL )
CREATE AGGREGATE (Transact-SQL)
5/4/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a user-defined aggregate function whose implementation is defined in a class of an assembly in the .NET
Framework. For the Database Engine to bind the aggregate function to its implementation, the .NET Framework
assembly that contains the implementation must first be uploaded into an instance of SQL Server by using a
CREATE ASSEMBLY statement.
Transact-SQL Syntax Conventions
Syntax
CREATE AGGREGATE [ schema_name . ] aggregate_name
(@param_name <input_sqltype>
[ ,...n ] )
RETURNS <return_sqltype>
EXTERNAL NAME assembly_name [ .class_name ]
<input_sqltype> ::=
system_scalar_type | { [ udt_schema_name. ] udt_type_name }
<return_sqltype> ::=
system_scalar_type | { [ udt_schema_name. ] udt_type_name }
Arguments
schema_name
Is the name of the schema to which the user-defined aggregate function belongs.
aggregate_name
Is the name of the aggregate function you want to create.
@param_name
One or more parameters in the user-defined aggregate. The value of a parameter must be supplied by the user
when the aggregate function is executed. Specify a parameter name by using an "at" sign (@) as the first character.
The parameter name must comply with the rules for identifiers. Parameters are local to the function.
system_scalar_type
Is any one of the SQL Server system scalar data types to hold the value of the input parameter or return value. All
scalar data types can be used as a parameter for a user-defined aggregate, except text, ntext, and image.
Nonscalar types, such as cursor and table, cannot be specified.
udt_schema_name
Is the name of the schema to which the CLR user-defined type belongs. If not specified, the Database Engine
references udt_type_name in the following order:
The native SQL type namespace.
The default schema of the current user in the current database.
The dbo schema in the current database.
udt_type_name
Is the name of a CLR user-defined type already created in the current database. If udt_schema_name is not
specified, SQL Server assumes the type belongs to the schema of the current user.
assembly_name [ .class_name ]
Specifies the assembly to bind with the user-defined aggregate function and, optionally, the name of the
schema to which the assembly belongs and the name of the class in the assembly that implements the user-
defined aggregate. The assembly must already have been created in the database by using a CREATE
ASSEMBLY statement. class_name must be a valid SQL Server identifier and match the name of a class
that exists in the assembly. class_name may be a namespace-qualified name if the programming language
used to write the class uses namespaces, such as C#. If class_name is not specified, SQL Server assumes it
is the same as aggregate_name.
Remarks
By default, the ability of SQL Server to run CLR code is off. You can create, modify, and drop database objects that
reference managed code modules, but the code in these modules will not run in an instance of SQL Server unless
the clr enabled option is enabled by using sp_configure.
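For reference, the clr enabled option can be turned on as follows:

```sql
-- Enable execution of CLR code on the instance (off by default).
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO
```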
The class of the assembly referenced in assembly_name and its methods, should satisfy all the requirements for
implementing a user-defined aggregate function in an instance of SQL Server. For more information, see CLR
User-Defined Aggregates.
Permissions
Requires CREATE AGGREGATE permission and also REFERENCES permission on the assembly that is specified
in the EXTERNAL NAME clause.
Examples
The following example assumes that a StringUtilities.csproj sample application is compiled. For more information,
see String Utility Functions Sample.
The example creates aggregate Concatenate . Before the aggregate is created, the assembly StringUtilities.dll is
registered in the local database.
USE AdventureWorks2012;
GO
DECLARE @SamplesPath nvarchar(1024);
-- You may have to modify the value of this variable if you have
-- installed the sample in a location other than the default location.
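The rest of the example is truncated in this excerpt; a sketch of how it continues (the assembly path and the implementing class name are assumptions based on the String Utility Functions sample):

```sql
-- Register the compiled sample assembly (path is illustrative).
CREATE ASSEMBLY StringUtilities
FROM N'C:\Samples\StringUtilities\StringUtilities.dll'
WITH PERMISSION_SET = SAFE;
GO

-- Bind the user-defined aggregate to the class that implements it.
CREATE AGGREGATE Concatenate(@input nvarchar(4000))
RETURNS nvarchar(4000)
EXTERNAL NAME [StringUtilities].[Microsoft.Samples.SqlServer.Concatenate];
GO
```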
CREATE APPLICATION ROLE (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds an application role to the current database.
Transact-SQL Syntax Conventions
Syntax
CREATE APPLICATION ROLE application_role_name
WITH PASSWORD = 'password' [ , DEFAULT_SCHEMA = schema_name ]
Arguments
application_role_name
Specifies the name of the application role. This name must not already be used to refer to any principal in the
database.
PASSWORD ='password'
Specifies the password that database users will use to activate the application role. You should always use strong
passwords. password must meet the Windows password policy requirements of the computer that is running
the instance of SQL Server.
DEFAULT_SCHEMA =schema_name
Specifies the first schema that will be searched by the server when it resolves the names of objects for this role.
If DEFAULT_SCHEMA is left undefined, the application role will use DBO as its default schema. schema_name
can be a schema that does not exist in the database.
Remarks
IMPORTANT
Password complexity is checked when application role passwords are set. Applications that invoke application roles must
store their passwords. Application role passwords should always be stored encrypted.
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that
schemas are equivalent to database users may no longer return correct results. Old catalog views, including
sysobjects, should not be used in a database in which any of the following DDL statements have ever been used:
CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE
ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL ).
Permissions
Requires ALTER ANY APPLICATION ROLE permission on the database.
Examples
The following example creates an application role called weekly_receipts that has the password
987Gbv876sPYY5m23 and Sales as its default schema.
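Based on that description, the statement would be (the password shown is the one from the text; use a strong password of your own):

```sql
CREATE APPLICATION ROLE weekly_receipts
WITH PASSWORD = '987Gbv876sPYY5m23',
     DEFAULT_SCHEMA = Sales;
GO
```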
See Also
Application Roles
sp_setapprole (Transact-SQL )
ALTER APPLICATION ROLE (Transact-SQL )
DROP APPLICATION ROLE (Transact-SQL )
Password Policy
EVENTDATA (Transact-SQL )
CREATE ASSEMBLY (Transact-SQL)
5/4/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Creates a managed application module that contains class metadata and managed code as an object in an
instance of SQL Server. By referencing this module, common language runtime (CLR ) functions, stored
procedures, triggers, user-defined aggregates, and user-defined types can be created in the database.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
WARNING
CLR uses Code Access Security (CAS) in the .NET Framework, which is no longer supported as a security boundary. A CLR
assembly created with PERMISSION_SET = SAFE may be able to access external system resources, call unmanaged code,
and acquire sysadmin privileges. Beginning with SQL Server 2017 (14.x), an sp_configure option called
clr strict security is introduced to enhance the security of CLR assemblies. clr strict security is enabled by
default, and treats SAFE and EXTERNAL_ACCESS assemblies as if they were marked UNSAFE . The clr strict security
option can be disabled for backward compatibility, but this is not recommended. Microsoft recommends that all assemblies
be signed by a certificate or asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY
permission in the master database. For more information, see CLR strict security.
Syntax
CREATE ASSEMBLY assembly_name
[ AUTHORIZATION owner_name ]
FROM { <client_assembly_specifier> | <assembly_bits> [ ,...n ] }
[ WITH PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE } ]
[ ; ]
<client_assembly_specifier> :: =
'[\\computer_name\]share_name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'
<assembly_bits> :: =
{ varbinary_literal | varbinary_expression }
Arguments
assembly_name
Is the name of the assembly. The name must be unique within the database and a valid identifier.
AUTHORIZATION owner_name
Specifies the name of a user or role as owner of the assembly. owner_name must either be the name of a role of
which the current user is a member, or the current user must have IMPERSONATE permission on owner_name.
If not specified, ownership is given to the current user.
<client_assembly_specifier>
Specifies the local path or network location where the assembly that is being uploaded is located, and also the
manifest file name that corresponds to the assembly. <client_assembly_specifier> can be expressed as a fixed
string or an expression evaluating to a fixed string, with variables. CREATE ASSEMBLY does not support loading
multimodule assemblies. SQL Server also looks for any dependent assemblies of this assembly in the same
location and also uploads them with the same owner as the root level assembly. If these dependent assemblies
are not found and they are not already loaded in the current database, CREATE ASSEMBLY fails. If the dependent
assemblies are already loaded in the current database, the owner of those assemblies must be the same as the
owner of the newly created assembly.
<client_assembly_specifier> cannot be specified if the logged in user is being impersonated.
<assembly_bits>
Is the list of binary values that make up the assembly and its dependent assemblies. The first value in the list is
considered the root-level assembly. The values corresponding to the dependent assemblies can be supplied in any
order. Any values that do not correspond to dependencies of the root assembly are ignored.
NOTE
This option is not available in a contained database.
varbinary_literal
Is a varbinary literal.
varbinary_expression
Is an expression of type varbinary.
PERMISSION_SET { SAFE | EXTERNAL_ACCESS | UNSAFE }
IMPORTANT
The PERMISSION_SET option is affected by the clr strict security option, described in the opening warning. When
clr strict security is enabled, all assemblies are treated as UNSAFE .
Specifies a set of code access permissions that are granted to the assembly when it is accessed by SQL Server. If
not specified, SAFE is applied as the default.
We recommend using SAFE. SAFE is the most restrictive permission set. Code executed by an assembly with
SAFE permissions cannot access external system resources such as files, the network, environment variables, or
the registry.
EXTERNAL_ACCESS enables assemblies to access certain external system resources such as files, networks,
environmental variables, and the registry.
NOTE
This option is not available in a contained database.
UNSAFE enables assemblies unrestricted access to resources, both within and outside an instance of SQL Server.
Code running from within an UNSAFE assembly can call unmanaged code.
NOTE
This option is not available in a contained database.
IMPORTANT
SAFE is the recommended permission setting for assemblies that perform computation and data management tasks
without accessing resources outside an instance of SQL Server.
We recommend using EXTERNAL_ACCESS for assemblies that access resources outside of an instance of SQL Server.
EXTERNAL_ACCESS assemblies include the reliability and scalability protections of SAFE assemblies, but from a security
perspective are similar to UNSAFE assemblies. This is because code in EXTERNAL_ACCESS assemblies runs by default under
the SQL Server service account and accesses external resources under that account, unless the code explicitly impersonates
the caller. Therefore, permission to create EXTERNAL_ACCESS assemblies should be granted only to logins that are trusted
to run code under the SQL Server service account. For more information about impersonation, see CLR Integration
Security.
Specifying UNSAFE enables the code in the assembly complete freedom to perform operations in the SQL Server process
space that can potentially compromise the robustness of SQL Server. UNSAFE assemblies can also potentially subvert the
security system of either SQL Server or the common language runtime. UNSAFE permissions should be granted only to
highly trusted assemblies. Only members of the sysadmin fixed server role can create and alter UNSAFE assemblies.
For more information about assembly permission sets, see Designing Assemblies.
Remarks
CREATE ASSEMBLY uploads an assembly that was previously compiled as a .dll file from managed code for use
inside an instance of SQL Server.
When clr strict security is enabled, the PERMISSION_SET option in the CREATE ASSEMBLY and ALTER ASSEMBLY statements is ignored at
run-time, but the PERMISSION_SET options are preserved in metadata. Ignoring the option minimizes breaking
existing code statements.
SQL Server does not allow registering different versions of an assembly with the same name, culture and public
key.
When attempting to access the assembly specified in <client_assembly_specifier>, SQL Server impersonates the
security context of the current Windows login. If <client_assembly_specifier> specifies a network location (UNC
path), the impersonation of the current login is not carried forward to the network location because of delegation
limitations. In this case, access is made using the security context of the SQL Server service account. For more
information, see Credentials (Database Engine).
Besides the root assembly specified by assembly_name, SQL Server tries to upload any assemblies that are
referenced by the root assembly being uploaded. If a referenced assembly is already uploaded to the database
because of an earlier CREATE ASSEMBLY statement, this assembly is not uploaded but is available to the root
assembly. If a dependent assembly was not previously uploaded, but SQL Server cannot locate its manifest file in
the source directory, CREATE ASSEMBLY returns an error.
If any dependent assemblies referenced by the root assembly are not already in the database and are implicitly
loaded together with the root assembly, they have the same permission set as the root level assembly. If the
dependent assemblies must be created by using a different permission set than the root-level assembly, they
must be uploaded explicitly before the root level assembly with the appropriate permission set.
Assembly Validation
SQL Server performs checks on the assembly binaries uploaded by the CREATE ASSEMBLY statement to
guarantee the following:
The assembly binary is well formed with valid metadata and code segments, and the code segments have
valid Microsoft Intermediate language (MSIL ) instructions.
The set of system assemblies it references is one of the following supported assemblies in SQL Server:
Microsoft.Visualbasic.dll, Mscorlib.dll, System.Data.dll, System.dll, System.Xml.dll, Microsoft.Visualc.dll,
Custommarshallers.dll, System.Security.dll, System.Web.Services.dll, System.Data.SqlXml.dll,
System.Core.dll, and System.Xml.Linq.dll. Other system assemblies can be referenced, but they must be
explicitly registered in the database.
For assemblies created by using SAFE or EXTERNAL ACCESS permission sets:
The assembly code should be type-safe. Type safety is established by running the common
language runtime verifier against the assembly.
The assembly should not contain any static data members in its classes unless they are marked as
read-only.
The classes in the assembly cannot contain finalizer methods.
The classes or methods of the assembly should be annotated only with allowed code attributes. For
more information, see Custom Attributes for CLR Routines.
Besides the previous checks that are performed when CREATE ASSEMBLY executes, there are additional
checks that are performed at execution time of the code in the assembly:
Calling certain Microsoft .NET Framework APIs that require a specific Code Access Permission may fail if
the permission set of the assembly does not include that permission.
For SAFE and EXTERNAL_ACCESS assemblies, any attempt to call .NET Framework APIs that are
annotated with certain HostProtectionAttributes will fail.
For more information, see Designing Assemblies.
Permissions
Requires CREATE ASSEMBLY permission.
If PERMISSION_SET = EXTERNAL_ACCESS is specified, requires EXTERNAL ACCESS ASSEMBLY
permission on the server. If PERMISSION_SET = UNSAFE is specified, requires UNSAFE ASSEMBLY
permission on the server.
User must be the owner of any assemblies that are referenced by the assembly that is to be uploaded, if those
assemblies already exist in the database. To upload an assembly by using a file path, the current user must be a
Windows authenticated login or a member of the sysadmin fixed server role. The Windows login of the user that
executes CREATE ASSEMBLY must have read permission on the share and the files being loaded in the
statement.
Permissions with CLR strict security
The following permissions are required to create a CLR assembly when CLR strict security is enabled:
The user must have the CREATE ASSEMBLY permission
And one of the following conditions must also be true:
The assembly is signed with a certificate or asymmetric key that has a corresponding login with the
UNSAFE ASSEMBLY permission on the server. Signing the assembly is recommended.
The database has the TRUSTWORTHY property set to ON , and the database is owned by a login that has
the UNSAFE ASSEMBLY permission on the server. This option is not recommended.
For more information about assembly permission sets, see Designing Assemblies.
Examples
Example A: Creating an assembly from a dll
Applies to: SQL Server 2008 through SQL Server 2017.
The following example assumes that the SQL Server Database Engine samples are installed in the default
location of the local computer and the HelloWorld.csproj sample application is compiled. For more information,
see Hello World Sample.
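The example code is not included in this excerpt; a sketch of what it would contain (the path is an assumption about where the compiled sample lands):

```sql
-- Register the compiled sample assembly with the most restrictive
-- permission set.
CREATE ASSEMBLY HelloWorld
FROM N'C:\Samples\HelloWorld\HelloWorld.dll'
WITH PERMISSION_SET = SAFE;
GO
```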
See Also
ALTER ASSEMBLY (Transact-SQL )
DROP ASSEMBLY (Transact-SQL )
CREATE FUNCTION (Transact-SQL )
CREATE PROCEDURE (Transact-SQL )
CREATE TRIGGER (Transact-SQL )
CREATE TYPE (Transact-SQL )
CREATE AGGREGATE (Transact-SQL )
EVENTDATA (Transact-SQL )
Usage Scenarios and Examples for Common Language Runtime (CLR ) Integration
CREATE ASYMMETRIC KEY (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an asymmetric key in the database.
This feature is incompatible with database export using Data Tier Application Framework (DACFx). You must
drop all asymmetric keys before exporting.
Transact-SQL Syntax Conventions
Syntax
CREATE ASYMMETRIC KEY Asym_Key_Name
[ AUTHORIZATION database_principal_name ]
[ FROM <Asym_Key_Source> ]
[ WITH <key_option> ]
[ ENCRYPTION BY <encrypting_mechanism> ]
[ ; ]
<Asym_Key_Source>::=
FILE = 'path_to_strong-name_file'
| EXECUTABLE FILE = 'path_to_executable_file'
| ASSEMBLY Assembly_Name
| PROVIDER Provider_Name
<key_option> ::=
ALGORITHM = <algorithm>
|
PROVIDER_KEY_NAME = 'key_name_in_provider'
|
CREATION_DISPOSITION = { CREATE_NEW | OPEN_EXISTING }
<algorithm> ::=
{ RSA_4096 | RSA_3072 | RSA_2048 | RSA_1024 | RSA_512 }
<encrypting_mechanism> ::=
PASSWORD = 'password'
Arguments
FROM Asym_Key_Source
Specifies the source from which to load the asymmetric key pair.
AUTHORIZATION database_principal_name
Specifies the owner of the asymmetric key. The owner cannot be a role or a group. If this option is omitted, the
owner will be the current user.
FILE ='path_to_strong -name_file'
Specifies the path of a strong-name file from which to load the key pair.
NOTE
This option is not available in a contained database.
EXECUTABLE FILE ='path_to_executable_file'
Specifies an assembly file from which to load the public key.
NOTE
This option is not available in a contained database.
ASSEMBLY Assembly_Name
Specifies the name of an assembly from which to load the public key.
ENCRYPTION BY <encrypting_mechanism>
Specifies how the private key is encrypted. In this statement, the encrypting mechanism is a password.
PROVIDER_KEY_NAME = 'key_name_in_provider'
Specifies the key name from the external provider. For more information about external key management, see
Extensible Key Management (EKM ).
CREATION_DISPOSITION = CREATE_NEW
Creates a new key on the Extensible Key Management device. PROVIDER_KEY_NAME must be used to specify the
key name on the device. If a key already exists on the device, the statement fails with an error.
CREATION_DISPOSITION = OPEN_EXISTING
Maps a SQL Server asymmetric key to an existing Extensible Key Management key. PROVIDER_KEY_NAME must be
used to specify the key name on the device. If CREATION_DISPOSITION = OPEN_EXISTING is not provided, the
default is CREATE_NEW.
ALGORITHM = <algorithm>
Five algorithms can be provided: RSA_4096, RSA_3072, RSA_2048, RSA_1024, and RSA_512.
RSA_1024 and RSA_512 are deprecated. To use RSA_1024 or RSA_512 (not recommended) you must set the
database to database compatibility level 120 or lower.
PASSWORD = 'password'
Specifies the password with which to encrypt the private key. If this clause is not present, the private key will be
encrypted with the database master key. password is a maximum of 128 characters. password must meet the
Windows password policy requirements of the computer that is running the instance of SQL Server.
Remarks
An asymmetric key is a securable entity at the database level. In its default form, this entity contains both a public
key and a private key. When executed without the FROM clause, CREATE ASYMMETRIC KEY generates a new
key pair. When executed with the FROM clause, CREATE ASYMMETRIC KEY imports a key pair from a file or
imports a public key from an assembly.
By default, the private key is protected by the database master key. If no database master key has been created, a
password is required to protect the private key. If a database master key does exist, the password is optional.
The private key can be 512, 1024, or 2048 bits long.
Permissions
Requires CREATE ASYMMETRIC KEY permission on the database. If the AUTHORIZATION clause is specified,
requires IMPERSONATE permission on the database principal, or ALTER permission on the application role.
Only Windows logins, SQL Server logins, and application roles can own asymmetric keys. Groups and roles
cannot own asymmetric keys.
Examples
A. Creating an asymmetric key
The following example creates an asymmetric key named PacificSales09 by using the RSA_2048 algorithm, and
protects the private key with a password.
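Based on the description, the statement would look like the following (the password is a placeholder; substitute a strong password):

```sql
CREATE ASYMMETRIC KEY PacificSales09
WITH ALGORITHM = RSA_2048
ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>';
GO
```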
See Also
Choose an Encryption Algorithm
ALTER ASYMMETRIC KEY (Transact-SQL )
DROP ASYMMETRIC KEY (Transact-SQL )
Encryption Hierarchy
Extensible Key Management Using Azure Key Vault (SQL Server)
CREATE AVAILABILITY GROUP (Transact-SQL)
5/30/2018 • 28 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new availability group, if the instance of SQL Server is enabled for the Always On availability groups
feature.
IMPORTANT
Execute CREATE AVAILABILITY GROUP on the instance of SQL Server that you intend to use as the initial primary replica of
your new availability group. This server instance must reside on a Windows Server Failover Clustering (WSFC) node.
Syntax
CREATE AVAILABILITY GROUP group_name
{ <availability_group_spec> | <distributed_availability_group_spec> }
[ ; ]
<availability_group_spec>::=
[ WITH (<with_option_spec> [ ,...n ] ) ]
FOR [ DATABASE database_name [ ,...n ] ]
REPLICA ON <add_replica_spec> [ ,...n ]
[ LISTENER 'dns_name' ( <listener_option> ) ]
<with_option_spec>::=
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY | SECONDARY | NONE }
| FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
| HEALTH_CHECK_TIMEOUT = milliseconds
| DB_FAILOVER = { ON | OFF }
| DTC_SUPPORT = { PER_DB | NONE }
| BASIC
| REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = { integer }
| CLUSTER_TYPE = { WSFC | EXTERNAL | NONE }
<add_replica_spec>::=
<server_instance> WITH
(
ENDPOINT_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT | CONFIGURATION_ONLY },
FAILOVER_MODE = { AUTOMATIC | MANUAL | EXTERNAL }
[ , <add_replica_option> [ ,...n ] ]
)
<add_replica_option>::=
SEEDING_MODE = { AUTOMATIC | MANUAL }
| BACKUP_PRIORITY = n
| SECONDARY_ROLE ( {
[ ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL } ]
[,] [ READ_ONLY_ROUTING_URL = 'TCP://system-address:port' ]
} )
| PRIMARY_ROLE ( {
[ ALLOW_CONNECTIONS = { READ_WRITE | ALL } ]
[,] [ READ_ONLY_ROUTING_LIST = { ( '<server_instance>' [ ,...n ] ) | NONE } ]
} )
| SESSION_TIMEOUT = integer
<listener_option> ::=
{
WITH DHCP [ ON ( <network_subnet_option> ) ]
| WITH IP ( { ( <ip_address_option> ) } [ , ...n ] ) [ , PORT = listener_port ]
}
<network_subnet_option> ::=
'four_part_ipv4_address', 'four_part_ipv4_mask'
<ip_address_option> ::=
{
'four_part_ipv4_address', 'four_part_ipv4_mask'
| 'ipv6_address'
}
<distributed_availability_group_spec>::=
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON <add_availability_group_spec> [ ,...2 ]
<add_availability_group_spec>::=
<ag_name> WITH
(
LISTENER_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT },
FAILOVER_MODE = MANUAL,
SEEDING_MODE = { AUTOMATIC | MANUAL }
)
Arguments
group_name
Specifies the name of the new availability group. group_name must be a valid SQL Server identifier, and it must
be unique across all availability groups in the WSFC cluster. The maximum length for an availability group name
is 128 characters.
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY | SECONDARY | NONE }
Specifies a preference about how a backup job should evaluate the primary replica when choosing where to
perform backups. You can script a given backup job to take the automated backup preference into account. It is
important to understand that the preference is not enforced by SQL Server, so it has no impact on ad-hoc
backups.
The supported values are as follows:
PRIMARY
Specifies that the backups should always occur on the primary replica. This option is useful if you need backup
features, such as creating differential backups, that are not supported when backup is run on a secondary replica.
IMPORTANT
If you plan to use log shipping to prepare any secondary databases for an availability group, set the automated backup
preference to Primary until all the secondary databases have been prepared and joined to the availability group.
SECONDARY_ONLY
Specifies that backups should never be performed on the primary replica. If the primary replica is the only replica
online, the backup should not occur.
SECONDARY
Specifies that backups should occur on a secondary replica except when the primary replica is the only replica
online. In that case, the backup should occur on the primary replica. This is the default behavior.
NONE
Specifies that you prefer that backup jobs ignore the role of the availability replicas when choosing the replica to
perform backups. Note backup jobs might evaluate other factors such as backup priority of each availability
replica in combination with its operational state and connected state.
IMPORTANT
There is no enforcement of the AUTOMATED_BACKUP_PREFERENCE setting. The interpretation of this preference depends
on the logic, if any, that you script into backup jobs for the databases in a given availability group. The automated backup
preference setting has no impact on ad-hoc backups. For more information, see Configure Backup on Availability Replicas
(SQL Server).
NOTE
To view the automated backup preference of an existing availability group, select the automated_backup_preference or
automated_backup_preference_desc column of the sys.availability_groups catalog view. Additionally,
sys.fn_hadr_backup_is_preferred_replica (Transact-SQL) can be used to determine the preferred backup replica. This function
returns 1 for at least one of the replicas, even when AUTOMATED_BACKUP_PREFERENCE = NONE .
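For example, a backup job can check whether the current replica is the preferred backup location before proceeding (the database name here is illustrative):

```sql
-- Back up the database only if this replica is the
-- preferred backup replica for it.
IF sys.fn_hadr_backup_is_preferred_replica('ThisDatabase') = 1
BEGIN
    BACKUP DATABASE ThisDatabase
        TO DISK = N'\\backupshare\ThisDatabase.bak';
END
```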
FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
Specifies what failure conditions trigger an automatic failover for this availability group.
FAILURE_CONDITION_LEVEL is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT).
Furthermore, failure conditions can trigger an automatic failover only if both the primary and secondary replicas
are configured for automatic failover mode (FAILOVER_MODE = AUTOMATIC ) and the secondary replica is
currently synchronized with the primary replica.
The failure-condition levels (1–5) range from the least restrictive, level 1, to the most restrictive, level 5. A given
condition level encompasses all the less restrictive levels. Thus, the strictest condition level, 5, includes the four
less restrictive condition levels (1-4), level 4 includes levels 1-3, and so forth. The following table describes the
failure-condition that corresponds to each level.
NOTE
Lack of response by an instance of SQL Server to client requests is not relevant to availability groups.
The FAILURE_CONDITION_LEVEL and HEALTH_CHECK_TIMEOUT values define a flexible failover policy for a
given group. This flexible failover policy provides you with granular control over what conditions must cause an
automatic failover. For more information, see Flexible Failover Policy for Automatic Failover of an Availability
Group (SQL Server).
HEALTH_CHECK_TIMEOUT = milliseconds
Specifies the wait time (in milliseconds) for the sp_server_diagnostics system stored procedure to return server-
health information before the WSFC cluster assumes that the server instance is slow or hung.
HEALTH_CHECK_TIMEOUT is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode with automatic failover (AVAILABILITY_MODE =
SYNCHRONOUS_COMMIT). Furthermore, a health-check timeout can trigger an automatic failover only if both
the primary and secondary replicas are configured for automatic failover mode (FAILOVER_MODE =
AUTOMATIC ) and the secondary replica is currently synchronized with the primary replica.
The default HEALTH_CHECK_TIMEOUT value is 30000 milliseconds (30 seconds). The minimum value is 15000
milliseconds (15 seconds), and the maximum value is 4294967295 milliseconds.
IMPORTANT
sp_server_diagnostics does not perform health checks at the database level.
DB_FAILOVER = { ON | OFF }
Specifies the response to take when a database on the primary replica is offline. When set to ON, any status other
than ONLINE for a database in the availability group triggers an automatic failover. When this option is set to
OFF, only the health of the instance is used to trigger automatic failover.
For more information regarding this setting, see Database Level Health Detection Option
DTC_SUPPORT = { PER_DB | NONE }
Specifies whether cross-database transactions are supported through the distributed transaction coordinator
(DTC). Cross-database transactions are only supported beginning in SQL Server 2016 (13.x). PER_DB creates the
availability group with support for these transactions. For more information, see Cross-Database Transactions and
Distributed Transactions for Always On Availability Groups and Database Mirroring (SQL Server).
BASIC
Used to create a basic availability group. Basic availability groups are limited to one database and two replicas: a
primary replica and one secondary replica. This option is a replacement for the deprecated database mirroring
feature on SQL Server Standard Edition. For more information, see Basic Availability Groups (Always On
Availability Groups). Basic availability groups are supported beginning in SQL Server 2016 (13.x).
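As a sketch, a basic availability group with one database and two synchronous replicas might be created as follows (server, endpoint, and database names are hypothetical):

```sql
-- Basic availability group: one database, a primary and
-- one secondary replica, automatic failover.
CREATE AVAILABILITY GROUP [BasicAG]
    WITH (BASIC)
    FOR DATABASE [SalesDb]
    REPLICA ON
        N'SERVER1' WITH (
            ENDPOINT_URL = N'TCP://SERVER1.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC ),
        N'SERVER2' WITH (
            ENDPOINT_URL = N'TCP://SERVER2.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC );
```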
DISTRIBUTED
Used to create a distributed availability group. The DISTRIBUTED option cannot be combined with any other
options or clauses. This option is used with the AVAILABILITY GROUP ON parameter to connect two availability
groups in separate Windows Server Failover Clusters. For more information, see Distributed Availability Groups
(Always On Availability Groups). Distributed availability groups are supported beginning in SQL Server 2016
(13.x).
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT
Introduced in SQL Server 2017. Used to set a minimum number of synchronous secondary replicas required to
commit before the primary commits a transaction. Guarantees that the transaction waits until the transaction
logs are updated on the minimum number of secondary replicas. The default is 0, which gives the same behavior
as SQL Server 2016. The minimum value is 0. The maximum value is the number of replicas minus 1. This option
relates to replicas in synchronous commit mode. When replicas are in synchronous commit mode, writes on the
primary replica wait until writes on the secondary synchronous replicas are committed to the replica database
transaction log. If a SQL Server that hosts a secondary synchronous replica stops responding, the SQL Server
that hosts the primary replica marks that secondary replica as NOT SYNCHRONIZED and proceeds. When the
unresponsive database comes back online, it is in a "not synced" state and the replica is marked as unhealthy
until the primary can make it synchronous again. This setting guarantees that the primary replica waits until the
minimum number of replicas have committed each transaction. If the minimum number of replicas is not
available, commits on the primary fail. For cluster type EXTERNAL, the setting is changed when the availability
group is added to a cluster resource. See High availability and data protection for availability group
configurations.
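For example, a cluster-managed availability group that requires at least one synchronous secondary to harden each transaction before the primary commits might be sketched as follows (node, endpoint, and database names are hypothetical):

```sql
-- Require one synchronized secondary before the primary commits.
CREATE AVAILABILITY GROUP [AgCluster]
    WITH (
        CLUSTER_TYPE = EXTERNAL,
        REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 1 )
    FOR DATABASE [OrdersDb]
    REPLICA ON
        N'LinuxNode1' WITH (
            ENDPOINT_URL = N'TCP://LinuxNode1:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = EXTERNAL,
            SEEDING_MODE = AUTOMATIC ),
        N'LinuxNode2' WITH (
            ENDPOINT_URL = N'TCP://LinuxNode2:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = EXTERNAL,
            SEEDING_MODE = AUTOMATIC );
```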
CLUSTER_TYPE
Introduced in SQL Server 2017. Used to identify whether the availability group is on a Windows Server Failover
Cluster (WSFC). Set to WSFC when the availability group is on a failover cluster instance on a Windows Server
failover cluster. Set to EXTERNAL when the cluster is managed by a cluster manager that is not a Windows Server
failover cluster, such as Linux Pacemaker. Set to NONE when the availability group does not use WSFC for cluster
coordination, for example, when an availability group includes Linux servers with no cluster manager.
DATABASE database_name
Specifies a list of one or more user databases on the local SQL Server instance (that is, the server instance on
which you are creating the availability group). You can specify multiple databases for an availability group, but
each database can belong to only one availability group. For information about the type of databases that an
availability group can support, see Prerequisites, Restrictions, and Recommendations for Always On Availability
Groups (SQL Server). To find out which local databases already belong to an availability group, see the replica_id
column in the sys.databases catalog view.
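For example, the following query lists the local databases that already belong to an availability group (replica_id is non-NULL for such databases):

```sql
-- Databases on this instance that belong to an availability group.
SELECT name, replica_id
FROM sys.databases
WHERE replica_id IS NOT NULL;
```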
The DATABASE clause is optional. If you omit it, the new availability group is empty.
After you have created the availability group, connect to each server instance that hosts a secondary replica and
then prepare each secondary database and join it to the availability group. For more information, see Start Data
Movement on an Always On Secondary Database (SQL Server).
NOTE
Later, you can add eligible databases on the server instance that hosts the current primary replica to an availability group.
You can also remove a database from an availability group. For more information, see ALTER AVAILABILITY GROUP
(Transact-SQL).
REPLICA ON
Specifies from one to five SQL Server instances to host availability replicas in the new availability group. Each
replica is specified by its server instance address followed by a WITH (…) clause. Minimally, you must specify your
local server instance, which becomes the initial primary replica. Optionally, you can also specify up to four
secondary replicas.
You need to join every secondary replica to the availability group. For more information, see ALTER
AVAILABILITY GROUP (Transact-SQL).
NOTE
If you specify less than four secondary replicas when you create an availability group, you can add an additional
secondary replica at any time by using the ALTER AVAILABILITY GROUP Transact-SQL statement. You can also use
this statement to remove any secondary replica from an existing availability group.
<server_instance> Specifies the address of the instance of SQL Server that is the host for a replica. The address
format depends on whether the instance is the default instance or a named instance and whether it is a standalone
instance or a failover cluster instance (FCI), as follows:
{ 'system_name[\instance_name]' | 'FCI_network_name[\instance_name]' }
The components of this address are as follows:
system_name
Is the NetBIOS name of the computer system on which the target instance of SQL Server resides. This computer
must be a WSFC node.
FCI_network_name
Is the network name that is used to access a SQL Server failover cluster. Use this if the server instance participates
as a SQL Server failover partner. Executing SELECT @@SERVERNAME on an FCI server instance returns its
entire 'FCI_network_name[\instance_name]' string (which is the full replica name).
instance_name
Is the name of an instance of SQL Server that is hosted by system_name or FCI_network_name and that has the
HADR service enabled. For a default server instance, instance_name is optional. The instance name is case
insensitive. On a stand-alone server instance, this value is the same as the value returned by executing
SELECT @@SERVERNAME.
\
Is a separator used only when specifying instance_name, in order to separate it from system_name or
FCI_network_name.
For information about the prerequisites for WSFC nodes and server instances, see Prerequisites, Restrictions, and
Recommendations for Always On Availability Groups (SQL Server).
ENDPOINT_URL = 'TCP://system-address:port'
Specifies the URL path for the database mirroring endpoint on the instance of SQL Server that hosts the
availability replica that you are defining in your current REPLICA ON clause.
The ENDPOINT_URL clause is required. For more information, see Specify the Endpoint URL When Adding or
Modifying an Availability Replica (SQL Server).
'TCP://system-address:port'
Specifies a URL for specifying an endpoint URL or read-only routing URL. The URL parameters are as follows:
system-address
Is a string, such as a system name, a fully qualified domain name, or an IP address, that unambiguously identifies
the destination computer system.
port
Is a port number that is associated with the mirroring endpoint of the partner server instance (for the
ENDPOINT_URL option) or the port number used by the Database Engine of the server instance (for the
READ_ONLY_ROUTING_URL option).
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT |
CONFIGURATION_ONLY }
SYNCHRONOUS_COMMIT or ASYNCHRONOUS_COMMIT specifies whether the primary replica has to wait
for the secondary replica to acknowledge the hardening (writing) of the log records to disk before the primary
replica can commit the transaction on a given primary database. The transactions on different databases on the
same primary replica can commit independently. SQL Server 2017 CU 1 introduces CONFIGURATION_ONLY.
CONFIGURATION_ONLY replica only applies to availability groups with CLUSTER_TYPE = EXTERNAL or
CLUSTER_TYPE = NONE.
SYNCHRONOUS_COMMIT
Specifies that the primary replica waits to commit transactions until they have been hardened on this secondary
replica (synchronous-commit mode). You can specify SYNCHRONOUS_COMMIT for up to three replicas,
including the primary replica.
ASYNCHRONOUS_COMMIT
Specifies that the primary replica commits transactions without waiting for this secondary replica to harden the
log (asynchronous-commit availability mode). You can specify ASYNCHRONOUS_COMMIT for up to five
availability replicas, including the primary replica.
CONFIGURATION_ONLY Specifies that the primary replica synchronously commits availability group
configuration metadata to the master database on this replica. The replica will not contain user data. This option:
Can be hosted on any edition of SQL Server, including Express Edition.
Requires the database mirroring endpoint of the CONFIGURATION_ONLY replica to be type WITNESS.
Cannot be altered.
Is not valid when CLUSTER_TYPE = WSFC.
For more information, see Configuration only replica.
The AVAILABILITY_MODE clause is required. For more information, see Availability Modes (Always On
Availability Groups).
FAILOVER_MODE = { AUTOMATIC | MANUAL }
Specifies the failover mode of the availability replica that you are defining.
AUTOMATIC
Enables automatic failover. This option is supported only if you also specify AVAILABILITY_MODE =
SYNCHRONOUS_COMMIT. You can specify AUTOMATIC for two availability replicas, including the
primary replica.
NOTE
SQL Server Failover Cluster Instances (FCIs) do not support automatic failover by availability groups, so any availability
replica that is hosted by an FCI can only be configured for manual failover.
MANUAL
Enables planned manual failover or forced manual failover (typically called forced failover) by the database
administrator.
The FAILOVER_MODE clause is required. The two types of manual failover, manual failover without data loss and
forced failover (with possible data loss), are supported under different conditions. For more information, see
Failover and Failover Modes (Always On Availability Groups).
SEEDING_MODE = { AUTOMATIC | MANUAL }
Specifies how the secondary replica is initially seeded.
AUTOMATIC
Enables direct seeding. This method seeds the secondary replica over the network. This method does not require
you to back up and restore a copy of the primary database on the replica.
NOTE
For direct seeding, you must allow database creation on each secondary replica by calling ALTER AVAILABILITY GROUP
with the GRANT CREATE ANY DATABASE option.
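On each secondary replica, that grant might look like the following (the availability group name is illustrative):

```sql
-- Run on each secondary replica so that automatic seeding
-- is allowed to create the database there.
ALTER AVAILABILITY GROUP [MyAg] GRANT CREATE ANY DATABASE;
```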
MANUAL
Specifies manual seeding (default). This method requires you to create a backup of the database on the primary
replica and manually restore that backup on the secondary replica.
BACKUP_PRIORITY = n
Specifies your priority for performing backups on this replica relative to the other replicas in the same availability
group. The value is an integer in the range of 0..100. These values have the following meanings:
1..100 indicates that the availability replica could be chosen for performing backups. 1 indicates the lowest
priority, and 100 indicates the highest priority. If BACKUP_PRIORITY = 1, the availability replica would be
chosen for performing backups only if no higher priority availability replicas are currently available.
0 indicates that this availability replica is not for performing backups. This is useful, for example, for a
remote availability replica to which you never want backups to fail over.
For more information, see Active Secondaries: Backup on Secondary Replicas (Always On Availability
Groups).
SECONDARY_ROLE ( … )
Specifies role-specific settings that take effect if this availability replica currently owns the secondary role
(that is, whenever it is a secondary replica). Within the parentheses, specify either or both secondary-role
options. If you specify both, use a comma-separated list.
The secondary role options are as follows:
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
Specifies whether the databases of a given availability replica that is performing the secondary role (that is,
is acting as a secondary replica) can accept connections from clients, one of:
NO
No user connections are allowed to secondary databases of this replica. They are not available for read
access. This is the default behavior.
READ_ONLY
Only read-intent connections are allowed to the databases in the secondary replica, that is, connections where the
Application Intent property is set to ReadOnly. For more information about this property, see Using Connection
String Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the secondary replica for read-only access.
For more information, see Active Secondaries: Readable Secondary Replicas (Always On Availability
Groups).
READ_ONLY_ROUTING_URL = 'TCP://system-address:port'
Specifies the URL to be used for routing read-intent connection requests to this availability replica. This is
the URL on which the SQL Server Database Engine listens. Typically, the default instance of the SQL
Server Database Engine listens on TCP port 1433.
For a named instance, you can obtain the port number by querying the port and type_desc columns of the
sys.dm_tcp_listener_states dynamic management view. The server instance uses the Transact-SQL listener
(type_desc='TSQL').
For more information about calculating the read-only routing URL for a replica, see Calculating
read_only_routing_url for Always On.
NOTE
For a named instance of SQL Server, the Transact-SQL listener should be configured to use a specific port. For more
information, see Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager).
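The port for the Transact-SQL listener can be retrieved with a query such as:

```sql
-- Find the TCP port the Transact-SQL listener is using
-- on this instance.
SELECT port, type_desc, state_desc
FROM sys.dm_tcp_listener_states
WHERE type_desc = 'TSQL';
```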
PRIMARY_ROLE ( … )
Specifies role-specific settings that take effect if this availability replica currently owns the primary role (that is,
whenever it is the primary replica). Within the parentheses, specify either or both primary-role options. If you
specify both, use a comma-separated list.
The primary role options are as follows:
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
Specifies the type of connection that the databases of a given availability replica that is performing the primary
role (that is, is acting as a primary replica) can accept from clients, one of:
READ_WRITE
Connections where the Application Intent connection property is set to ReadOnly are disallowed. When the
Application Intent property is set to ReadWrite or the Application Intent connection property is not set, the
connection is allowed. For more information about Application Intent connection property, see Using Connection
String Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the primary replica. This is the default behavior.
READ_ONLY_ROUTING_LIST = { ('<server_instance>' [ ,...n ] ) | NONE } Specifies a comma-separated list of
server instances that host availability replicas for this availability group that meet the following requirements
when running under the secondary role:
Be configured to allow all connections or read-only connections (see the ALLOW_CONNECTIONS
argument of the SECONDARY_ROLE option, above).
Have their read-only routing URL defined (see the READ_ONLY_ROUTING_URL argument of the
SECONDARY_ROLE option, above).
The READ_ONLY_ROUTING_LIST values are as follows:
<server_instance> Specifies the address of the instance of SQL Server that is the host for a replica that is a
readable secondary replica when running under the secondary role.
Use a comma-separated list to specify all the server instances that might host a readable secondary replica.
Read-only routing follows the order in which server instances are specified in the list. If you include a
replica's host server instance on the replica's read-only routing list, placing this server instance at the end of
the list is typically a good practice, so that read-intent connections go to a secondary replica, if one is
available.
Beginning with SQL Server 2016 (13.x), you can load-balance read-intent requests across readable
secondary replicas. You specify this by placing the replicas in a nested set of parentheses within the read-
only routing list. For more information and examples, see Configure load-balancing across read-only
replicas.
NONE
Specifies that when this availability replica is the primary replica, read-only routing is not supported. This is
the default behavior.
SESSION_TIMEOUT = integer
Specifies the session-timeout period in seconds. If you do not specify this option, by default, the time
period is 10 seconds. The minimum value is 5 seconds.
IMPORTANT
We recommend that you keep the time-out period at 10 seconds or greater.
For more information about the session-timeout period, see Overview of Always On Availability Groups (SQL
Server).
AVAILABILITY GROUP ON
Specifies two availability groups that constitute a distributed availability group. Each availability group is part of
its own Windows Server Failover Cluster (WSFC). When you create a distributed availability group, the
availability group on the current SQL Server Instance becomes the primary availability group and the remote
availability group becomes the secondary availability group.
You need to join the secondary availability group to the distributed availability group. For more information, see
ALTER AVAILABILITY GROUP (Transact-SQL).
<ag_name> Specifies the name of the availability group that makes up one half of the distributed availability
group.
LISTENER = 'TCP://system-address:port'
Specifies the URL path for the listener associated with the availability group.
The LISTENER clause is required.
'TCP://system-address:port'
Specifies a URL for the listener associated with the availability group. The URL parameters are as follows:
system-address
Is a string, such as a system name, a fully qualified domain name, or an IP address, that unambiguously identifies
the listener.
port
Is a port number that is associated with the mirroring endpoint of the availability group. Note that this is not the
port of the listener.
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT |
CONFIGURATION_ONLY }
Specifies whether the primary replica has to wait for the secondary availability group to acknowledge the
hardening (writing) of the log records to disk before the primary replica can commit the transaction on a given
primary database.
SYNCHRONOUS_COMMIT
Specifies that the primary replica waits to commit transactions until they have been hardened on the secondary
availability group. You can specify SYNCHRONOUS_COMMIT for up to two availability groups, including the
primary availability group.
ASYNCHRONOUS_COMMIT
Specifies that the primary replica commits transactions without waiting for this secondary availability group to
harden the log. You can specify ASYNCHRONOUS_COMMIT for up to two availability groups, including the
primary availability group.
The AVAILABILITY_MODE clause is required.
FAILOVER_MODE = { MANUAL }
Specifies the failover mode of the distributed availability group.
MANUAL
Enables planned manual failover or forced manual failover (typically called forced failover) by the database
administrator.
The FAILOVER_MODE clause is required, and the only option is MANUAL. Automatic failover to the secondary
availability group is not supported.
SEEDING_MODE = { AUTOMATIC | MANUAL }
Specifies how the secondary availability group is initially seeded.
AUTOMATIC
Enables direct seeding. This method seeds the secondary availability group over the network. This method does
not require you to back up and restore a copy of the primary database on the replicas of the secondary availability
group.
MANUAL
Specifies manual seeding (default). This method requires you to create a backup of the database on the primary
replica and manually restore that backup on the replica(s) of the secondary availability group.
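Putting the distributed availability group arguments together, a sketch with hypothetical availability group and listener names (note that the LISTENER_URL port is the mirroring endpoint port, not the listener port):

```sql
-- Join two availability groups, AG1 and AG2, into a
-- distributed availability group. Run on the primary
-- replica of AG1.
CREATE AVAILABILITY GROUP [DistributedAG]
    WITH (DISTRIBUTED)
    AVAILABILITY GROUP ON
        N'AG1' WITH (
            LISTENER_URL = N'TCP://ag1-listener.contoso.com:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC ),
        N'AG2' WITH (
            LISTENER_URL = N'TCP://ag2-listener.contoso.com:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC );
```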
LISTENER 'dns_name' ( <listener_option> ) Defines a new availability group listener for this availability group.
LISTENER is an optional argument.
IMPORTANT
Before you create your first listener, we strongly recommend that you read Create or Configure an Availability Group
Listener (SQL Server).
After you create a listener for a given availability group, we strongly recommend that you do the following:
Ask your network administrator to reserve the listener's IP address for its exclusive use.
Give the listener's DNS host name to application developers to use in connection strings when requesting client
connections to this availability group.
dns_name
Specifies the DNS host name of the availability group listener. The DNS name of the listener must be unique in
the domain and in NetBIOS.
dns_name is a string value. This name can contain only alphanumeric characters, hyphens (-), and underscores (_),
in any order. DNS host names are case insensitive. The maximum length is 63 characters.
We recommend that you specify a meaningful string. For example, for an availability group named AG1, a
meaningful DNS host name would be ag1-listener .
IMPORTANT
NetBIOS recognizes only the first 15 chars in the dns_name. If you have two WSFC clusters that are controlled by the same
Active Directory and you try to create availability group listeners in both clusters using names with more than 15 characters
and an identical 15 character prefix, an error reports that the Virtual Network Name resource could not be brought online.
For information about prefix naming rules for DNS names, see Assigning Domain Names.
IMPORTANT
We do not recommend DHCP in a production environment. If there is a downtime and the DHCP IP lease expires, extra
time is required to register the new DHCP network IP address that is associated with the listener DNS name, which
impacts client connectivity. However, DHCP is good for setting up your development and testing environment to verify
basic functions of availability groups and for integration with your applications.
For example:
WITH DHCP ON ('10.120.19.0','255.255.254.0')
four_part_ipv4_address
Specifies an IPv4 four-part address for an availability group listener. For example, 10.120.19.155 .
four_part_ipv4_mask
Specifies an IPv4 four-part mask for an availability group listener. For example, 255.255.254.0 .
ipv6_address
Specifies an IPv6 address for an availability group listener. For example, 2001::4898:23:1002:20f:1fff:feff:b3a3 .
PORT = listener_port
Specifies the port number—listener_port—to be used by an availability group listener that is specified by a WITH
IP clause. PORT is optional.
The default port number, 1433, is supported. However, if you have security concerns, we recommend using a
different port number.
For example: WITH IP ( ('2001::4898:23:1002:20f:1fff:feff:b3a3') ) , PORT = 7777
Security
Permissions
Requires membership in the sysadmin fixed server role and either CREATE AVAILABILITY GROUP server
permission, ALTER ANY AVAILABILITY GROUP permission, or CONTROL SERVER permission.
Examples
A. Configuring Backup on Secondary Replicas, Flexible Failover Policy, and Connection Access
The following example creates an availability group named MyAg for two user databases, ThisDatabase and
ThatDatabase . The availability group as a whole is configured with AUTOMATED_BACKUP_PREFERENCE =
SECONDARY, FAILURE_CONDITION_LEVEL = 3, and HEALTH_CHECK_TIMEOUT = 600000 (the
FAILURE_CONDITION_LEVEL and HEALTH_CHECK_TIMEOUT arguments are optional).
Finally, the example specifies the optional LISTENER clause to create an availability group listener for the new
availability group. A unique DNS name, MyAgListenerIvP6 , is specified for this listener. The two replicas are on
different subnets, so the listener must use static IP addresses. For each of the two availability replicas, the WITH IP
clause specifies a static IP address, 2001:4898:f0:f00f::cf3c and 2001:4898:e0:f213::4ce2 , which use the IPv6
format. This example also uses the optional PORT argument to specify port 60173 as the listener port.
CREATE AVAILABILITY GROUP MyAg
WITH (
AUTOMATED_BACKUP_PREFERENCE = SECONDARY,
FAILURE_CONDITION_LEVEL = 3,
HEALTH_CHECK_TIMEOUT = 600000
)
FOR
DATABASE ThisDatabase, ThatDatabase
REPLICA ON
'COMPUTER01' WITH
(
ENDPOINT_URL = 'TCP://COMPUTER01:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = AUTOMATIC,
BACKUP_PRIORITY = 30,
SECONDARY_ROLE (ALLOW_CONNECTIONS = NO,
READ_ONLY_ROUTING_URL = 'TCP://COMPUTER01:1433' ),
PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
READ_ONLY_ROUTING_LIST = (COMPUTER03) ),
SESSION_TIMEOUT = 10
),
'COMPUTER02' WITH
(
ENDPOINT_URL = 'TCP://COMPUTER02:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = AUTOMATIC,
BACKUP_PRIORITY = 30,
SECONDARY_ROLE (ALLOW_CONNECTIONS = NO,
READ_ONLY_ROUTING_URL = 'TCP://COMPUTER02:1433' ),
PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
READ_ONLY_ROUTING_LIST = (COMPUTER03) ),
SESSION_TIMEOUT = 10
),
'COMPUTER03' WITH
(
ENDPOINT_URL = 'TCP://COMPUTER03:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
BACKUP_PRIORITY = 90,
SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
READ_ONLY_ROUTING_URL = 'TCP://COMPUTER03:1433' ),
PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
READ_ONLY_ROUTING_LIST = NONE ),
SESSION_TIMEOUT = 10
);
GO
ALTER AVAILABILITY GROUP [MyAg]
ADD LISTENER 'MyAgListenerIvP6' ( WITH IP ( ('2001:4898:f0:f00f::cf3c'),('2001:4898:e0:f213::4ce2') ) , PORT
= 60173 );
GO
Related Tasks
Create an Availability Group (Transact-SQL )
Use the Availability Group Wizard (SQL Server Management Studio)
Use the New Availability Group Dialog Box (SQL Server Management Studio)
See Also
ALTER AVAILABILITY GROUP (Transact-SQL)
ALTER DATABASE SET HADR (Transact-SQL)
DROP AVAILABILITY GROUP (Transact-SQL)
Troubleshoot Always On Availability Groups Configuration (SQL Server)
Overview of Always On Availability Groups (SQL Server)
Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server)
CREATE BROKER PRIORITY (Transact-SQL)
5/4/2018 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Defines a priority level and the set of criteria for determining which Service Broker conversations to assign the
priority level. The priority level is assigned to any conversation endpoint that uses the same combination of
contracts and services that are specified in the conversation priority. Priorities range in value from 1 (low ) to 10
(high). The default is 5.
Transact-SQL Syntax Conventions
Syntax
CREATE BROKER PRIORITY ConversationPriorityName
FOR CONVERSATION
[ SET ( [ CONTRACT_NAME = {ContractName | ANY } ]
[ [ , ] LOCAL_SERVICE_NAME = {LocalServiceName | ANY } ]
[ [ , ] REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY } ]
[ [ , ] PRIORITY_LEVEL = {PriorityValue | DEFAULT } ]
)
]
[;]
Arguments
ConversationPriorityName
Specifies the name for this conversation priority. The name must be unique in the current database, and must
conform to the rules for Database Engine identifiers.
SET
Specifies the criteria for determining if the conversation priority applies to a conversation. If specified, SET must
contain at least one criterion: CONTRACT_NAME, LOCAL_SERVICE_NAME, REMOTE_SERVICE_NAME, or
PRIORITY_LEVEL. If SET is not specified, the defaults are set for all of the criteria.
CONTRACT_NAME = {ContractName | ANY }
Specifies the name of a contract to be used as a criterion for determining if the conversation priority applies to a
conversation. ContractName is a Database Engine identifier, and must specify the name of a contract in the
current database.
ContractName
Specifies that the conversation priority can be applied only to conversations where the BEGIN DIALOG statement
that started the conversation specified ON CONTRACT ContractName.
ANY
Specifies that the conversation priority can be applied to any conversation, regardless of which contract it uses.
The default is ANY.
LOCAL_SERVICE_NAME = {LocalServiceName | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to a
conversation endpoint.
LocalServiceName is a Database Engine identifier. It must specify the name of a service in the current database.
LocalServiceName
Specifies that the conversation priority can be applied to the following:
Any initiator conversation endpoint whose initiator service name matches LocalServiceName.
Any target conversation endpoint whose target service name matches LocalServiceName.
ANY
Specifies that the conversation priority can be applied to any conversation endpoint, regardless of the
name of the local service used by the endpoint.
The default is ANY.
REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to
a conversation endpoint.
RemoteServiceName is a literal of type nvarchar(256). Service Broker uses a byte-by-byte comparison to
match the RemoteServiceName string. The comparison is case-sensitive and does not consider the current
collation. The target service can be in the current instance of the Database Engine, or a remote instance of
the Database Engine.
'RemoteServiceName'
Specifies that the conversation priority can be applied to the following:
Any initiator conversation endpoint whose associated target service name matches RemoteServiceName.
Any target conversation endpoint whose associated initiator service name matches RemoteServiceName.
ANY
Specifies that the conversation priority can be applied to any conversation endpoint, regardless of the name
of the remote service associated with the endpoint.
The default is ANY.
PRIORITY_LEVEL = { PriorityValue | DEFAULT }
Specifies the priority level to assign to any conversation endpoint that uses the contracts and services specified in
the conversation priority. PriorityValue must be an integer literal from 1 (lowest priority) to 10 (highest
priority). The default is 5.
Remarks
Service Broker assigns priority levels to conversation endpoints. The priority levels control the priority of the
operations associated with the endpoint. Each conversation has two conversation endpoints:
The initiator conversation endpoint associates one side of the conversation with the initiator service and
initiator queue. The initiator conversation endpoint is created when the BEGIN DIALOG statement is run.
The operations associated with the initiator conversation endpoint include:
Sends from the initiator service.
Receives from the initiator queue.
Getting the next conversation group from the initiator queue.
The target conversation endpoint associates the other side of the conversation with the target service and
queue. The target conversation endpoint is created when the conversation is used to send a message to the
target queue. The operations associated with the target conversation endpoint include:
Receives from the target queue.
Sends from the target service.
Getting the next conversation group from the target queue.
Service Broker assigns conversation priority levels when conversation endpoints are created. The
conversation endpoint retains the priority level until the conversation ends. New priorities or changes to
existing priorities are not applied to existing conversations.
Service Broker assigns a conversation endpoint the priority level from the conversation priority whose
contract and services criteria best match the properties of the endpoint. The following table shows the
match precedence:
CONTRACT_NAME   LOCAL_SERVICE_NAME   REMOTE_SERVICE_NAME
Specified       Specified            Specified
Specified       Specified            ANY
Specified       ANY                  Specified
Specified       ANY                  ANY
ANY             Specified            Specified
ANY             Specified            ANY
ANY             ANY                  Specified
ANY             ANY                  ANY
Service Broker first looks for a priority whose specified contract, local service, and remote service matches those
that the operation uses. If one is not found, Service Broker looks for a priority with a contract and local service that
matches those that the operation uses, and where the remote service was specified as ANY. This continues for all
the variations that are listed in the precedence table. If no match is found, the operation is assigned the default
priority of 5.
Service Broker independently assigns a priority level to each conversation endpoint. To have Service Broker assign
priority levels to both the initiator and target conversation endpoints, you must ensure that both endpoints are
covered by conversation priorities. If the initiator and target conversation endpoints are in separate databases, you
must create conversation priorities in each database. The same priority level is usually specified for both of the
conversation endpoints for a conversation, but you can specify different priority levels.
Priority levels are always applied to operations that receive messages or conversation group identifiers from a
queue. Priority levels are also applied when transmitting messages from one instance of the Database Engine to
another.
Priority levels are not used when transmitting messages:
From a database where the HONOR_BROKER_PRIORITY database option is set to OFF. For more
information, see ALTER DATABASE SET Options (Transact-SQL ).
Between services in the same instance of the Database Engine.
All Service Broker operations in a database are assigned default priorities of 5 if no conversation priorities
have been created in the database.
Permissions
Permission for creating a conversation priority defaults to members of the db_ddladmin or db_owner fixed
database roles, and to the sysadmin fixed server role. Requires ALTER permission on the database.
Examples
A. Assigning a priority level to both directions of a conversation
These two conversation priorities ensure that all operations that use SimpleContract between TargetService and
the InitiatorAService are assigned priority level 3.
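The two statements the example describes might look like the following sketch; the priority names are assumed, while the contract and service names come from the text above. Note that the remote service name is a case-sensitive string literal, whereas the contract and local service names are identifiers.

```sql
-- Covers the initiator side: InitiatorAService conversing with TargetService.
CREATE BROKER PRIORITY InitiatorAToTargetPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = InitiatorAService,
         REMOTE_SERVICE_NAME = N'TargetService',
         PRIORITY_LEVEL = 3);

-- Covers the target side: TargetService conversing back with InitiatorAService.
CREATE BROKER PRIORITY TargetToInitiatorAPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = TargetService,
         REMOTE_SERVICE_NAME = N'InitiatorAService',
         PRIORITY_LEVEL = 3);
```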
B. Setting the priority level for all conversations that use a contract
Assigns a priority level of 7 to all operations that use a contract named SimpleContract . This assumes that there
are no other priorities that specify both SimpleContract and either a local or a remote service.
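A sketch of such a statement follows; the priority name is assumed. Because both service criteria are ANY, only the contract restricts the match:

```sql
-- ANY for both services, so every conversation on SimpleContract matches.
CREATE BROKER PRIORITY SimpleContractDefaultPriority
    FOR CONVERSATION
    SET (CONTRACT_NAME = SimpleContract,
         LOCAL_SERVICE_NAME = ANY,
         REMOTE_SERVICE_NAME = ANY,
         PRIORITY_LEVEL = 7);
```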
See Also
ALTER BROKER PRIORITY (Transact-SQL )
BEGIN DIALOG CONVERSATION (Transact-SQL )
CREATE CONTRACT (Transact-SQL )
CREATE QUEUE (Transact-SQL )
CREATE SERVICE (Transact-SQL )
DROP BROKER PRIORITY (Transact-SQL )
GET CONVERSATION GROUP (Transact-SQL )
RECEIVE (Transact-SQL )
SEND (Transact-SQL )
sys.conversation_priorities (Transact-SQL )
CREATE CERTIFICATE (Transact-SQL)
5/3/2018 • 7 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds a certificate to a database in SQL Server.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
This feature is incompatible with database export using Data Tier Application Framework (DACFx). You must
drop all certificates before exporting.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
CREATE CERTIFICATE certificate_name [ AUTHORIZATION user_name ]
    { FROM <existing_keys> | <generate_new_keys> }
    [ ACTIVE FOR BEGIN_DIALOG = { ON | OFF } ]
<existing_keys> ::=
ASSEMBLY assembly_name
| {
[ EXECUTABLE ] FILE = 'path_to_file'
[ WITH PRIVATE KEY ( <private_key_options> ) ]
}
| {
BINARY = asn_encoded_certificate
[ WITH PRIVATE KEY ( <private_key_options> ) ]
}
<generate_new_keys> ::=
[ ENCRYPTION BY PASSWORD = 'password' ]
WITH SUBJECT = 'certificate_subject_name'
[ , <date_options> [ ,...n ] ]
<private_key_options> ::=
{
FILE = 'path_to_private_key'
[ , DECRYPTION BY PASSWORD = 'password' ]
[ , ENCRYPTION BY PASSWORD = 'password' ]
}
|
{
BINARY = private_key_bits
[ , DECRYPTION BY PASSWORD = 'password' ]
[ , ENCRYPTION BY PASSWORD = 'password' ]
}
<date_options> ::=
START_DATE = 'datetime' | EXPIRY_DATE = 'datetime'
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
CREATE CERTIFICATE certificate_name
    { <generate_new_keys> | FROM <existing_keys> }
[;]
<generate_new_keys> ::=
WITH SUBJECT ='certificate_subject_name'
[ , <date_options> [ ,...n ] ]
<existing_keys> ::=
{
FILE ='path_to_file'
WITH PRIVATE KEY
(
FILE ='path_to_private_key'
, DECRYPTION BY PASSWORD ='password'
)
}
<date_options> ::=
START_DATE ='datetime' | EXPIRY_DATE ='datetime'
Arguments
certificate_name
Is the name for the certificate in the database.
AUTHORIZATION user_name
Is the name of the user that owns this certificate.
ASSEMBLY assembly_name
Specifies a signed assembly that has already been loaded into the database.
[ EXECUTABLE ] FILE ='path_to_file'
Specifies the complete path, including file name, to a DER -encoded file that contains the certificate. If the
EXECUTABLE option is used, the file is a DLL that has been signed by the certificate. path_to_file can be a local
path or a UNC path to a network location. The file is accessed in the security context of the SQL Server service
account. This account must have the required file-system permissions.
WITH PRIVATE KEY
Specifies that the private key of the certificate is loaded into SQL Server. This clause is only valid when the
certificate is being created from a file. To load the private key of an assembly, use ALTER CERTIFICATE.
FILE ='path_to_private_key'
Specifies the complete path, including file name, to the private key. path_to_private_key can be a local path or a
UNC path to a network location. The file is accessed in the security context of the SQL Server service account.
This account must have the necessary file-system permissions.
NOTE
This option is not available in a contained database.
asn_encoded_certificate
ASN encoded certificate bits specified as a binary constant.
BINARY =private_key_bits
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Private key bits specified as binary constant. These bits can be in encrypted form. If encrypted, the user must
provide a decryption password. Password policy checks are not performed on this password. The private key
bits should be in a PVK file format.
DECRYPTION BY PASSWORD ='key_password'
Specifies the password required to decrypt a private key that is retrieved from a file. This clause is optional if the
private key is protected by a null password. Saving a private key to a file without password protection is not
recommended. If a password is required but no password is specified, the statement fails.
ENCRYPTION BY PASSWORD ='password'
Specifies the password used to encrypt the private key. Use this option only if you want to encrypt the certificate
with a password. If this clause is omitted, the private key is encrypted using the database master key. password
must meet the Windows password policy requirements of the computer that is running the instance of SQL
Server. For more information, see Password Policy.
SUBJECT ='certificate_subject_name'
The term subject refers to a field in the metadata of the certificate as defined in the X.509 standard. The subject
should be no more than 64 characters long, and this limit is enforced for SQL Server on Linux. For SQL Server
on Windows, the subject can be up to 128 characters long. Subjects that exceed 128 characters are truncated
when they are stored in the catalog, but the binary large object (BLOB ) that contains the certificate retains the
full subject name.
START_DATE ='datetime'
Is the date on which the certificate becomes valid. If not specified, START_DATE is set equal to the current date.
START_DATE is in UTC time and can be specified in any format that can be converted to a date and time.
EXPIRY_DATE ='datetime'
Is the date on which the certificate expires. If not specified, EXPIRY_DATE is set to a date one year after
START_DATE. EXPIRY_DATE is in UTC time and can be specified in any format that can be converted to a date
and time. SQL Server Service Broker checks the expiration date. However, expiration is not enforced when the
certificate is used for encryption.
ACTIVE FOR BEGIN_DIALOG = { ON | OFF }
Makes the certificate available to the initiator of a Service Broker dialog conversation. The default value is ON.
Remarks
A certificate is a database-level securable that follows the X.509 standard and supports X.509 V1 fields.
CREATE CERTIFICATE can load a certificate from a file or assembly. This statement can also generate a key pair
and create a self-signed certificate.
The Private Key must be <= 2500 bytes in encrypted format. Private keys generated by SQL Server are 1024
bits long through SQL Server 2014 (12.x) and are 2048 bits long beginning with SQL Server 2016 (13.x).
Private keys imported from an external source have a minimum length of 384 bits and a maximum length of
4,096 bits. The length of an imported private key must be an integer multiple of 64 bits. Certificates used for
TDE are limited to a private key size of 3456 bits.
The entire Serial Number of the certificate is stored, but only the first 16 bytes appear in the sys.certificates
catalog view.
The entire Issuer field of the certificate is stored, but only the first 884 bytes appear in the sys.certificates catalog view.
The private key must correspond to the public key specified by certificate_name.
When you create a certificate from a container, loading the private key is optional. But when SQL Server
generates a self-signed certificate, the private key is always created. By default, the private key is encrypted
using the database master key. If the database master key does not exist and no password is specified, the
statement fails.
The ENCRYPTION BY PASSWORD option is not required when the private key is encrypted with the database
master key. Use this option only when the private key is encrypted with a password. If no password is specified,
the private key of the certificate will be encrypted using the database master key. If the master key of the
database cannot be opened, omitting this clause causes an error.
You do not have to specify a decryption password when the private key is encrypted with the database master
key.
NOTE
Built-in functions for encryption and signing do not check the expiration dates of certificates. Users of these functions
must decide when to check certificate expiration.
A binary description of a certificate can be created by using the CERTENCODED (Transact-SQL ) and
CERTPRIVATEKEY (Transact-SQL ) functions. For an example that uses CERTPRIVATEKEY and
CERTENCODED to copy a certificate to another database, see example B in the topic CERTENCODED
(Transact-SQL ).
Permissions
Requires CREATE CERTIFICATE permission on the database. Only Windows logins, SQL Server logins, and
application roles can own certificates. Groups and roles cannot own certificates.
Examples
A. Creating a self-signed certificate
The following example creates a certificate called Shipping04 . The private key of this certificate is protected
using a password.
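A statement matching that description might look like the following sketch; the password, subject, and expiry date are placeholder values:

```sql
-- Self-signed certificate; the private key is protected by the
-- placeholder password rather than the database master key.
CREATE CERTIFICATE Shipping04
    ENCRYPTION BY PASSWORD = 'pGFD4bb925DGvbd2439587y'
    WITH SUBJECT = 'Sammamish Shipping Records',
    EXPIRY_DATE = '20201031';
GO
```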
Alternatively, you can create an assembly from the dll file, and then create a certificate from the assembly.
CREATE ASSEMBLY Shipping19
FROM 'c:\Shipping\Certs\Shipping19.dll'
WITH PERMISSION_SET = SAFE;
GO
CREATE CERTIFICATE Shipping19 FROM ASSEMBLY Shipping19;
GO
See Also
ALTER CERTIFICATE (Transact-SQL )
DROP CERTIFICATE (Transact-SQL )
BACKUP CERTIFICATE (Transact-SQL )
Encryption Hierarchy
EVENTDATA (Transact-SQL )
CERTENCODED (Transact-SQL )
CERTPRIVATEKEY (Transact-SQL )
CREATE COLUMNSTORE INDEX (Transact-SQL)
5/16/2018 • 26 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Convert a rowstore table to a clustered columnstore index or create a nonclustered columnstore index. Use a
columnstore index to efficiently run real-time operational analytics on an OLTP workload or to improve data
compression and query performance for data warehousing workloads.
NOTE
Starting with SQL Server 2016 (13.x), you can create the table as a clustered columnstore index. It is no longer necessary to
first create a rowstore table and then convert it to a clustered columnstore index.
TIP
For information on index design guidelines, refer to the SQL Server Index Design Guide.
Skip to examples:
Examples for converting a rowstore table to columnstore
Examples for nonclustered columnstore indexes
Go to scenarios:
Columnstore indexes for real-time operational analytics
Columnstore indexes for data warehousing
Learn more:
Columnstore indexes guide
Columnstore indexes feature summary
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
CREATE CLUSTERED COLUMNSTORE INDEX index_name
    ON [ database_name. [ schema_name ] . | schema_name . ] table_name
    [ WITH ( <with_option> [ ,...n ] ) ]
    [ ON <on_option> ]
[ ; ]
CREATE [ NONCLUSTERED ] COLUMNSTORE INDEX index_name
    ON [ database_name. [ schema_name ] . | schema_name . ] table_name
        ( column [ ,...n ] )
    [ WHERE <filter_expression> ]
    [ WITH ( <with_option> [ ,...n ] ) ]
    [ ON <on_option> ]
[ ; ]
<with_option> ::=
DROP_EXISTING = { ON | OFF } -- default is OFF
DROP_EXISTING = { ON | OFF } -- default is OFF
| MAXDOP = max_degree_of_parallelism
| ONLINE = { ON | OFF }
| COMPRESSION_DELAY = { 0 | delay [ Minutes ] }
| DATA_COMPRESSION = { COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( { partition_number_expression | range } [ ,...n ] ) ]
<on_option>::=
partition_scheme_name ( column_name )
| filegroup_name
| "default"
<filter_expression> ::=
column_name IN ( constant [ ,...n ]
| column_name { IS | IS NOT | = | <> | != | > | >= | !> | < | <= | !< } constant
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
Arguments
CREATE CLUSTERED COLUMNSTORE INDEX
Create a clustered columnstore index in which all of the data is compressed and stored by column. The index
includes all of the columns in the table, and stores the entire table. If the existing table is a heap or clustered index,
the table is converted to a clustered columnstore index. If the table is already stored as a clustered columnstore
index, the existing index is dropped and rebuilt.
index_name
Specifies the name for the new index.
If the table already has a clustered columnstore index, you can specify the same name as the existing index, or you
can use the DROP EXISTING option to specify a new name.
ON [database_name. [schema_name ] . | schema_name . ] table_name
Specifies the one-, two-, or three-part name of the table to be stored as a clustered columnstore index. If the table
is a heap or clustered index the table is converted from rowstore to a columnstore. If the table is already a
columnstore, this statement rebuilds the clustered columnstore index.
WITH
DROP_EXISTING = [OFF ] | ON
DROP_EXISTING = ON specifies to drop the existing clustered columnstore index, and create a new columnstore
index.
The default, DROP_EXISTING = OFF, expects the index name to be the same as the existing name. An error occurs if
the specified index name already exists.
MAXDOP = max_degree_of_parallelism
Overrides the existing maximum degree of parallelism server configuration for the duration of the index operation.
Use MAXDOP to limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism values can be:
1 - Suppress parallel plan generation.
>1 - Restrict the maximum number of processors used in a parallel index operation to the specified number or
fewer based on the current system workload. For example, when MAXDOP = 4, the number of processors used
is 4 or less.
0 (default) - Use the actual number of processors or fewer based on the current system workload.
For more information, see Configure the max degree of parallelism Server Configuration Option, and
Configure Parallel Index Operations.
COMPRESSION_DELAY = 0 | delay [ Minutes ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
For a disk-based table, delay specifies the minimum number of minutes a delta rowgroup in the CLOSED state
must remain in the delta rowgroup before SQL Server can compress it into the compressed rowgroup. Since disk-
based tables don't track insert and update times on individual rows, SQL Server applies the delay to delta
rowgroups in the CLOSED state.
The default is 0 minutes.
For recommendations on when to use COMPRESSION_DELAY, see Get started with Columnstore for real-time
operational analytics.
DATA_COMPRESSION = COLUMNSTORE | COLUMNSTORE_ARCHIVE
Applies to: SQL Server 2016 (13.x) through SQL Server 2017. Specifies the data compression option for the
specified table, partition number, or range of partitions. The options are as follows:
COLUMNSTORE
COLUMNSTORE is the default and specifies to compress with the most performant columnstore compression.
This is the typical choice.
COLUMNSTORE_ARCHIVE
COLUMNSTORE_ARCHIVE further compresses the table or partition to a smaller size. Use this option for
situations such as archival that require a smaller storage size and can afford more time for storage and retrieval.
For more information about compression, see Data Compression.
ON
With the ON options you can specify options for data storage, such as a partition scheme, a specific filegroup, or
the default filegroup. If the ON option is not specified, the index uses the partition or filegroup settings of
the existing table.
partition_scheme_name ( column_name )
Specifies the partition scheme for the table. The partition scheme must already exist in the database. To create the
partition scheme, see CREATE PARTITION SCHEME.
column_name specifies the column against which a partitioned index is partitioned. This column must match the
data type, length, and precision of the argument of the partition function that partition_scheme_name is using.
filegroup_name
Specifies the filegroup for storing the clustered columnstore index. If no location is specified and the table is not
partitioned, the index uses the same filegroup as the underlying table or view. The filegroup must already exist.
"default"
To create the index on the default filegroup, use "default" or [ default ].
If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current session.
QUOTED_IDENTIFIER is ON by default. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL ).
CREATE [NONCLUSTERED ] COLUMNSTORE INDEX
Create an in-memory nonclustered columnstore index on a rowstore table stored as a heap or clustered index. The
index can have a filtered condition and does not need to include all of the columns of the underlying table. The
columnstore index requires enough space to store a copy of the data. It is updateable and is updated as the
underlying table is changed. The nonclustered columnstore index on a clustered index enables real-time analytics.
index_name
Specifies the name of the index. index_name must be unique within the table, but does not have to be unique
within the database. Index names must follow the rules of identifiers.
( column [ ,...n ] )
Specifies the columns to store. A nonclustered columnstore index is limited to 1024 columns.
Each column must be of a supported data type for columnstore indexes. See Limitations and Restrictions for a list
of the supported data types.
ON [database_name. [schema_name ] . | schema_name . ] table_name
Specifies the one-, two-, or three-part name of the table that contains the index.
WITH DROP_EXISTING = [OFF ] | ON
DROP_EXISTING = ON The existing index is dropped and rebuilt. The index name specified must be the same as
a currently existing index; however, the index definition can be modified. For example, you can specify different
columns, or index options.
DROP_EXISTING = OFF An error is displayed if the specified index name already exists. The index type cannot be
changed by using DROP_EXISTING. In backward compatible syntax, WITH DROP_EXISTING is equivalent to
WITH DROP_EXISTING = ON.
MAXDOP = max_degree_of_parallelism
Overrides the Configure the max degree of parallelism Server Configuration Option configuration option for the
duration of the index operation. Use MAXDOP to limit the number of processors used in a parallel plan execution.
The maximum is 64 processors.
max_degree_of_parallelism values can be:
1 - Suppress parallel plan generation.
>1 - Restrict the maximum number of processors used in a parallel index operation to the specified number or
fewer based on the current system workload. For example, when MAXDOP = 4, the number of processors used
is 4 or less.
0 (default) - Use the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.
NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported by
the editions of SQL Server, see Editions and Supported Features for SQL Server 2016.
Permissions
Requires ALTER permission on the table.
General Remarks
A columnstore index can be created on a temporary table. When the table is dropped or the session ends, the index
is also dropped.
Filtered Indexes
A filtered index is an optimized nonclustered index, suited for queries that select a small percentage of rows from a
table. It uses a filter predicate to index a portion of the data in the table. A well-designed filtered index can improve
query performance, reduce storage costs, and reduce maintenance costs.
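As a sketch, a filtered nonclustered columnstore index over only the open rows of a hypothetical dbo.Orders table might look like this (the table, column, and index names are illustrative):

```sql
-- Index only the "hot" rows still being modified by OLTP activity;
-- the filter predicate keeps the columnstore copy small.
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_orders_open
    ON dbo.Orders (AccountKey, OrderDate, Quantity)
    WHERE Status = 'Open';
```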
Required SET Options for Filtered Indexes
The SET options in the Required Value column are required whenever any of the following conditions occur:
Create a filtered index.
INSERT, UPDATE, DELETE, or MERGE operation modifies the data in a filtered index.
The filtered index is used by the query optimizer to produce the query plan.
SET OPTION                REQUIRED VALUE   DEFAULT SERVER VALUE   DEFAULT OLE DB AND ODBC VALUE   DEFAULT DB-LIBRARY VALUE
ANSI_NULLS                ON               ON                     ON                              OFF
ANSI_PADDING              ON               ON                     ON                              OFF
ANSI_WARNINGS*            ON               ON                     ON                              OFF
CONCAT_NULL_YIELDS_NULL   ON               ON                     ON                              OFF
QUOTED_IDENTIFIER         ON               ON                     ON                              OFF
Metadata
All of the columns in a columnstore index are stored in the metadata as included columns. The columnstore index
does not have key columns. These system views provide information about columnstore indexes.
sys.indexes (Transact-SQL )
sys.index_columns (Transact-SQL )
sys.partitions (Transact-SQL )
sys.column_store_segments (Transact-SQL )
sys.column_store_dictionaries (Transact-SQL )
sys.column_store_row_groups (Transact-SQL )
B. Convert a clustered index to a clustered columnstore index with the same name.
This example creates a table with clustered index, and then demonstrates the syntax of converting the clustered
index to a clustered columnstore index. This changes the storage for the entire table from rowstore to columnstore.
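A minimal sketch of that conversion follows; the table, column, and index names are illustrative:

```sql
CREATE TABLE dbo.MyFactTable (
    ProductKey   int NOT NULL,
    OrderDateKey int NOT NULL );

CREATE CLUSTERED INDEX cl_simple ON dbo.MyFactTable (ProductKey);

-- Reuse the existing index name; DROP_EXISTING = ON replaces the
-- rowstore clustered index with a clustered columnstore index.
CREATE CLUSTERED COLUMNSTORE INDEX cl_simple ON dbo.MyFactTable
    WITH (DROP_EXISTING = ON);
```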
--SQL Server 2012 and SQL Server 2014: you need to drop the nonclustered indexes
--in order to create the columnstore index.
4. Convert the rowstore table to a columnstore table with a clustered columnstore index.
--Option 1: Convert to columnstore and name the new clustered columnstore index MyCCI.
CREATE CLUSTERED COLUMNSTORE INDEX MyCCI ON MyFactTable;
NOTE
Beginning with SQL Server 2016, use ALTER INDEX REORGANIZE instead of rebuilding with the methods described in this
example.
--Rebuild the entire index by using ALTER INDEX and the REBUILD option.
ALTER INDEX my_CCI
ON MyFactTable
REBUILD PARTITION = ALL
WITH ( DROP_EXISTING = ON );
Load data into a staging table that does not have a columnstore index. Build a columnstore index on the
staging table. Switch the staging table into an empty partition of the main table.
Switch a partition from the table with the columnstore index into an empty staging table. If there is a
columnstore index on the staging table, disable the columnstore index. Perform any updates. Build (or
rebuild) the columnstore index. Switch the staging table back into the (now empty) partition of the main
table.
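The two workflows above can be sketched as follows; the table, index, and partition names are hypothetical, and the staging table is assumed to match the main table's schema and to sit on the partition's filegroup:

```sql
-- Load pattern: build the columnstore on the staging table, then switch in.
CREATE CLUSTERED COLUMNSTORE INDEX cci_stage ON dbo.StageFact;
ALTER TABLE dbo.StageFact SWITCH TO dbo.MainFact PARTITION 3;

-- Update pattern: switch the partition out, modify, rebuild, switch back.
ALTER TABLE dbo.MainFact SWITCH PARTITION 3 TO dbo.StageFact;
UPDATE dbo.StageFact SET Quantity = 0 WHERE ProductKey = 1;  -- example change
CREATE CLUSTERED COLUMNSTORE INDEX cci_stage ON dbo.StageFact
    WITH (DROP_EXISTING = ON);
ALTER TABLE dbo.StageFact SWITCH TO dbo.MainFact PARTITION 3;
```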
--Drop the clustered columnstore index. The table continues to be distributed, but changes to a heap.
DROP INDEX cci_xdimProduct ON xdimProduct;
CREATE COLUMN ENCRYPTION KEY (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a column encryption key (CEK) with an initial set of values, encrypted with the specified column master keys.
This is a metadata operation. A CEK can have up to two values, which allows for column master key rotation.
Creating a CEK is required before any column in the database can be encrypted using the Always Encrypted
(Database Engine) feature. CEKs can also be created by using SQL Server Management Studio. Before creating a
CEK, you must define a column master key (CMK) by using Management Studio or the CREATE COLUMN MASTER KEY statement.
Transact-SQL Syntax Conventions
Syntax
CREATE COLUMN ENCRYPTION KEY key_name
WITH VALUES
(
COLUMN_MASTER_KEY = column_master_key_name,
ALGORITHM = 'algorithm_name',
ENCRYPTED_VALUE = varbinary_literal
)
[, (
COLUMN_MASTER_KEY = column_master_key_name,
ALGORITHM = 'algorithm_name',
ENCRYPTED_VALUE = varbinary_literal
) ]
[;]
Arguments
key_name
Is the name by which the column encryption key will be known in the database.
column_master_key_name
Specifies the name of the custom column master key (CMK) used for encrypting the column encryption key
(CEK).
algorithm_name
Name of the encryption algorithm used to encrypt the value of the column encryption key. The algorithm for the
system providers must be RSA_OAEP.
varbinary_literal
The encrypted CEK value BLOB.
WARNING
Never pass plaintext CEK values in this statement. Doing so will compromise the benefit of this feature.
Remarks
The CREATE COLUMN ENCRYPTION KEY statement must include at least one VALUES clause and may have
up to two. If only one is provided, you can use the ALTER COLUMN ENCRYPTION KEY statement to add a
second value later. You can also use the ALTER COLUMN ENCRYPTION KEY statement to remove a VALUES
clause.
Typically, a column encryption key is created with just one encrypted value. When a column master key needs to
be rotated (the current column master key needs to be replaced with a new column master key), you can add a
new value of the column encryption key, encrypted with the new column master key. This ensures that client
applications can continue to access data encrypted with the column encryption key while the new column master
key is being made available to them. An Always Encrypted-enabled driver in a client application that does not yet
have access to the new master key can still use the column encryption key value encrypted with the old column
master key to access sensitive data.
The encryption algorithms that Always Encrypted supports require the plaintext value to be 256 bits.
An encrypted value should be generated using a key store provider that encapsulates the key store holding the
column master key. For more information, see Always Encrypted (client development).
Use sys.columns (Transact-SQL ), sys.column_encryption_keys (Transact-SQL ) and
sys.column_encryption_key_values (Transact-SQL ) to view information about column encryption keys.
Permissions
Requires the ALTER ANY COLUMN ENCRYPTION KEY permission.
Examples
A. Creating a column encryption key
The following example creates a column encryption key called MyCEK .
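The statement itself is not reproduced above; a sketch of what it might look like follows (the column master key name and the ENCRYPTED_VALUE are illustrative placeholders; a real encrypted value is produced client-side by a key store provider):

```sql
-- Sketch only: MyCMK must already exist (CREATE COLUMN MASTER KEY), and
-- ENCRYPTED_VALUE below is a truncated placeholder, not a usable value.
CREATE COLUMN ENCRYPTION KEY MyCEK
WITH VALUES
(
    COLUMN_MASTER_KEY = MyCMK,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x01700000016C006F00630061006C00
);
```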
See Also
ALTER COLUMN ENCRYPTION KEY (Transact-SQL )
DROP COLUMN ENCRYPTION KEY (Transact-SQL )
CREATE COLUMN MASTER KEY (Transact-SQL )
Always Encrypted (Database Engine)
sys.column_encryption_keys (Transact-SQL )
sys.column_encryption_key_values (Transact-SQL )
sys.columns (Transact-SQL )
CREATE COLUMN MASTER KEY (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a column master key metadata object in a database. A column master key metadata entry represents a
key, stored in an external key store, that is used to protect (encrypt) column encryption keys when using the
Always Encrypted (Database Engine) feature. Multiple column master keys allow for key rotation: periodically
changing the key to enhance security. You can create a column master key in a key store, and its corresponding
metadata object in the database, by using Object Explorer in SQL Server Management Studio or PowerShell.
For details, see Overview of Key Management for Always Encrypted.
Transact-SQL Syntax Conventions
Syntax
CREATE COLUMN MASTER KEY key_name
WITH (
KEY_STORE_PROVIDER_NAME = 'key_store_provider_name',
KEY_PATH = 'key_path'
)
[;]
Arguments
key_name
Is the name by which the column master key will be known in the database.
key_store_provider_name
Specifies the name of a key store provider, which is a client-side software component that encapsulates a key store
containing the column master key. An Always Encrypted-enabled client driver uses a key store provider name to
look up a key store provider in the driver's registry of key store providers. The driver uses the provider to decrypt
column encryption keys, protected by a column master key, stored in the underlying key store. A plaintext value of
the column encryption key is then used to encrypt query parameters, corresponding to encrypted database
columns, or to decrypt query results from encrypted columns.
Always Encrypted-enabled client driver libraries include key store providers for popular key stores.
The set of available providers depends on the type and the version of the client driver. Refer to the Always
Encrypted documentation for particular drivers:
Develop Applications using Always Encrypted with the .NET Framework Provider for SQL Server
The following table captures the names of the system providers:
You can implement a custom key store provider, in order to store column master keys in a store for which there is
no built-in key store provider in your Always Encrypted-enabled client driver. Note that the names of custom key
store providers cannot start with 'MSSQL_', which is a prefix reserved for Microsoft key store providers.
key_path
The path of the key in the column master key store. The key path must be valid in the context of each client
application that is expected to encrypt or decrypt data stored in a column (indirectly) protected by the referenced
column master key and the client application needs to be permitted to access the key. The format of the key path
is specific to the key store provider. The following list describes the format of key paths for particular Microsoft
system key store providers.
Provider name: MSSQL_CERTIFICATE_STORE
Key path format: CertificateStoreLocation/CertificateStoreName/CertificateThumbprint
Where:
CertificateStoreLocation
Certificate store location, which must be CurrentUser or LocalMachine. For more information, see Local
Machine and Current User Certificate Stores.
CertificateStoreName
Certificate store name, for example 'My'.
CertificateThumbprint
Certificate thumbprint.
Examples:
N'CurrentUser/My/BBF037EC4A133ADCA89FFAEC16CA5BFA8878FB94'
N'LocalMachine/My/CA5BFA8878FB94BBF037EC4A133ADCA89FFAEC16'
Remarks
Creating a column master key metadata entry is required before a column encryption key metadata entry can be
created in the database and before any column in the database can be encrypted using Always Encrypted. Note
that a column master key entry in the metadata does not contain the actual column master key, which must be
stored in an external column key store (outside of SQL Server). The key store provider name and the column
master key path in the metadata must be valid for a client application to be able to use the column master key to
decrypt a column encryption key encrypted with the column master key, and to query encrypted columns.
Permissions
Requires the ALTER ANY COLUMN MASTER KEY permission.
Examples
A. Creating a column master key
Creating a column master key metadata entry for a column master key stored in Certificate Store, for client
applications that use the MSSQL_CERTIFICATE_STORE provider to access the column master key:
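A sketch of the corresponding statement, using the certificate-store key path format and the example thumbprint shown above (the key name MyCMK is illustrative):

```sql
CREATE COLUMN MASTER KEY MyCMK
WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'CurrentUser/My/BBF037EC4A133ADCA89FFAEC16CA5BFA8878FB94'
);
```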
Creating a column master key metadata entry for a column master key that is accessed by client applications that
use the MSSQL_CNG_STORE provider:
Creating a column master key stored in the Azure Key Vault, for client applications that use the
AZURE_KEY_VAULT provider, to access the column master key.
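A sketch for the Azure Key Vault case (the vault URL, key name, and key version are illustrative placeholders):

```sql
CREATE COLUMN MASTER KEY MyCMK2
WITH (
    KEY_STORE_PROVIDER_NAME = N'AZURE_KEY_VAULT',
    KEY_PATH = N'https://myvault.vault.azure.net/keys/MyKey/4c05f1a41b12488f9cba2ea964b6a700'
);
```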
See Also
DROP COLUMN MASTER KEY (Transact-SQL )
CREATE COLUMN ENCRYPTION KEY (Transact-SQL )
sys.column_master_keys (Transact-SQL )
Always Encrypted (Database Engine)
Overview of Key Management for Always Encrypted
CREATE CONTRACT (Transact-SQL)
5/4/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new contract. A contract defines the message types that are used in a Service Broker conversation and
also determines which side of the conversation can send messages of that type. Each conversation follows a
contract. The initiating service specifies the contract for the conversation when the conversation starts. The target
service specifies the contracts for which it accepts conversations.
Transact-SQL Syntax Conventions
Syntax
CREATE CONTRACT contract_name
[ AUTHORIZATION owner_name ]
( { { message_type_name | [ DEFAULT ] }
SENT BY { INITIATOR | TARGET | ANY }
} [ ,...n] )
[ ; ]
Arguments
contract_name
Is the name of the contract to create. A new contract is created in the current database and owned by the principal
specified in the AUTHORIZATION clause. Server, database, and schema names cannot be specified. The
contract_name can be up to 128 characters.
NOTE
Do not create a contract that uses the keyword ANY for the contract_name. When you specify ANY for a contract name in
CREATE BROKER PRIORITY, the priority is considered for all contracts. It is not limited to a contract whose name is ANY.
AUTHORIZATION owner_name
Sets the owner of the contract to the specified database user or role. When the current user is dbo or sa,
owner_name can be the name of any valid user or role. Otherwise, owner_name must be the name of the current
user, the name of a user that the current user has impersonate permissions for, or the name of a role to which the
current user belongs. When this clause is omitted, the contract belongs to the current user.
message_type_name
Is the name of a message type to be included as part of the contract.
SENT BY
Specifies which endpoint can send a message of the indicated message type. Contracts document the messages
that services can use to have specific conversations. Each conversation has two endpoints: the initiator endpoint,
the service that started the conversation, and the target endpoint, the service that the initiator is contacting.
INITIATOR
Indicates that only the initiator of the conversation can send messages of the specified message type. A service
that starts a conversation is referred to as the initiator of the conversation.
TARGET
Indicates that only the target of the conversation can send messages of the specified message type. A service that
accepts a conversation that was started by another service is referred to as the target of the conversation.
ANY
Indicates that messages of this type can be sent by both the initiator and the target.
[ DEFAULT ]
Indicates that this contract supports messages of the default message type. By default, all databases contain a
message type named DEFAULT. This message type uses a validation of NONE. In the context of this clause,
DEFAULT is not a keyword, and must be delimited as an identifier. Microsoft SQL Server also provides a
DEFAULT contract which specifies the DEFAULT message type.
Remarks
The order of message types in the contract is not significant. After the target has received the first message,
Service Broker allows either side of the conversation to send any message allowed for that side of the
conversation at any time. For example, if the initiator of the conversation can send the message type
//Adventure-Works.com/Expenses/SubmitExpense, Service Broker allows the initiator to send any number
of SubmitExpense messages during the conversation.
The message types and directions in a contract cannot be changed. To change the AUTHORIZATION for a
contract, use the ALTER AUTHORIZATION statement.
A contract must allow the initiator to send a message. The CREATE CONTRACT statement fails when the contract
does not contain at least one message type that is SENT BY ANY or SENT BY INITIATOR.
Regardless of the contract, a service can always receive the message types
http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer ,
http://schemas.microsoft.com/SQL/ServiceBroker/Error , and
http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog . Service Broker uses these message types for system
messages to the application.
A contract cannot be a temporary object. Contract names starting with # are permitted, but are permanent objects.
Permissions
By default, members of the db_ddladmin or db_owner fixed database roles and the sysadmin fixed server role
can create contracts.
By default, the owner of the contract, members of the db_ddladmin or db_owner fixed database roles, and
members of the sysadmin fixed server role have REFERENCES permission on a contract.
The user executing the CREATE CONTRACT statement must have REFERENCES permission on all message
types specified.
Examples
A. Creating a contract
The following example creates an expense reimbursement contract based on three message types.
CREATE MESSAGE TYPE
[//Adventure-Works.com/Expenses/SubmitExpense]
VALIDATION = WELL_FORMED_XML ;

CREATE MESSAGE TYPE
[//Adventure-Works.com/Expenses/ExpenseApprovedOrDenied]
VALIDATION = WELL_FORMED_XML ;

CREATE MESSAGE TYPE
[//Adventure-Works.com/Expenses/ExpenseReimbursed]
VALIDATION = WELL_FORMED_XML ;
CREATE CONTRACT
[//Adventure-Works.com/Expenses/ExpenseSubmission]
( [//Adventure-Works.com/Expenses/SubmitExpense]
SENT BY INITIATOR,
[//Adventure-Works.com/Expenses/ExpenseApprovedOrDenied]
SENT BY TARGET,
[//Adventure-Works.com/Expenses/ExpenseReimbursed]
SENT BY TARGET
) ;
See Also
DROP CONTRACT (Transact-SQL )
EVENTDATA (Transact-SQL )
CREATE CREDENTIAL (Transact-SQL)
5/30/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Creates a server-level credential. A credential is a record that contains the authentication information that is
required to connect to a resource outside SQL Server. Most credentials include a Windows user and password.
For example, saving a database backup to some location might require SQL Server to provide special credentials
to access that location. For more information, see Credentials (Database Engine).
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
NOTE
To make the credential at the database-level use CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL). Use a server-level
credential when you need to use the same credential for multiple databases on the server. Use a database-scoped credential
to make the database more portable. When a database is moved to a new server, the database scoped credential will move
with it. Use database scoped credentials on SQL Database.
Syntax
CREATE CREDENTIAL credential_name
WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]
[ FOR CRYPTOGRAPHIC PROVIDER cryptographic_provider_name ]
Arguments
credential_name
Specifies the name of the credential being created. credential_name cannot start with the number sign (#).
System credentials start with ##. When using a shared access signature (SAS), this name must match the
container path, start with https, and must not contain a trailing forward slash. See example D below.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server. When the credential is used to
access the Azure Key Vault, the IDENTITY is the name of the key vault. See example C below. When the
credential is using a shared access signature (SAS ), the IDENTITY is SHARED ACCESS SIGNATURE. See
example D below.
SECRET ='secret'
Specifies the secret required for outgoing authentication.
When the credential is used to access the Azure Key Vault the SECRET argument of CREATE CREDENTIAL
requires the <Client ID> (without hyphens) and <Secret> of a Service Principal in the Azure Active Directory to
be passed together without a space between them. See example C below. When the credential is using a shared
access signature, the SECRET is the shared access signature token. See example D below. For information about
creating a stored access policy and a shared access signature on an Azure container, see Lesson 1: Create a stored
access policy and a shared access signature on an Azure container.
FOR CRYPTOGRAPHIC PROVIDER cryptographic_provider_name
Specifies the name of an Enterprise Key Management Provider (EKM ). For more information about Key
Management, see Extensible Key Management (EKM ).
Remarks
When IDENTITY is a Windows user, the secret can be the password. The secret is encrypted using the service
master key. If the service master key is regenerated, the secret is re-encrypted using the new service master key.
After creating a credential, you can map it to a SQL Server login by using CREATE LOGIN or ALTER LOGIN. A
SQL Server login can be mapped to only one credential, but a single credential can be mapped to multiple SQL
Server logins. For more information, see Credentials (Database Engine). A server-level credential can only be
mapped to a login, not to a database user.
Information about credentials is visible in the sys.credentials catalog view.
If there is no login-mapped credential for the provider, the credential mapped to the SQL Server service account is
used.
A login can have multiple credentials mapped to it as long as they are used with distinct providers. There must
be only one mapped credential per provider per login. The same credential can be mapped to other logins.
Permissions
Requires ALTER ANY CREDENTIAL permission.
Examples
A. Basic Example
The following example creates the credential called AlterEgo . The credential contains the Windows user Mary5
and a password.
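The statement might look like the following sketch (the password is a placeholder to be replaced with a strong secret):

```sql
CREATE CREDENTIAL AlterEgo
    WITH IDENTITY = 'Mary5',
    SECRET = '<EnterStrongPasswordHere>';
```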
IMPORTANT
The IDENTITY argument of CREATE CREDENTIAL requires the key vault name. The SECRET argument of CREATE
CREDENTIAL requires the <Client ID> (without hyphens) and <Secret> to be passed together without a space between
them.
In the following example, the Client ID ( EF5C8E09-4D2A-4A76-9998-D93440D8115D ) is stripped of the hyphens and
entered as the string EF5C8E094D2A4A769998D93440D8115D and the Secret is represented by the string
SECRET_DBEngine.
USE master;
CREATE CREDENTIAL Azure_EKM_TDE_cred
WITH IDENTITY = 'ContosoKeyVault',
SECRET = 'EF5C8E094D2A4A769998D93440D8115DSECRET_DBEngine'
FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov ;
The following example creates the same credential by using variables for the Client ID and Secret strings, which
are then concatenated together to form the SECRET argument. The REPLACE function is used to remove the
hyphens from the Client ID.
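A sketch of that variant, using the Client ID and Secret values above (dynamic SQL is used because CREATE CREDENTIAL does not accept variables directly; the credential name is illustrative):

```sql
DECLARE @AuthClientId uniqueidentifier = 'EF5C8E09-4D2A-4A76-9998-D93440D8115D';
DECLARE @AuthClientSecret varchar(200) = 'SECRET_DBEngine';
-- REPLACE strips the hyphens from the Client ID before concatenation.
DECLARE @pwd varchar(max) = REPLACE(CONVERT(varchar(36), @AuthClientId), '-', '')
                            + @AuthClientSecret;

EXEC ('CREATE CREDENTIAL Azure_EKM_TDE_cred2
    WITH IDENTITY = ''ContosoKeyVault'',
    SECRET = ''' + @pwd + '''
    FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov;');
```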
USE master
CREATE CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>]
-- this name must match the container path, start with https, and must not contain a trailing forward slash.
WITH IDENTITY='SHARED ACCESS SIGNATURE' -- this is a mandatory string and do not change it.
, SECRET = 'sharedaccesssignature' -- this is the shared access signature token
GO
See Also
Credentials (Database Engine)
ALTER CREDENTIAL (Transact-SQL )
DROP CREDENTIAL (Transact-SQL )
CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL )
CREATE LOGIN (Transact-SQL )
ALTER LOGIN (Transact-SQL )
sys.credentials (Transact-SQL )
Lesson 2: Create a SQL Server credential using a shared access signature
Shared Access Signatures
CREATE CRYPTOGRAPHIC PROVIDER (Transact-
SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a cryptographic provider within SQL Server from an Extensible Key Management (EKM ) provider.
Transact-SQL Syntax Conventions
Syntax
CREATE CRYPTOGRAPHIC PROVIDER provider_name
FROM FILE = path_of_DLL
Arguments
provider_name
Is the name of the Extensible Key Management provider.
path_of_DLL
Is the path of the .dll file that implements the SQL Server Extensible Key Management interface. When using the
SQL Server Connector for Microsoft Azure Key Vault the default location is 'C:\Program Files\Microsoft
SQL Server Connector for Microsoft Azure Key Vault\Microsoft.AzureKeyVaultService.EKM.dll'.
Remarks
All keys created by a provider will reference the provider by its GUID. The GUID is retained across all versions of
the DLL.
The DLL that implements the SQLEKM interface must be digitally signed by using a certificate. SQL Server will
verify the signature, including its certificate chain, which must have its root installed in the Trusted Root
Certification Authorities store on a Windows system. If the signature is not verified correctly, the CREATE
CRYPTOGRAPHIC PROVIDER statement will fail. For more information about certificates and certificate chains,
see SQL Server Certificates and Asymmetric Keys.
When an EKM provider dll does not implement all of the necessary methods, CREATE CRYPTOGRAPHIC
PROVIDER can return error 33085:
One or more methods cannot be found in cryptographic provider library '%.*ls'.
When the header file used to create the EKM provider dll is out of date, CREATE CRYPTOGRAPHIC PROVIDER
can return error 33032:
SQL Crypto API version '%02d.%02d' implemented by provider is not supported. Supported version is '%02d.%02d'.
Permissions
Requires CONTROL SERVER permission or membership in the sysadmin fixed server role.
Examples
The following example creates a cryptographic provider called SecurityProvider in SQL Server from a .dll file. The
.dll file is named c:\SecurityProvider\SecurityProvider_v1.dll and it is installed on the server. The provider's
certificate must first be installed on the server.
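Based on the description above, the statement would be along these lines:

```sql
CREATE CRYPTOGRAPHIC PROVIDER SecurityProvider
FROM FILE = 'C:\SecurityProvider\SecurityProvider_v1.dll';
```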
See Also
Extensible Key Management (EKM )
ALTER CRYPTOGRAPHIC PROVIDER (Transact-SQL )
DROP CRYPTOGRAPHIC PROVIDER (Transact-SQL )
CREATE SYMMETRIC KEY (Transact-SQL )
Extensible Key Management Using Azure Key Vault (SQL Server)
CREATE DATABASE (SQL Server Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new database and the files used to store the database, a database snapshot, or attaches a database
from the detached files of a previously created database.
Transact-SQL Syntax Conventions
Syntax
Create a database
CREATE DATABASE database_name
[ CONTAINMENT = { NONE | PARTIAL } ]
[ ON
[ PRIMARY ] <filespec> [ ,...n ]
[ , <filegroup> [ ,...n ] ]
[ LOG ON <filespec> [ ,...n ] ]
]
[ COLLATE collation_name ]
[ WITH <option> [,...n ] ]
[;]
<option> ::=
{
FILESTREAM ( <filestream_option> [,...n ] )
| DEFAULT_FULLTEXT_LANGUAGE = { lcid | language_name | language_alias }
| DEFAULT_LANGUAGE = { lcid | language_name | language_alias }
| NESTED_TRIGGERS = { OFF | ON }
| TRANSFORM_NOISE_WORDS = { OFF | ON}
| TWO_DIGIT_YEAR_CUTOFF = <two_digit_year_cutoff>
| DB_CHAINING { OFF | ON }
| TRUSTWORTHY { OFF | ON }
}
<filestream_option> ::=
{
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
| DIRECTORY_NAME = 'directory_name'
}
<filespec> ::=
{
(
NAME = logical_file_name ,
FILENAME = { 'os_file_name' | 'filestream_path' }
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | GB | TB | % ] ]
)
}
<filegroup> ::=
{
FILEGROUP filegroup_name [ [ CONTAINS FILESTREAM ] [ DEFAULT ] | CONTAINS MEMORY_OPTIMIZED_DATA ]
<filespec> [ ,...n ]
}
<service_broker_option> ::=
{
ENABLE_BROKER
| NEW_BROKER
| ERROR_BROKER_CONVERSATIONS
}
Attach a database
CREATE DATABASE database_name
ON <filespec> [ ,...n ]
FOR { { ATTACH [ WITH <attach_database_option> [ , ...n ] ] }
| ATTACH_REBUILD_LOG }
[;]
<attach_database_option> ::=
{
<service_broker_option>
| RESTRICTED_USER
| FILESTREAM ( DIRECTORY_NAME = { 'directory_name' | NULL } )
}
Arguments
database_name
Is the name of the new database. Database names must be unique within an instance of SQL Server and
comply with the rules for identifiers.
database_name can be a maximum of 128 characters, unless a logical name is not specified for the log file. If a
logical log file name is not specified, SQL Server generates the logical_file_name and the os_file_name for the
log by appending a suffix to database_name. This limits database_name to 123 characters so that the
generated logical file name is no more than 128 characters.
If data file name is not specified, SQL Server uses database_name as both the logical_file_name and as the
os_file_name. The default path is obtained from the registry. The default path can be changed by using the
Server Properties (Database Settings Page) in Management Studio. Changing the default path requires
restarting SQL Server.
CONTAINMENT = { NONE | PARTIAL }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017
Specifies the containment status of the database. NONE = non-contained database. PARTIAL = partially
contained database.
ON
Specifies that the disk files used to store the data sections of the database, data files, are explicitly defined. ON is
required when followed by a comma-separated list of <filespec> items that define the data files for the primary
filegroup. The list of files in the primary filegroup can be followed by an optional, comma-separated list of
<filegroup> items that define user filegroups and their files.
PRIMARY
Specifies that the associated <filespec> list defines the primary file. The first file specified in the <filespec>
entry in the primary filegroup becomes the primary file. A database can have only one primary file. For more
information, see Database Files and Filegroups.
If PRIMARY is not specified, the first file listed in the CREATE DATABASE statement becomes the primary file.
LOG ON
Specifies that the disk files used to store the database log, log files, are explicitly defined. LOG ON is followed
by a comma-separated list of <filespec> items that define the log files. If LOG ON is not specified, one log file is
automatically created, which has a size that is 25 percent of the sum of the sizes of all the data files for the
database, or 512 KB, whichever is larger. This file is placed in the default log-file location. For information about
this location, see View or Change the Default Locations for Data and Log Files (SQL Server Management
Studio).
LOG ON cannot be specified on a database snapshot.
COLLATE collation_name
Specifies the default collation for the database. Collation name can be either a Windows collation name or a
SQL collation name. If not specified, the database is assigned the default collation of the instance of SQL Server.
A collation name cannot be specified on a database snapshot.
A collation name cannot be specified with the FOR ATTACH or FOR ATTACH_REBUILD_LOG clauses. For
information about how to change the collation of an attached database, visit this Microsoft Web site.
For more information about the Windows and SQL collation names, see COLLATE (Transact-SQL).
NOTE
Contained databases are collated differently than non-contained databases. Please see Contained Database Collations for
more information.
WITH <option>
<filestream_options>
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the level of non-transactional FILESTREAM access to the database.
DIRECTORY_NAME = <directory_name>
Applies to: SQL Server 2012 (11.x) through SQL Server 2017
A Windows-compatible directory name. This name should be unique among all the Database_Directory
names in the SQL Server instance. Uniqueness comparison is case-insensitive, regardless of SQL Server
collation settings. This option should be set before creating a FileTable in this database.
The following options are allowable only when CONTAINMENT has been set to PARTIAL. If
CONTAINMENT is set to NONE, errors will occur.
DEFAULT_FULLTEXT_LANGUAGE = <lcid> | <language name> | <language alias>
Applies to: SQL Server 2012 (11.x) through SQL Server 2017
NESTED_TRIGGERS = { OFF | ON }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017
TRANSFORM_NOISE_WORDS = { OFF | ON }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017
DB_CHAINING { OFF | ON }
When ON is specified, the database can be the source or target of a cross-database ownership chain. The default is OFF.
IMPORTANT
The instance of SQL Server will recognize this setting when the cross db ownership chaining server option is 0
(OFF). When cross db ownership chaining is 1 (ON), all user databases can participate in cross-database
ownership chains, regardless of the value of this option. This option is set by using sp_configure.
To set this option, requires membership in the sysadmin fixed server role. The DB_CHAINING option
cannot be set on these system databases: master, model, tempdb.
TRUSTWORTHY { OFF | ON }
When ON is specified, database modules (for example, views, user-defined functions, or stored
procedures) that use an impersonation context can access resources outside the database.
When OFF, database modules in an impersonation context cannot access resources outside the database.
The default is OFF.
TRUSTWORTHY is set to OFF whenever the database is attached.
By default, all system databases except the msdb database have TRUSTWORTHY set to OFF. The value
cannot be changed for the model and tempdb databases. We recommend that you never set the
TRUSTWORTHY option to ON for the master database.
To set this option, requires membership in the sysadmin fixed server role.
FOR ATTACH [ WITH <attach_database_option> ]
Specifies that the database is created by attaching an existing set of operating system files. There must be a
<filespec> entry that specifies the primary file. The only other <filespec> entries required are those for any
files that have a different path from when the database was first created or last attached. A <filespec> entry
must be specified for these files.
FOR ATTACH requires the following:
All data files (MDF and NDF ) must be available.
If multiple log files exist, they must all be available.
If a read/write database has a single log file that is currently unavailable, and if the database was shut
down with no users or open transactions before the attach operation, FOR ATTACH automatically
rebuilds the log file and updates the primary file. In contrast, for a read-only database, the log cannot be
rebuilt because the primary file cannot be updated. Therefore, when you attach a read-only database
with a log that is unavailable, you must provide the log files, or the files in the FOR ATTACH clause.
NOTE
A database created by a more recent version of SQL Server cannot be attached in earlier versions.
In SQL Server, any full-text files that are part of the database that is being attached will be attached with the
database. To specify a new path of the full-text catalog, specify the new location without the full-text operating
system file name. For more information, see the Examples section.
Attaching a database that contains a FILESTREAM option of "Directory name", into a SQL Server instance will
prompt SQL Server to verify that the Database_Directory name is unique. If it is not, the attach operation fails
with the error, "FILESTREAM Database_Directory name <name> is not unique in this SQL Server instance". To
avoid this error, the optional parameter, directory_name, should be passed in to this operation.
FOR ATTACH cannot be specified on a database snapshot.
FOR ATTACH can specify the RESTRICTED_USER option. RESTRICTED_USER allows for only members of the
db_owner fixed database role and dbcreator and sysadmin fixed server roles to connect to the database, but
does not limit their number. Attempts by unqualified users are refused.
If the database uses Service Broker, use the WITH <service_broker_option> in your FOR ATTACH clause:
<service_broker_option>
Controls Service Broker message delivery and the Service Broker identifier for the database. Service Broker
options can only be specified when the FOR ATTACH clause is used.
ENABLE_BROKER
Specifies that Service Broker is enabled for the specified database. That is, message delivery is started, and
is_broker_enabled is set to true in the sys.databases catalog view. The database retains the existing Service
Broker identifier.
NEW_BROKER
Creates a new service_broker_guid value in both sys.databases and the restored database and ends all
conversation endpoints with clean up. The broker is enabled, but no message is sent to the remote conversation
endpoints. Any route that references the old Service Broker identifier must be re-created with the new identifier.
ERROR_BROKER_CONVERSATIONS
Ends all conversations with an error stating that the database is attached or restored. The broker is disabled
until this operation is completed and then enabled. The database retains the existing Service Broker identifier.
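For example, a Service Broker option is combined with FOR ATTACH as follows (the database name and file path here are illustrative only):

```sql
-- Attach a copied database and force a new Service Broker identifier,
-- so the copy cannot receive messages intended for the original database.
CREATE DATABASE SalesCopy
    ON (FILENAME = 'D:\SalesData\salescopy.mdf')
    FOR ATTACH WITH NEW_BROKER;
```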
When you attach a replicated database that was copied instead of being detached, consider the following:
If you attach the database to the same server instance and version as the original database, no additional
steps are required.
If you attach the database to the same server instance but with an upgraded version, you must execute
sp_vupgrade_replication to upgrade replication after the attach operation is complete.
If you attach the database to a different server instance, regardless of version, you must execute
sp_removedbreplication to remove replication after the attach operation is complete.
NOTE
Attach works with the vardecimal storage format, but the SQL Server Database Engine must be upgraded to at least
SQL Server 2005 SP2. You cannot attach a database using vardecimal storage format to an earlier version of SQL Server.
For more information about the vardecimal storage format, see Data Compression.
When a database is first attached or restored to a new instance of SQL Server, a copy of the database master
key (encrypted by the service master key) is not yet stored in the server. You must use the OPEN MASTER
KEY statement to decrypt the database master key (DMK). Once the DMK has been decrypted, you have the
option of enabling automatic decryption in the future by using the ALTER MASTER KEY REGENERATE
statement to provision the server with a copy of the DMK, encrypted with the service master key (SMK). When
a database has been upgraded from an earlier version, the DMK should be regenerated to use the newer AES
algorithm. For more information about regenerating the DMK, see ALTER MASTER KEY (Transact-SQL). The
time required to regenerate the DMK to upgrade to AES depends upon the number of objects protected by
the DMK. Regenerating the DMK to upgrade to AES is only necessary once, and has no impact on future
regenerations as part of a key rotation strategy. For information about how to upgrade a database by using
attach, see Upgrade a Database Using Detach and Attach (Transact-SQL).
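As a sketch of the sequence described above (the database name and password are placeholders), the DMK can be opened with its password and then re-encrypted so the new instance's SMK can open it automatically:

```sql
USE AttachedDb;  -- placeholder name for the newly attached database
GO
-- Decrypt the DMK by using the password it was encrypted with on the old instance.
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<old_instance_password>';
-- Re-encrypt the DMK with the service master key of the new instance,
-- enabling automatic decryption from now on.
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;
GO
```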
IMPORTANT
We recommend that you do not attach databases from unknown or untrusted sources. Such databases could contain
malicious code that might execute unintended Transact-SQL code or cause errors by modifying the schema or the physical
database structure. Before you use a database from an unknown or untrusted source, run DBCC CHECKDB on the
database on a nonproduction server, and also examine the code, such as stored procedures or other user-defined code, in
the database.
NOTE
The TRUSTWORTHY and DB_CHAINING options have no effect when attaching a database.
FOR ATTACH_REBUILD_LOG
Specifies that the database is created by attaching an existing set of operating system files. This option is limited
to read/write databases. There must be a <filespec> entry specifying the primary file. If one or more transaction
log files are missing, the log file is rebuilt. The ATTACH_REBUILD_LOG automatically creates a new, 1 MB log
file. This file is placed in the default log-file location. For information about this location, see View or Change the
Default Locations for Data and Log Files (SQL Server Management Studio).
NOTE
If the log files are available, the Database Engine uses those files instead of rebuilding the log files.
IMPORTANT
This operation breaks the log backup chain. We recommend that a full database backup be performed after the operation
is completed. For more information, see BACKUP (Transact-SQL).
Typically, FOR ATTACH_REBUILD_LOG is used when you copy a read/write database with a large log to
another server where the copy will be used mostly, or only, for read operations, and therefore requires less log
space than the original database.
FOR ATTACH_REBUILD_LOG cannot be specified on a database snapshot.
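A minimal sketch of this clause (the database name and path are illustrative only):

```sql
-- Attach a read/write database from its data file only;
-- a new 1 MB log file is created in the default log-file location.
CREATE DATABASE ReportsCopy
    ON (FILENAME = 'D:\ReportData\reports.mdf')
    FOR ATTACH_REBUILD_LOG;
```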
For more information about attaching and detaching databases, see Database Detach and Attach (SQL Server).
<filespec>
Controls the file properties.
NAME logical_file_name
Specifies the logical name for the file. NAME is required when FILENAME is specified, except when specifying
one of the FOR ATTACH clauses. A FILESTREAM filegroup cannot be named PRIMARY.
logical_file_name
Is the logical name used in SQL Server when referencing the file. Logical_file_name must be unique in the
database and comply with the rules for identifiers. The name can be a character or Unicode constant, or a
regular or delimited identifier.
FILENAME { 'os_file_name' | 'filestream_path' }
Specifies the operating system (physical) file name.
' os_file_name '
Is the path and file name used by the operating system when you create the file. The file must reside on one of
the following devices: the local server on which SQL Server is installed, a Storage Area Network (SAN), or an
iSCSI-based network. The specified path must exist before executing the CREATE DATABASE statement. For
more information, see "Database Files and Filegroups" in the Remarks section.
SIZE, MAXSIZE, and FILEGROWTH parameters cannot be set when a UNC path is specified for the file.
If the file is on a raw partition, os_file_name must specify only the drive letter of an existing raw partition. Only
one data file can be created on each raw partition.
Data files should not be put on compressed file systems unless the files are read-only secondary files, or the
database is read-only. Log files should never be put on compressed file systems.
' filestream_path '
For a FILESTREAM filegroup, FILENAME refers to a path where FILESTREAM data will be stored. The path up
to the last folder must exist, and the last folder must not exist. For example, if you specify the path
C:\MyFiles\MyFilestreamData, C:\MyFiles must exist before you run CREATE DATABASE, but the
MyFilestreamData folder must not exist.
The filegroup and file ( <filespec> ) must be created in the same statement.
The SIZE and FILEGROWTH properties do not apply to a FILESTREAM filegroup.
SIZE size
Specifies the size of the file.
SIZE cannot be specified when the os_file_name is specified as a UNC path. SIZE does not apply to a
FILESTREAM filegroup.
size
Is the initial size of the file.
When size is not supplied for the primary file, the Database Engine uses the size of the primary file in the model
database. The default size of model is 8 MB (beginning with SQL Server 2016 (13.x)) or 1 MB (for earlier
versions). When a secondary data file or log file is specified, but size is not specified for the file, the Database
Engine makes the file 8 MB (beginning with SQL Server 2016 (13.x)) or 1 MB (for earlier versions). The size
specified for the primary file must be at least as large as the primary file of the model database.
The kilobyte (KB), megabyte (MB), gigabyte (GB), or terabyte (TB) suffixes can be used. The default is MB.
Specify a whole number; do not include a decimal. Size is an integer value. For values greater than
2147483647, use larger units.
MAXSIZE max_size
Specifies the maximum size to which the file can grow. MAXSIZE cannot be specified when the os_file_name is
specified as a UNC path.
max_size
Is the maximum file size. The KB, MB, GB, and TB suffixes can be used. The default is MB. Specify a whole
number; do not include a decimal. If max_size is not specified, the file grows until the disk is full. Max_size is an
integer value. For values greater than 2147483647, use larger units.
UNLIMITED
Specifies that the file grows until the disk is full. In SQL Server, a log file specified with unlimited growth has a
maximum size of 2 TB, and a data file has a maximum size of 16 TB.
NOTE
There is no maximum size when this option is specified for a FILESTREAM container. It continues to grow until the disk is
full.
FILEGROWTH growth_increment
Specifies the automatic growth increment of the file. The FILEGROWTH setting for a file cannot exceed the
MAXSIZE setting. FILEGROWTH cannot be specified when the os_file_name is specified as a UNC path.
FILEGROWTH does not apply to a FILESTREAM filegroup.
growth_increment
Is the amount of space added to the file every time new space is required.
The value can be specified in MB, KB, GB, TB, or percent (%). If a number is specified without an MB, KB, or %
suffix, the default is MB. When % is specified, the growth increment size is the specified percentage of the size of
the file at the time the increment occurs. The size specified is rounded to the nearest 64 KB, and the minimum
value is 64 KB.
A value of 0 indicates that automatic growth is off and no additional space is allowed.
If FILEGROWTH is not specified, the default value, beginning with SQL Server 2016 (13.x), is 64 MB for both
data files and log files.
<filegroup>
Controls the filegroup properties. Filegroup cannot be specified on a database snapshot.
FILEGROUP filegroup_name
Is the logical name of the filegroup.
filegroup_name
filegroup_name must be unique in the database and cannot be the system-provided names PRIMARY and
PRIMARY_LOG. The name can be a character or Unicode constant, or a regular or delimited identifier. The
name must comply with the rules for identifiers.
CONTAINS FILESTREAM
Specifies that the filegroup stores FILESTREAM binary large objects (BLOBs) in the file system.
CONTAINS MEMORY_OPTIMIZED_DATA
Applies to: SQL Server 2014 (12.x) through SQL Server 2017
Specifies that the filegroup stores memory-optimized data in the file system. For more information, see
In-Memory OLTP (In-Memory Optimization). Only one MEMORY_OPTIMIZED_DATA filegroup is allowed per
database. For code samples that create a filegroup to store memory-optimized data, see Creating a
Memory-Optimized Table and a Natively Compiled Stored Procedure.
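A minimal sketch of a database with a memory-optimized filegroup (all names and paths are illustrative only):

```sql
CREATE DATABASE InMemDemo
ON PRIMARY
    (NAME = InMemDemo_data, FILENAME = 'D:\Data\InMemDemo_data.mdf'),
FILEGROUP InMemDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA
    -- For this filegroup, FILENAME refers to a folder, as with FILESTREAM containers.
    (NAME = InMemDemo_mod, FILENAME = 'D:\Data\InMemDemo_mod')
LOG ON
    (NAME = InMemDemo_log, FILENAME = 'D:\Data\InMemDemo_log.ldf');
```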
DEFAULT
Specifies the named filegroup is the default filegroup in the database.
database_snapshot_name
Is the name of the new database snapshot. Database snapshot names must be unique within an instance of
SQL Server and comply with the rules for identifiers. database_snapshot_name can be a maximum of 128
characters.
ON ( NAME =logical_file_name, FILENAME ='os_file_name') [ ,... n ]
For creating a database snapshot, specifies a list of files in the source database. For the snapshot to work, all the
data files must be specified individually. However, log files are not allowed for database snapshots.
FILESTREAM filegroups are not supported by database snapshots. If a FILESTREAM data file is included in a
CREATE DATABASE ON clause, the statement will fail and an error will be raised.
For descriptions of NAME and FILENAME and their values see the descriptions of the equivalent <filespec>
values.
NOTE
When you create a database snapshot, the other <filespec> options and the keyword PRIMARY are disallowed.
AS SNAPSHOT OF source_database_name
Specifies that the database being created is a database snapshot of the source database specified by
source_database_name. The snapshot and source database must be on the same instance.
For more information, see "Database Snapshots" in the Remarks section.
Remarks
The master database should be backed up whenever a user database is created, modified, or dropped.
The CREATE DATABASE statement must run in autocommit mode (the default transaction management mode)
and is not allowed in an explicit or implicit transaction.
You can use one CREATE DATABASE statement to create a database and the files that store the database. SQL
Server implements the CREATE DATABASE statement by using the following steps:
1. SQL Server uses a copy of the model database to initialize the database and its metadata.
2. A service broker GUID is assigned to the database.
3. The Database Engine then fills the rest of the database with empty pages, except for pages that have
internal data that records how the space is used in the database.
A maximum of 32,767 databases can be specified on an instance of SQL Server.
Each database has an owner that can perform special activities in the database. The owner is the user that
creates the database. The database owner can be changed by using sp_changedbowner.
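For example (the database and login names here are illustrative only):

```sql
USE MyDb;
GO
-- Transfer ownership of the current database to the login Maria.
EXEC sp_changedbowner 'Maria';
GO
```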
Some database features depend on features or capabilities present in the file system for full functionality of a
database. Some examples of features that depend on the file system feature set include:
DBCC CHECKDB
FileStream
Online backups using VSS and file snapshots
Database snapshot creation
Memory Optimized Data filegroup
Database Snapshots
You can use the CREATE DATABASE statement to create a database snapshot: a read-only, static view of the
source database. A database snapshot is transactionally consistent with the source database as it existed at the
time when the snapshot was created. A source database can have multiple snapshots.
NOTE
When you create a database snapshot, the CREATE DATABASE statement cannot reference log files, offline files, restoring
files, and defunct files.
If creating a database snapshot fails, the snapshot becomes suspect and must be deleted. For more information,
see DROP DATABASE (Transact-SQL).
Each snapshot persists until it is deleted by using DROP DATABASE.
For more information, see Database Snapshots (SQL Server).
Database Options
Several database options are automatically set whenever you create a database. For a list of these options, see
ALTER DATABASE SET Options (Transact-SQL).
Permissions
Requires CREATE DATABASE, CREATE ANY DATABASE, or ALTER ANY DATABASE permission.
To maintain control over disk use on an instance of SQL Server, permission to create databases is typically
limited to a few login accounts.
The following example provides the permission to create a database to the database user Fay.
USE master;
GO
GRANT CREATE DATABASE TO [Fay];
GO
Data and log file permissions are set whenever a database is attached, detached, backed up, or restored.
The permissions prevent the files from being accidentally tampered with if they reside in a directory that has
open permissions.
NOTE
Microsoft SQL Server 2005 Express Edition does not set data and log file permissions.
Examples
A. Creating a database without specifying files
The following example creates the database mytest and creates a corresponding primary and transaction log
file. Because the statement has no <filespec> items, the primary database file is the size of the model database
primary file. The transaction log is set to the larger of these values: 512KB or 25% the size of the primary data
file. Because MAXSIZE is not specified, the files can grow to fill all available disk space. This example also
demonstrates how to drop the database named mytest if it exists, before creating the mytest database.
USE master;
GO
IF DB_ID (N'mytest') IS NOT NULL
DROP DATABASE mytest;
GO
CREATE DATABASE mytest;
GO
-- Verify the database files and sizes
SELECT name, size, size*1.0/128 AS [Size in MBs]
FROM sys.master_files
WHERE name = N'mytest';
GO
B. Creating a database that specifies the data and transaction log files
The following example creates the database Sales. Because the keyword PRIMARY is not used, the first file
(Sales_dat) becomes the primary file. Because neither MB nor KB is specified in the SIZE parameter for the
Sales_dat file, it uses MB and is allocated in megabytes. The Sales_log file is allocated in megabytes because
the MB suffix is explicitly stated in the SIZE parameter.
USE master;
GO
CREATE DATABASE Sales
ON
( NAME = Sales_dat,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\saledat.mdf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 5 )
LOG ON
( NAME = Sales_log,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\salelog.ldf',
SIZE = 5MB,
MAXSIZE = 25MB,
FILEGROWTH = 5MB ) ;
GO
C. Creating a database by specifying multiple data and transaction log files
The following example creates the database Archive, which has three 100-MB data files and two 100-MB
transaction log files. The primary file is the first file in the list and is explicitly specified with the PRIMARY
keyword. The transaction log files are specified following the LOG ON keywords. Note the extensions used for
the files in the FILENAME option: .mdf is used for primary data files, .ndf is used for secondary data files, and
.ldf is used for transaction log files.
USE master;
GO
CREATE DATABASE Archive
ON
PRIMARY
(NAME = Arch1,
FILENAME = 'D:\SalesData\archdat1.mdf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20),
( NAME = Arch2,
FILENAME = 'D:\SalesData\archdat2.ndf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20),
( NAME = Arch3,
FILENAME = 'D:\SalesData\archdat3.ndf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20)
LOG ON
(NAME = Archlog1,
FILENAME = 'D:\SalesData\archlog1.ldf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20),
(NAME = Archlog2,
FILENAME = 'D:\SalesData\archlog2.ldf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20) ;
GO
E. Attaching a database
The following example detaches the database Archive created in the previous example, and then attaches it by
using the FOR ATTACH clause. Archive was defined to have multiple data and log files. However, because the location of
the files has not changed since they were created, only the primary file has to be specified in the FOR ATTACH
clause. Beginning with SQL Server 2005, any full-text files that are part of the database that is being attached
will be attached with the database.
USE master;
GO
sp_detach_db Archive;
GO
CREATE DATABASE Archive
ON (FILENAME = 'D:\SalesData\archdat1.mdf')
FOR ATTACH ;
GO
F. Creating a database snapshot
The following example creates the database snapshot sales_snapshot0600 on the Sales database. Because a
database snapshot is read-only, a log file cannot be specified. In conformance with the syntax, every file in the
source database is specified, and filegroups are not specified.
USE master;
GO
CREATE DATABASE sales_snapshot0600 ON
( NAME = SPri1_dat, FILENAME = 'D:\SalesData\SPri1dat_0600.ss'),
( NAME = SPri2_dat, FILENAME = 'D:\SalesData\SPri2dt_0600.ss'),
( NAME = SGrp1Fi1_dat, FILENAME = 'D:\SalesData\SG1Fi1dt_0600.ss'),
( NAME = SGrp1Fi2_dat, FILENAME = 'D:\SalesData\SG1Fi2dt_0600.ss'),
( NAME = SGrp2Fi1_dat, FILENAME = 'D:\SalesData\SG2Fi1dt_0600.ss'),
( NAME = SGrp2Fi2_dat, FILENAME = 'D:\SalesData\SG2Fi2dt_0600.ss')
AS SNAPSHOT OF Sales ;
GO
G. Creating a database and specifying a collation name and options
The following example creates the database MyOptionsTest. A collation name is specified and the
TRUSTWORTHY and DB_CHAINING options are set to ON.
USE master;
GO
IF DB_ID (N'MyOptionsTest') IS NOT NULL
DROP DATABASE MyOptionsTest;
GO
CREATE DATABASE MyOptionsTest
COLLATE French_CI_AI
WITH TRUSTWORTHY ON, DB_CHAINING ON;
GO
--Verifying collation and option settings.
SELECT name, collation_name, is_trustworthy_on, is_db_chaining_on
FROM sys.databases
WHERE name = N'MyOptionsTest';
GO
H. Attaching a full-text catalog that has been moved
The following example attaches the full-text catalog AdvWksFtCat along with the AdventureWorks2012 data
and log files. In this example, the full-text catalog is moved from its default location to a new location,
c:\myFTCatalogs. The data and log files remain in their default locations.
USE master;
GO
--Detach the AdventureWorks2012 database
sp_detach_db AdventureWorks2012;
GO
-- Physically move the full text catalog to the new location.
--Attach the AdventureWorks2012 database and specify the new location of the full-text catalog.
CREATE DATABASE AdventureWorks2012 ON
(FILENAME = 'c:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data\AdventureWorks2012_data.mdf'),
(FILENAME = 'c:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data\AdventureWorks2012_log.ldf'),
(FILENAME = 'c:\myFTCatalogs\AdvWksFtCat')
FOR ATTACH;
GO
I. Creating a database that specifies a row filegroup and two FILESTREAM filegroups
The following example creates the FileStreamDB database. The database is created with one row filegroup and
two FILESTREAM filegroups. Each filegroup contains one file:
FileStreamDB_data contains row data. It contains one file, FileStreamDB_data.mdf with the default path.
FileStreamPhotos contains FILESTREAM data. It contains two FILESTREAM data containers: FSPhotos,
located at C:\MyFSfolder\Photos, and FSPhotos2, located at D:\MyFSfolder\Photos. It is marked as the
default FILESTREAM filegroup.
FileStreamResumes contains FILESTREAM data. It contains one FILESTREAM data container, FSResumes,
located at C:\MyFSfolder\Resumes.
USE master;
GO
-- Get the SQL Server data path.
DECLARE @data_path nvarchar(256);
SET @data_path = (SELECT SUBSTRING(physical_name, 1, CHARINDEX(N'master.mdf', LOWER(physical_name)) - 1)
FROM master.sys.master_files
WHERE database_id = 1 AND file_id = 1);
See Also
ALTER DATABASE (Transact-SQL)
Database Detach and Attach (SQL Server)
DROP DATABASE (Transact-SQL)
EVENTDATA (Transact-SQL)
sp_changedbowner (Transact-SQL)
sp_detach_db (Transact-SQL)
sp_removedbreplication (Transact-SQL)
Database Snapshots (SQL Server)
Move Database Files
Databases
Binary Large Object (Blob) Data (SQL Server)
CREATE DATABASE (Azure SQL Database)
5/16/2018 • 11 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates a new database.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
CREATE DATABASE database_name [ COLLATE collation_name ]
{
(<edition_options> [, ...n])
}
<edition_options> ::=
{
[;]
To copy a database:
CREATE DATABASE database_name
AS COPY OF [source_server_name.] source_database_name
[ ( SERVICE_OBJECTIVE =
{ 'basic' | 'S0' | 'S1' | 'S2' | 'S3' | 'S4'| 'S6'| 'S7'| 'S9'| 'S12' |
| 'GP_GEN4_1' | 'GP_GEN4_2' | 'GP_GEN4_4' | 'GP_GEN4_8' | 'GP_GEN4_16' | 'GP_GEN4_24' |
| 'BC_GEN4_1' | 'BC_GEN4_2' | 'BC_GEN4_4' | 'BC_GEN4_8' | 'BC_GEN4_16' | 'BC_GEN4_24' |
| 'GP_GEN5_2' | 'GP_GEN5_4' | 'GP_GEN5_8' | 'GP_GEN5_16' | 'GP_GEN5_24' | 'GP_GEN5_32' | 'GP_GEN5_48' |
'GP_GEN5_80' |
| 'BC_GEN5_2' | 'BC_GEN5_4' | 'BC_GEN5_8' | 'BC_GEN5_16' | 'BC_GEN5_24' | 'BC_GEN5_32' | 'BC_GEN5_48' |
'BC_GEN5_80' |
| { ELASTIC_POOL(name = <elastic_pool_name>) } } )
]
[;]
Arguments
This syntax diagram demonstrates the supported arguments in Azure SQL Database.
database_name
The name of the new database. This name must be unique on the SQL server, which can host both Azure SQL
Database databases and SQL Data Warehouse databases, and comply with the SQL Server rules for identifiers.
For more information, see Identifiers.
Collation_name
Specifies the default collation for the database. Collation name can be either a Windows collation name or a SQL
collation name. If not specified, the database is assigned the default collation, which is
SQL_Latin1_General_CP1_CI_AS.
For more information about the Windows and SQL collation names, see COLLATE (Transact-SQL).
CATALOG_COLLATION
Specifies the default collation for the metadata catalog. DATABASE_DEFAULT specifies that the metadata catalog
used for system views and system tables be collated to match the default collation for the database. This is the
behavior found in SQL Server.
SQL_Latin1_General_CP1_CI_AS specifies that the metadata catalog used for system views and tables be collated
to a fixed SQL_Latin1_General_CP1_CI_AS collation. This is the default setting on Azure SQL Database if
unspecified.
EDITION
Specifies the service tier of the database. The available values are: 'basic', 'standard', 'premium',
'GeneralPurpose', and 'BusinessCritical'. Support for 'premiumrs' has been removed. For questions, use this
e-mail alias: premium-rs@microsoft.com.
When EDITION is specified but MAXSIZE is not specified, MAXSIZE is set to the most restrictive size that the
edition supports.
MAXSIZE
Specifies the maximum size of the database. MAXSIZE must be valid for the specified EDITION (service tier).
Following are the supported MAXSIZE values and defaults (D) for the service tiers.
DTU-based model
MAXSIZE BASIC S0-S2 S3-S12 P1-P6 P11-P15
100 MB √ √ √ √ √
250 MB √ √ √ √ √
500 MB √ √ √ √ √
1 GB √ √ √ √ √
2 GB √ (D) √ √ √ √
5 GB N/A √ √ √ √
10 GB N/A √ √ √ √
20 GB N/A √ √ √ √
30 GB N/A √ √ √ √
40 GB N/A √ √ √ √
50 GB N/A √ √ √ √
100 GB N/A √ √ √ √
150 GB N/A √ √ √ √
200 GB N/A √ √ √ √
* P11 and P15 allow MAXSIZE up to 4 TB with 1024 GB being the default size. P11 and P15 can use up to 4 TB of
included storage at no additional charge. In the Premium tier, MAXSIZE greater than 1 TB is currently available in
the following regions: US East2, West US, US Gov Virginia, West Europe, Germany Central, South East Asia,
Japan East, Australia East, Canada Central, and Canada East. For additional details regarding resource
limitations for the DTU-based model, see DTU-based resource limits.
The MAXSIZE value for the DTU-based model, if specified, has to be a valid value shown in the table above for
the service tier specified.
vCore-based model
General Purpose service tier - Generation 4 compute platform
Max data size (GB), by performance level: 1024, 1024, 1536, 3072, 4096, 4096, 4096, 4096
Business Critical service tier - Generation 4 compute platform (BC_GEN4_1, BC_GEN4_2, BC_GEN4_4,
BC_GEN4_8, BC_GEN4_16, and higher)
Max data size (GB), by performance level: 1024, 1024, 1024, 1024, 2048, 4096, 4096, 4096
If no MAXSIZE value is set when using the vCore model, the default is 32 GB. For additional details regarding
resource limitations for the vCore-based model, see vCore-based resource limits.
The following rules apply to MAXSIZE and EDITION arguments:
If EDITION is specified but MAXSIZE is not specified, the default value for the edition is used. For example, if
the EDITION is set to Standard, and the MAXSIZE is not specified, then the MAXSIZE is automatically set to
250 MB.
If neither MAXSIZE nor EDITION is specified, the EDITION is set to Standard (S0), and MAXSIZE is set to 250
GB.
SERVICE_OBJECTIVE
Specifies the performance level. Available values for service objective are: S0, S1, S2, S3, S4, S6, S7, S9,
S12, P1, P2, P4, P6, P11, P15, GP_GEN4_1, GP_GEN4_2, GP_GEN4_4, GP_GEN4_8, GP_GEN4_16, GP_GEN4_24,
BC_GEN4_1, BC_GEN4_2, BC_GEN4_4, BC_GEN4_8, BC_GEN4_16, BC_GEN4_24, GP_Gen5_2, GP_Gen5_4, GP_Gen5_8,
GP_Gen5_16, GP_Gen5_24, GP_Gen5_32, GP_Gen5_48, GP_Gen5_80, BC_Gen5_2, BC_Gen5_4, BC_Gen5_8, BC_Gen5_16,
BC_Gen5_24, BC_Gen5_32, BC_Gen5_48, BC_Gen5_80.
For service objective descriptions and more information about the size, editions, and the service objectives
combinations, see Azure SQL Database Service Tiers. If the specified SERVICE_OBJECTIVE is not supported by
the EDITION, you receive an error. To change the SERVICE_OBJECTIVE value from one tier to another (for
example from S1 to P1), you must also change the EDITION value. For service objective descriptions and more
information about the size, editions, and the service objectives combinations, see Azure SQL Database Service
Tiers and Performance Levels, DTU-based resource limits, and vCore-based resource limits. Support for PRS
service objectives has been removed. For questions, use this e-mail alias: premium-rs@microsoft.com.
ELASTIC_POOL (name = <elastic_pool_name>)
To create a new database in an elastic database pool, set the SERVICE_OBJECTIVE of the database to
ELASTIC_POOL and provide the name of the pool. For more information, see Create and manage a SQL
Database elastic database pool (preview).
AS COPY OF [source_server_name.]source_database_name
For copying a database to the same or a different SQL Database server.
source_server_name
The name of the SQL Database server where the source database is located. This parameter is optional when the
source database and the destination database are to be located on the same SQL Database server.
NOTE
The AS COPY OF argument does not support fully qualified unique domain names. In other words, if your
server's fully qualified domain name is serverName.database.windows.net, use only serverName during
database copy.
source_database_name
The name of the database that is to be copied.
Azure SQL Database does not support the following arguments and options when using the CREATE DATABASE
statement:
Parameters related to the physical placement of file, such as <filespec> and <filegroup>
External access options, such as DB_CHAINING and TRUSTWORTHY
Attaching a database
Service broker options, such as ENABLE_BROKER, NEW_BROKER, and
ERROR_BROKER_CONVERSATIONS
Database snapshot
For more information about the arguments and the CREATE DATABASE statement, see CREATE DATABASE.
Remarks
Databases in Azure SQL Database have several default settings that are set when the database is created. For
more information about these default settings, see the list of values in DATABASEPROPERTYEX.
MAXSIZE provides the ability to limit the size of the database. If the size of the database reaches its MAXSIZE, you
receive error code 40544. When this occurs, you cannot insert or update data, or create new objects (such as
tables, stored procedures, views, and functions). However, you can still read and delete data, truncate tables, drop
tables and indexes, and rebuild indexes. You can then update MAXSIZE to a value larger than your current
database size or delete some data to free storage space. There may be as much as a fifteen-minute delay before
you can insert new data.
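For example, the size cap can be raised later with ALTER DATABASE (the database name and new limit below are illustrative only):

```sql
-- Raise MAXSIZE after the database has hit its current limit (error 40544).
ALTER DATABASE MyDb
MODIFY ( MAXSIZE = 500 GB );
```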
IMPORTANT
The CREATE DATABASE statement must be the only statement in a Transact-SQL batch.
To change the size, edition, or service objective values later, use ALTER DATABASE (Azure SQL Database).
The CATALOG_COLLATION argument is only available during database creation.
Database Copies
Copying a database using the CREATE DATABASE statement is an asynchronous operation. Therefore, a connection to
the SQL Database server is not needed for the full duration of the copy process. The CREATE DATABASE statement
returns control to the user after the entry in sys.databases is created but before the database copy operation is
complete. In other words, the CREATE DATABASE statement returns successfully when the database copy is still in
progress.
Monitoring the copy process on a SQL Database server: Query the percentage_complete or
replication_state_desc columns in the sys.dm_database_copies view, or the state column in the sys.databases
view. The sys.dm_operation_status view can be used as well; it returns the status of database operations,
including database copy.
At the time the copy process completes successfully, the destination database is transactionally consistent with the
source database.
The following syntax and semantic rules apply to your use of the AS COPY OF argument:
The source server name and the server name for the copy target may be the same or different. When they
are the same, this parameter is optional and the server context of the current session is used by default.
The source and destination database names must be specified, unique, and comply with the SQL Server
rules for identifiers. For more information, see Identifiers.
The CREATE DATABASE statement must be executed within the context of the master database of the SQL
Database server where the new database will be created.
After the copying completes, the destination database must be managed as an independent database. You
can execute the ALTER DATABASE and DROP DATABASE statements against the new database independently of
the source database. You can also copy the new database to another new database.
The source database may continue to be accessed while the database copy is in progress.
For more information, see Create a copy of an Azure SQL database using Transact-SQL.
Permissions
To create a database, a login must be one of the following:
The server-level principal login
The Azure AD administrator for the local Azure SQL Server
A login that is a member of the dbmanager database role
Additional requirements for using CREATE DATABASE ... AS COPY OF syntax: The login executing the
statement on the local server must also be at least the db_owner on the source server. If the login is based
on SQL Server authentication, the login executing the statement on the local server must have a matching
login on the source SQL Database server, with an identical name and password.
Examples
For a quick start tutorial showing you how to connect to an Azure SQL database using SQL Server Management
Studio, see Azure SQL Database: Use SQL Server Management Studio to connect and query data.
Simple Example
A simple example for creating a database.
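For instance, the minimal form specifies only a name, and the service defaults are applied (the database name is illustrative only):

```sql
-- Creates a database with the default edition, service objective, and max size.
CREATE DATABASE TestDB1;
```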
Creating a Copy
An example creating a copy of a database.
The following example creates a copy of the db_original database, named db_copy in an elastic pool named ep1.
This is true regardless of whether db_original is in an elastic pool or a performance level for a single database. If
db_original is in an elastic pool with a different name, then db_copy is still created in ep1.
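A statement matching this description might look like the following sketch; db_original, db_copy, and ep1 are the names from the description above:

```sql
-- Sketch: copy db_original into a new database db_copy placed in elastic pool ep1.
CREATE DATABASE db_copy
    AS COPY OF db_original ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = ep1 ) );
```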
The following statement creates a database with a specified collation, maximum size, edition, and catalog collation.
CREATE DATABASE TestDB3 COLLATE Japanese_XJIS_140 (MAXSIZE = 100 MB, EDITION = 'basic')
WITH CATALOG_COLLATION = DATABASE_DEFAULT;
See also
sys.dm_database_copies (Azure SQL Database)
ALTER DATABASE (Azure SQL Database)
CREATE DATABASE (Azure SQL Data Warehouse)
5/4/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new database.
Syntax
CREATE DATABASE database_name [ COLLATE collation_name ]
(
[ MAXSIZE = {
250 | 500 | 750 | 1024 | 5120 | 10240 | 20480 | 30720
| 40960 | 51200 | 61440 | 71680 | 81920 | 92160 | 102400
| 153600 | 204800 | 245760
} GB ,
]
EDITION = 'datawarehouse',
SERVICE_OBJECTIVE = {
'DW100' | 'DW200' | 'DW300' | 'DW400' | 'DW500' | 'DW600'
| 'DW1000' | 'DW1200' | 'DW1500' | 'DW2000' | 'DW3000' | 'DW6000'
| 'DW1000c' | 'DW1500c' | 'DW2000c' | 'DW2500c' | 'DW3000c' | 'DW5000c'
| 'DW6000c' | 'DW7500c' | 'DW10000c' | 'DW15000c' | 'DW30000c'
}
)
[;]
Arguments
database_name
The name of the new database. This name must be unique on the SQL server, which can host both Azure SQL
Database databases and SQL Data Warehouse databases, and comply with the SQL Server rules for identifiers.
For more information, see Identifiers.
collation_name
Specifies the default collation for the database. Collation name can be either a Windows collation name or a SQL
collation name. If not specified, the database is assigned the default collation, which is
SQL_Latin1_General_CP1_CI_AS.
For more information about the Windows and SQL collation names, see COLLATE (Transact-SQL).
EDITION
Specifies the service tier of the database. For SQL Data Warehouse use 'datawarehouse' .
MAXSIZE
The default is 245,760 GB (240 TB).
Applies to: Optimized for Elasticity performance tier
The maximum allowable size for the database. The database cannot grow beyond MAXSIZE.
Applies to: Optimized for Compute performance tier
The maximum allowable size for rowstore data in the database. Data stored in rowstore tables, a columnstore
index's deltastore, or a nonclustered index on a clustered columnstore index cannot grow beyond MAXSIZE. Data
compressed into columnstore format does not have a size limit and is not constrained by MAXSIZE.
SERVICE_OBJECTIVE
Specifies the performance level. For more information about service objectives for SQL Data Warehouse, see
Performance Tiers.
General Remarks
Use DATABASEPROPERTYEX (Transact-SQL ) to see the database properties.
Use ALTER DATABASE (Azure SQL Data Warehouse) to change the max size, or service objective values later.
SQL Data Warehouse is set to COMPATIBILITY_LEVEL 130 and cannot be changed. For more details, see
Improved Query Performance with Compatibility Level 130 in Azure SQL Database.
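As a sketch of the DATABASEPROPERTYEX remark above, the current edition and service objective can be queried as follows (the property names are assumed from that function's documentation):

```sql
-- Sketch: inspect the edition and service objective of the current database.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;
```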
Permissions
Required permissions:
Server level principal login, created by the provisioning process, or
Member of the dbmanager database role.
Error Handling
If the size of the database reaches MAXSIZE, you will receive error code 40544. When this occurs, you cannot
insert or update data, or create new objects (such as tables, stored procedures, views, and functions). You can still
read and delete data, truncate tables, drop tables and indexes, and rebuild indexes. You can then update MAXSIZE
to a value larger than your current database size or delete some data to free storage space. There may be as much
as a fifteen-minute delay before you can insert new data.
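Putting the arguments above together, a CREATE DATABASE statement for SQL Data Warehouse might look like this sketch (the database name and values are illustrative):

```sql
-- Sketch: create a data warehouse with a 10-TB rowstore cap at performance level DW1000c.
CREATE DATABASE MyDW
( MAXSIZE = 10240 GB,
  EDITION = 'datawarehouse',
  SERVICE_OBJECTIVE = 'DW1000c'
);
```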
CREATE DATABASE (Parallel Data Warehouse)
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates a new database on a Parallel Data Warehouse appliance. Use this statement to create all files associated
with an appliance database and to set maximum size and auto-growth options for the database tables and
transaction log.
Transact-SQL Syntax Conventions (Transact-SQL )
Syntax
CREATE DATABASE database_name
WITH (
[ AUTOGROW = ON | OFF , ]
REPLICATED_SIZE = replicated_size [ GB ] ,
DISTRIBUTED_SIZE = distributed_size [ GB ] ,
LOG_SIZE = log_size [ GB ] )
[;]
Arguments
database_name
The name of the new database. For more information on permitted database names, see "Object Naming Rules"
and "Reserved Database Names" in the Parallel Data Warehouse product documentation.
AUTOGROW = ON | OFF
Specifies whether the replicated_size, distributed_size, and log_size parameters for this database will automatically
grow as needed beyond their specified sizes. Default value is OFF.
If AUTOGROW is ON, replicated_size, distributed_size, and log_size will grow as required (not in blocks of the
initial specified size) with each data insert, update, or other action that requires more storage than has already been
allocated.
If AUTOGROW is OFF, the sizes will not grow automatically. Parallel Data Warehouse will return an error when
attempting an action that requires replicated_size, distributed_size, or log_size to grow beyond their specified value.
AUTOGROW is either ON for all sizes or OFF for all sizes. For example, it is not possible to set AUTOGROW ON
for log_size, but not set it for replicated_size.
replicated_size [ GB ]
A positive number. Sets the size (in integer or decimal gigabytes) for the total space allocated to replicated tables
and corresponding data on each Compute node. For minimum and maximum replicated_size requirements, see
"Minimum and Maximum Values" in the Parallel Data Warehouse product documentation.
If AUTOGROW is ON, replicated tables will be permitted to grow beyond this limit.
If AUTOGROW is OFF, an error will be returned if a user attempts to create a new replicated table, insert data into
an existing replicated table, or update an existing replicated table in a manner that would increase the size beyond
replicated_size.
distributed_size [ GB ]
A positive number. The size, in integer or decimal gigabytes, for the total space allocated to distributed tables (and
corresponding data) across the appliance. For minimum and maximum distributed_size requirements, see
"Minimum and Maximum Values" in the Parallel Data Warehouse product documentation.
If AUTOGROW is ON, distributed tables will be permitted to grow beyond this limit.
If AUTOGROW is OFF, an error will be returned if a user attempts to create a new distributed table, insert data
into an existing distributed table, or update an existing distributed table in a manner that would increase the size
beyond distributed_size.
log_size [ GB ]
A positive number. The size (in integer or decimal gigabytes) for the transaction log across the appliance.
For minimum and maximum log_size requirements, see "Minimum and Maximum Values" in the Parallel Data
Warehouse product documentation.
If AUTOGROW is ON, the log file is permitted to grow beyond this limit. Use the DBCC SHRINKLOG (Azure SQL
Data Warehouse) statement to reduce the size of the log files to their original size.
If AUTOGROW is OFF, an error will be returned to the user for any action that would increase the log size on an
individual Compute node beyond log_size.
Permissions
Requires the CREATE ANY DATABASE permission in the master database, or membership in the sysadmin fixed
server role.
The following example provides the permission to create a database to the database user Fay.
USE master;
GO
GRANT CREATE ANY DATABASE TO [Fay];
GO
General Remarks
Databases are created with database compatibility level 120, which is the compatibility level for SQL Server 2014
(12.x). This ensures the database will be able to use all of the SQL Server 2014 (12.x) functionality that PDW uses.
Locking
Takes a shared lock on the DATABASE object.
Metadata
After this operation succeeds, an entry for this database will appear in the sys.databases (Transact-SQL) and
sys.objects (Transact-SQL) metadata views.
The following example creates the database mytest with AUTOGROW turned on, which allows the database to
grow beyond the specified size parameters.
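A sketch of such a statement follows; the sizes are illustrative, and AUTOGROW = ON lets all three allocations grow past their initial values:

```sql
-- Sketch: PDW database whose replicated, distributed, and log allocations can auto-grow.
CREATE DATABASE mytest
WITH
   ( AUTOGROW = ON,
     REPLICATED_SIZE = 25 GB,
     DISTRIBUTED_SIZE = 100 GB,
     LOG_SIZE = 25 GB );
```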
See Also
ALTER DATABASE (Parallel Data Warehouse)
DROP DATABASE (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION
(Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a database audit specification object using the SQL Server audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions
Syntax
CREATE DATABASE AUDIT SPECIFICATION audit_specification_name
{
FOR SERVER AUDIT audit_name
[ { ADD ( { <audit_action_specification> | audit_action_group_name } )
} [, ...n] ]
[ WITH ( STATE = { ON | OFF } ) ]
}
[ ; ]
<audit_action_specification>::=
{
action [ ,...n ] ON [ class :: ] securable BY principal [ ,...n ]
}
Arguments
audit_specification_name
Is the name of the audit specification.
audit_name
Is the name of the audit to which this specification is applied.
audit_action_specification
Is the specification of actions on securables by principals that should be recorded in the audit.
action
Is the name of one or more database-level auditable actions. For a list of audit actions, see SQL Server Audit
Action Groups and Actions.
audit_action_group_name
Is the name of one or more groups of database-level auditable actions. For a list of audit action groups, see SQL
Server Audit Action Groups and Actions.
class
Is the class name (if applicable) on the securable.
securable
Is the table, view, or other securable object in the database on which to apply the audit action or audit action
group. For more information, see Securables.
principal
Is the name of database principal on which to apply the audit action or audit action group. For more information,
see Principals (Database Engine).
WITH ( STATE = { ON | OFF } )
Enables or disables the audit from collecting records for this audit specification.
Remarks
Database audit specifications are non-securable objects that reside in a given database. When a database audit
specification is created, it is in a disabled state.
Permissions
Users with the ALTER ANY DATABASE AUDIT permission can create database audit specifications and bind them to
any audit.
After a database audit specification is created, it can be viewed by principals with the CONTROL SERVER ,
ALTER ANY DATABASE AUDIT permissions, or the sysadmin account.
Examples
The following example creates a server audit called Payrole_Security_Audit and then a database audit
specification called Audit_Pay_Tables that audits SELECT and INSERT statements by the dbo user, for the
HumanResources.EmployeePayHistory table in the AdventureWorks2012 database.
USE master ;
GO
-- Create the server audit.
CREATE SERVER AUDIT Payrole_Security_Audit
TO FILE ( FILEPATH =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA' ) ;
GO
-- Enable the server audit.
ALTER SERVER AUDIT Payrole_Security_Audit
WITH (STATE = ON) ;
GO
-- Move to the target database.
USE AdventureWorks2012 ;
GO
-- Create the database audit specification.
CREATE DATABASE AUDIT SPECIFICATION Audit_Pay_Tables
FOR SERVER AUDIT Payrole_Security_Audit
ADD (SELECT , INSERT
ON HumanResources.EmployeePayHistory BY dbo )
WITH (STATE = ON) ;
GO
See Also
CREATE SERVER AUDIT (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL )
DROP SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL )
DROP SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
Create a Server Audit and Server Audit Specification
CREATE DATABASE ENCRYPTION KEY (Transact-
SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an encryption key that is used for transparently encrypting a database. For more information about
transparent database encryption, see Transparent Data Encryption (TDE ).
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
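The syntax block is approximately as follows, reconstructed from the arguments described below (Encryptor_Name is a placeholder for your certificate or asymmetric key name):

```sql
-- Approximate syntax, per the arguments described below.
CREATE DATABASE ENCRYPTION KEY
   WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
   ENCRYPTION BY SERVER
       { CERTIFICATE Encryptor_Name | ASYMMETRIC KEY Encryptor_Name }
[ ; ]
```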
Arguments
WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
Specifies the encryption algorithm that is used for the encryption key.
NOTE
Beginning with SQL Server 2016, all algorithms other than AES_128, AES_192, and AES_256 are deprecated. To use older
algorithms (not recommended) you must set the database to database compatibility level 120 or lower.
Permissions
Requires CONTROL permission on the database and VIEW DEFINITION permission on the certificate or
asymmetric key that is used to encrypt the database encryption key.
Examples
For additional examples using TDE, see Transparent Data Encryption (TDE ), Enable TDE on SQL Server Using
EKM, and Extensible Key Management Using Azure Key Vault (SQL Server).
The following example creates a database encryption key by using the AES_256 algorithm, and protects the private
key with a certificate named MyServerCert .
USE AdventureWorks2012;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
GO
See Also
Transparent Data Encryption (TDE )
SQL Server Encryption
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
ALTER DATABASE SET Options (Transact-SQL )
ALTER DATABASE ENCRYPTION KEY (Transact-SQL )
DROP DATABASE ENCRYPTION KEY (Transact-SQL )
sys.dm_database_encryption_keys (Transact-SQL )
CREATE DATABASE SCOPED CREDENTIAL
(Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a database credential. A database credential is not mapped to a server login or database user. The
credential is used by the database to access the external location anytime the database is performing an
operation that requires access.
Transact-SQL Syntax Conventions
Syntax
CREATE DATABASE SCOPED CREDENTIAL credential_name
WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]
Arguments
credential_name
Specifies the name of the database scoped credential being created. credential_name cannot start with the
number sign (#). System credentials start with ##.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server. To import a file from Azure
Blob storage using a shared access signature, the identity name must be SHARED ACCESS SIGNATURE . To load data
into SQL DW, any valid value can be used for the identity. For more information about shared access signatures,
see Using Shared Access Signatures (SAS).
SECRET ='secret'
Specifies the secret required for outgoing authentication. SECRET is required to import a file from Azure Blob
storage. To load from Azure Blob storage into SQL DW, the Secret must be the Azure Storage Key.
WARNING
The SAS key value might begin with a '?' (question mark). When you use the SAS key, you must remove the leading '?'.
Otherwise, the credential will not work.
Remarks
A database scoped credential is a record that contains the authentication information that is required to connect
to a resource outside SQL Server. Most credentials include a Windows user and password.
Before creating a database scoped credential, the database must have a master key to protect the credential. For
more information, see CREATE MASTER KEY (Transact-SQL ).
When IDENTITY is a Windows user, the secret can be the password. The secret is encrypted using the service
master key. If the service master key is regenerated, the secret is re-encrypted using the new service master key.
Information about database scoped credentials is visible in the sys.database_scoped_credentials catalog view.
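For example, the following sketch lists the credentials in the current database; the column names are assumed from the sys.database_scoped_credentials catalog view:

```sql
-- Sketch: list database scoped credentials and the identities they connect as.
SELECT name, credential_identity, create_date
FROM sys.database_scoped_credentials;
```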
Here are some applications of database scoped credentials:
SQL Server uses a database scoped credential to access non-public Azure blob storage or Kerberos-
secured Hadoop clusters with PolyBase. To learn more, see CREATE EXTERNAL DATA SOURCE (Transact-
SQL ).
SQL Data Warehouse uses a database scoped credential to access non-public Azure blob storage with
PolyBase. To learn more, see CREATE EXTERNAL DATA SOURCE (Transact-SQL ).
SQL Database uses database scoped credentials for its global query feature. This is the ability to query
across multiple database shards.
SQL Database uses database scoped credentials to write extended event files to Azure blob storage.
SQL Database uses database scoped credentials for elastic pools. For more information, see Tame
explosive growth with elastic databases
BULK INSERT and OPENROWSET use database scoped credentials to access data from Azure blob
storage. For more information, see Examples of Bulk Access to Data in Azure Blob Storage.
Permissions
Requires CONTROL permission on the database.
Examples
A. Creating a database scoped credential for your application.
The following example creates the database scoped credential called AppCred . The database scoped credential
contains the Windows user Mary5 and a password.
-- Create a db master key if one does not already exist, using your own password.
CREATE MASTER KEY ENCRYPTION BY PASSWORD='<EnterStrongPasswordHere>';
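The credential itself might then be created as in this sketch; the password shown is a placeholder:

```sql
-- Sketch: credential AppCred for Windows user Mary5; the secret is a placeholder password.
CREATE DATABASE SCOPED CREDENTIAL AppCred
WITH IDENTITY = 'Mary5',
     SECRET = '<EnterStrongPasswordHere>';
GO
```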
C. Creating a database scoped credential for PolyBase Connectivity to Azure Data Lake Store.
The following example creates a database scoped credential that can be used to create an external data source,
which can be used by PolyBase in Azure SQL Data Warehouse.
Azure Data Lake Store uses an Azure Active Directory Application for Service to Service Authentication. Please
create an AAD application and document your client_id, OAuth_2.0_Token_EndPoint, and Key before you try to
create a database scoped credential.
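Following that description, the credential might be created as sketched below; the client_id, OAuth_2.0_Token_EndPoint, and Key placeholders come from your AAD application:

```sql
-- Sketch: ADLS credential; IDENTITY combines the AAD client id and OAuth 2.0 token endpoint.
CREATE DATABASE SCOPED CREDENTIAL ADLSCredential
WITH IDENTITY = '<client_id>@<OAuth_2.0_Token_EndPoint>',
     SECRET = '<key>';
```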
More information
Credentials (Database Engine)
ALTER DATABASE SCOPED CREDENTIAL (Transact-SQL )
DROP DATABASE SCOPED CREDENTIAL (Transact-SQL )
sys.database_scoped_credentials
CREATE CREDENTIAL (Transact-SQL )
sys.credentials (Transact-SQL )
CREATE DEFAULT (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an object called a default. When bound to a column or an alias data type, a default specifies a value to be
inserted into the column to which the object is bound (or into all columns, in the case of an alias data type), when
no value is explicitly supplied during an insert.
IMPORTANT
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature. Instead, use default definitions created using the DEFAULT
keyword of ALTER TABLE or CREATE TABLE.
Syntax
CREATE DEFAULT [ schema_name . ] default_name
AS constant_expression [ ; ]
Arguments
schema_name
Is the name of the schema to which the default belongs.
default_name
Is the name of the default. Default names must conform to the rules for identifiers. Specifying the default owner
name is optional.
constant_expression
Is an expression that contains only constant values (it cannot include the names of any columns or other database
objects). Any constant, built-in function, or mathematical expression can be used, except those that contain alias
data types. User-defined functions cannot be used. Enclose character and date constants in single quotation marks
('); monetary, integer, and floating-point constants do not require quotation marks. Binary data must be preceded
by 0x, and monetary data must be preceded by a dollar sign ($). The default value must be compatible with the
column data type.
Remarks
A default name can be created only in the current database. Within a database, default names must be unique by
schema. When a default is created, use sp_bindefault to bind it to a column or to an alias data type.
If the default is not compatible with the column to which it is bound, SQL Server generates an error message
when trying to insert the default value. For example, N/A cannot be used as a default for a numeric column.
If the default value is too long for the column to which it is bound, the value is truncated.
CREATE DEFAULT statements cannot be combined with other Transact-SQL statements in a single batch.
A default must be dropped before creating a new one of the same name, and the default must be unbound by
executing sp_unbindefault before it is dropped.
If a column has both a default and a rule associated with it, the default value must not violate the rule. A default
that conflicts with a rule is never inserted, and SQL Server generates an error message each time it attempts to
insert the default.
When bound to a column, a default value is inserted when:
A value is not explicitly inserted.
Either the DEFAULT VALUES or DEFAULT keywords are used with INSERT to insert default values.
If NOT NULL is specified when creating a column and a default is not created for it, an error message is
generated when a user fails to make an entry in that column. The following table illustrates the relationship
between the existence of a default and the definition of a column as NULL or NOT NULL. The entries in the
table show the result.
COLUMN DEFINITION   NO ENTRY, NO DEFAULT   NO ENTRY, DEFAULT   ENTER NULL, NO DEFAULT   ENTER NULL, DEFAULT
NOT NULL            Error                  Default value       Error                    Error
NULL                NULL                   Default value       NULL                     NULL
Permissions
To execute CREATE DEFAULT, at a minimum, a user must have CREATE DEFAULT permission in the current
database and ALTER permission on the schema in which the default is being created.
Examples
A. Creating a simple character default
The following example creates a character default called unknown .
USE AdventureWorks2012;
GO
CREATE DEFAULT phonedflt AS 'unknown';
B. Binding a default
The following example binds the default created in example A. The default takes effect only if no entry is specified
for the Phone column of the Contact table. Note that omitting any entry is different from explicitly stating NULL
in an INSERT statement.
Because a default named phonedflt does not exist, the following Transact-SQL statement fails. This example is for
illustration only.
USE AdventureWorks2012;
GO
sp_bindefault 'phonedflt', 'Person.PersonPhone.PhoneNumber';
See Also
ALTER TABLE (Transact-SQL )
CREATE RULE (Transact-SQL )
CREATE TABLE (Transact-SQL )
DROP DEFAULT (Transact-SQL )
DROP RULE (Transact-SQL )
Expressions (Transact-SQL )
INSERT (Transact-SQL )
sp_bindefault (Transact-SQL )
sp_help (Transact-SQL )
sp_helptext (Transact-SQL )
sp_rename (Transact-SQL )
sp_unbindefault (Transact-SQL )
CREATE ENDPOINT (Transact-SQL)
5/4/2018 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates endpoints and defines their properties, including the methods available to client applications. For related
permissions information, see GRANT Endpoint Permissions (Transact-SQL ).
The syntax for CREATE ENDPOINT can logically be broken into two parts:
The first part starts with AS and ends before the FOR clause.
In this part, you provide information specific to the transport protocol (TCP ) and set a listening port
number for the endpoint, as well as the method of endpoint authentication and/or a list of IP addresses (if
any) that you want to restrict from accessing the endpoint.
The second part starts with the FOR clause.
In this part, you define the payload that is supported on the endpoint. The payload can be one of several
supported types: Transact-SQL, service broker, database mirroring. In this part, you also include language-
specific information.
NOTE: Native XML Web Services (SOAP/HTTP endpoints) was removed in SQL Server 2012 (11.x).
Syntax
CREATE ENDPOINT endPointName [ AUTHORIZATION login ]
[ STATE = { STARTED | STOPPED | DISABLED } ]
AS { TCP } (
<protocol_specific_arguments>
)
FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING } (
<language_specific_arguments>
)
<FOR DATABASE_MIRRORING_language_specific_arguments> ::=
FOR DATABASE_MIRRORING (
   [ AUTHENTICATION = { WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
       | CERTIFICATE certificate_name } ]
   [ [ , ] ENCRYPTION = { DISABLED | SUPPORTED | REQUIRED }
       [ ALGORITHM { AES | RC4 | AES RC4 | RC4 AES } ] ]
   [ , ] ROLE = { WITNESS | PARTNER | ALL }
)
[ ; ]
Arguments
endPointName
Is the assigned name for the endpoint you are creating. Use when updating or deleting the endpoint.
AUTHORIZATION login
Specifies a valid SQL Server or Windows login that is assigned ownership of the newly created endpoint object. If
AUTHORIZATION is not specified, by default, the caller becomes owner of the newly created object.
To assign ownership by specifying AUTHORIZATION, the caller must have IMPERSONATE permission on the
specified login.
To reassign ownership, see ALTER ENDPOINT (Transact-SQL ).
STATE = { STARTED | STOPPED | DISABLED }
Is the state of the endpoint when it is created. If the state is not specified when the endpoint is created, STOPPED
is the default.
STARTED
Endpoint is started and is actively listening for connections.
DISABLED
Endpoint is disabled. In this state, the server listens to port requests but returns errors to clients.
STOPPED
Endpoint is stopped. In this state, the server does not listen to the endpoint port or respond to any attempted
requests to use the endpoint.
To change the state, use ALTER ENDPOINT (Transact-SQL ).
AS { TCP }
Specifies the transport protocol to use.
FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING }
Specifies the payload type.
Currently, there are no Transact-SQL language-specific arguments to pass in the <language_specific_arguments>
parameter.
TCP Protocol Option
The following arguments apply only to the TCP protocol option.
LISTENER_PORT = listenerPort
Specifies the port number that the endpoint listens on for TCP/IP connections. By convention, 4022 is used, but
any number between 1024 and 32767 is valid.
LISTENER_IP = ALL | ( 4-part-ip ) | ( "ip_address_v6" )
Specifies the IP address that the endpoint will listen on. The default is ALL. This means that the listener will accept
a connection on any valid IP address.
If you configure database mirroring with an IP address instead of a fully-qualified domain name (
ALTER DATABASE SET PARTNER = partner_IP_address or ALTER DATABASE SET WITNESS = witness_IP_address ), you have
to specify LISTENER_IP =IP_address instead of LISTENER_IP=ALL when you create mirroring endpoints.
SERVICE_BROKER and DATABASE_MIRRORING Options
The following AUTHENTICATION and ENCRYPTION arguments are common to the SERVICE_BROKER and
DATABASE_MIRRORING options.
NOTE
For options that are specific to SERVICE_BROKER, see "SERVICE_BROKER Options," later in this section. For options that are
specific to DATABASE_MIRRORING, see "DATABASE_MIRRORING Options," later in this section.
IMPORTANT
All mirroring connections on a server instance use a single database mirroring endpoint. Any attempt to create an additional
database mirroring endpoint will fail.
<authentication_options> ::=
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
Specifies that the endpoint is to connect using Windows Authentication protocol to authenticate the endpoints.
This is the default.
If you specify an authorization method (NTLM or KERBEROS ), that method is always used as the authentication
protocol. The default value, NEGOTIATE, causes the endpoint to use the Windows negotiation protocol to choose
either NTLM or Kerberos.
CERTIFICATE certificate_name
Specifies that the endpoint is to authenticate the connection using the certificate specified by certificate_name to
establish identity for authorization. The far endpoint must have a certificate with the public key matching the
private key of the specified certificate.
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name
Specifies that endpoint is to try to connect by using Windows Authentication and, if that attempt fails, to then try
using the specified certificate.
CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
Specifies that endpoint is to try to connect by using the specified certificate and, if that attempt fails, to then try
using Windows Authentication.
ENCRYPTION = { DISABLED | SUPPORTED | REQUIRED } [ALGORITHM { AES | RC4 | AES RC4 | RC4 AES } ]
Specifies whether encryption is used in the process. The default is REQUIRED.
DISABLED
Specifies that data sent over a connection is not encrypted.
SUPPORTED
Specifies that the data is encrypted only if the opposite endpoint specifies either SUPPORTED or REQUIRED.
REQUIRED
Specifies that connections to this endpoint must use encryption. Therefore, to connect to this endpoint, another
endpoint must have ENCRYPTION set to either SUPPORTED or REQUIRED.
Optionally, you can use the ALGORITHM argument to specify the form of encryption used by the endpoint, as
follows:
AES
Specifies that the endpoint must use the AES algorithm. This is the default in SQL Server 2016 (13.x) and later.
RC4
Specifies that the endpoint must use the RC4 algorithm. This is the default through SQL Server 2014 (12.x).
NOTE
The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128
when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of the AES
algorithms instead. In SQL Server 2012 (11.x) and later versions, material encrypted using RC4 or RC4_128 can be
decrypted in any compatibility level.
AES RC4
Specifies that the two endpoints will negotiate for an encryption algorithm with this endpoint giving preference to
the AES algorithm.
RC4 AES
Specifies that the two endpoints will negotiate for an encryption algorithm with this endpoint giving preference to
the RC4 algorithm.
NOTE
The RC4 algorithm is deprecated. This feature will be removed in a future version of Microsoft SQL Server. Do not use this
feature in new development work, and modify applications that currently use this feature as soon as possible. We
recommend that you use AES.
If both endpoints specify both algorithms but in different orders, the endpoint accepting the connection wins.
SERVICE_BROKER Options
The following arguments are specific to the SERVICE_BROKER option.
MESSAGE_FORWARDING = { ENABLED | DISABLED }
Determines whether messages received by this endpoint that are for services located elsewhere will be forwarded.
ENABLED
Forwards messages if a forwarding address is available.
DISABLED
Discards messages for services located elsewhere. This is the default.
MESSAGE_FORWARD_SIZE =forward_size
Specifies the maximum amount of storage in megabytes to allocate for the endpoint to use when storing
messages that are to be forwarded.
DATABASE_MIRRORING Options
The following argument is specific to the DATABASE_MIRRORING option.
ROLE = { WITNESS | PARTNER | ALL }
Specifies the database mirroring role or roles that the endpoint supports.
WITNESS
Enables the endpoint to perform in the role of a witness in the mirroring process.
NOTE
For SQL Server 2005 Express Edition, WITNESS is the only option available.
PARTNER
Enables the endpoint to perform in the role of a partner in the mirroring process.
ALL
Enables the endpoint to perform in the role of both a witness and a partner in the mirroring process.
For more information about these roles, see Database Mirroring (SQL Server).
NOTE
There is no default port for DATABASE_MIRRORING.
Remarks
ENDPOINT DDL statements cannot be executed inside a user transaction. ENDPOINT DDL statements do not
fail even if an active snapshot isolation level transaction is using the endpoint being altered.
Requests can be executed against an ENDPOINT by the following:
Members of sysadmin fixed server role
The owner of the endpoint
Users or groups that have been granted CONNECT permission on the endpoint
Permissions
Requires CREATE ENDPOINT permission, or membership in the sysadmin fixed server role. For more
information, see GRANT Endpoint Permissions (Transact-SQL ).
Example
Creating a database mirroring endpoint
The following example creates a database mirroring endpoint. The endpoint uses port number 7022, although
any available port number would work. The endpoint is configured to use Windows Authentication using only
Kerberos. The ENCRYPTION option is configured to the nondefault value of SUPPORTED to support encrypted or
unencrypted data. The endpoint is being configured to support both the partner and witness roles.
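The statement described above can be sketched as follows; the endpoint name endpoint_mirroring is illustrative:

```sql
-- Hypothetical endpoint name; port 7022 as described above.
-- Windows Authentication restricted to Kerberos, encryption SUPPORTED, both roles.
CREATE ENDPOINT endpoint_mirroring
    STATE = STARTED
    AS TCP ( LISTENER_PORT = 7022 )
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = WINDOWS KERBEROS,
        ENCRYPTION = SUPPORTED,
        ROLE = ALL
    );
```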
See also
ALTER ENDPOINT (Transact-SQL )
Choose an Encryption Algorithm
DROP ENDPOINT (Transact-SQL )
EVENTDATA (Transact-SQL )
CREATE EVENT NOTIFICATION (Transact-SQL)
5/3/2018 • 6 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an object that sends information about a database or server event to a service broker service. Event
notifications are created only by using Transact-SQL statements.
Transact-SQL Syntax Conventions
Syntax
CREATE EVENT NOTIFICATION event_notification_name
ON { SERVER | DATABASE | QUEUE queue_name }
[ WITH FAN_IN ]
FOR { event_type | event_group } [ ,...n ]
TO SERVICE 'broker_service' , { 'broker_instance_specifier' | 'current database' }
[ ; ]
Arguments
event_notification_name
Is the name of the event notification. An event notification name must comply with the rules for identifiers and
must be unique within the scope in which it is created: SERVER, DATABASE, or object_name.
SERVER
Applies the scope of the event notification to the current instance of SQL Server. If specified, the notification fires
whenever the specified event in the FOR clause occurs anywhere in the instance of SQL Server.
NOTE
This option is not available in a contained database.
DATABASE
Applies the scope of the event notification to the current database. If specified, the notification fires whenever the
specified event in the FOR clause occurs in the current database.
QUEUE
Applies the scope of the notification to a specific queue in the current database. QUEUE can be specified only if
FOR QUEUE_ACTIVATION or FOR BROKER_QUEUE_DISABLED is also specified.
queue_name
Is the name of the queue to which the event notification applies. queue_name can be specified only if QUEUE is
specified.
WITH FAN_IN
Instructs SQL Server to send only one message per event to any specified service for all event notifications that:
Are created on the same event.
Are created by the same principal (as identified by the same SID ).
Specify the same service and broker_instance_specifier.
Specify WITH FAN_IN.
For example, three event notifications are created. All event notifications specify FOR ALTER_TABLE, WITH
FAN_IN, the same TO SERVICE clause, and are created by the same SID. When an ALTER TABLE statement
is run, the messages that are created by these three event notifications are merged into one. Therefore, the
target service receives only one message of the event.
event_type
Is the name of an event type that causes the event notification to execute. event_type can be a Transact-SQL
DDL event type, a SQL Trace event type, or a Service Broker event type. For a list of qualifying Transact-SQL
DDL event types, see DDL Events. Service Broker event types are QUEUE_ACTIVATION and
BROKER_QUEUE_DISABLED. For more information, see Event Notifications.
event_group
Is the name of a predefined group of Transact-SQL or SQL Trace event types. An event notification can fire
after execution of any event that belongs to an event group. For a list of DDL event groups, the Transact-
SQL events they cover, and the scope at which they can be defined, see DDL Event Groups.
event_group also acts as a macro, when the CREATE EVENT NOTIFICATION statement finishes, by adding
the event types it covers to the sys.events catalog view.
' broker_service '
Specifies the target service that receives the event instance data. SQL Server opens one or more
conversations to the target service for the event notification. This service must honor the same SQL Server
Events message type and contract that is used to send the message.
The conversations remain open until the event notification is dropped. Certain errors could cause the
conversations to close earlier. Ending some or all conversations explicitly might prevent the target service
from receiving more messages.
{ 'broker_instance_specifier' | 'current database' }
Specifies a service broker instance against which broker_service is resolved. The value for a specific service
broker can be acquired by querying the service_broker_guid column of the sys.databases catalog view.
Use 'current database' to specify the service broker instance in the current database. 'current database'
is a case-insensitive string literal.
NOTE
This option is not available in a contained database.
Remarks
Service Broker includes a message type and contract specifically for event notifications. Therefore, a Service Broker
initiating service does not have to be created because one already exists that specifies the following contract name:
http://schemas.microsoft.com/SQL/Notifications/PostEventNotification
The target service that receives event notifications must honor this preexisting contract.
IMPORTANT
Service Broker dialog security should be configured for event notifications that send messages to a service broker on a
remote server. Dialog security must be configured manually according to the full security model. For more information, see
Configure Dialog Security for Event Notifications.
If an event transaction that activates a notification is rolled back, the sending of the event notification is also rolled
back. Event notifications do not fire by an action defined in a trigger when the transaction is committed or rolled
back inside the trigger. Because trace events are not bound by transactions, event notifications based on trace
events are sent regardless of whether the transaction that activates them is rolled back.
If the conversation between the server and the target service is broken after an event notification fires, an error is
reported and the event notification is dropped.
The event transaction that originally started the notification is not affected by the success or failure of the sending
of the event notification.
Any failure to send an event notification is logged.
Permissions
To create an event notification that is scoped to the database (ON DATABASE ), requires CREATE DATABASE DDL
EVENT NOTIFICATION permission in the current database.
To create an event notification on a DDL statement that is scoped to the server (ON SERVER ), requires CREATE
DDL EVENT NOTIFICATION permission in the server.
To create an event notification on a trace event, requires CREATE TRACE EVENT NOTIFICATION permission in
the server.
To create an event notification that is scoped to a queue, requires ALTER permission on the queue.
Examples
NOTE
In Examples A and B below, the GUID in the TO SERVICE 'NotifyService' clause ('8140a771-3c4b-4479-8ac0-81008ab17984') is
specific to the computer on which the example was set up. For that instance, that was the GUID for the
AdventureWorks2012 database.
To copy and run these examples, you need to replace this GUID with one from your computer and SQL Server instance. As
explained in the Arguments section above, you can acquire the 'broker_instance_specifier' by querying the
service_broker_guid column of the sys.databases catalog view.
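A sketch of such an example follows. The queue, service, route, and notification names are illustrative, and the GUID must be replaced as noted above:

```sql
-- Create a queue to receive the event notification messages.
CREATE QUEUE NotifyQueue;
GO
-- Create a service on the queue that references the event notifications contract.
CREATE SERVICE NotifyService
    ON QUEUE NotifyQueue
    ( [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification] );
GO
-- Create a route defining the address to which Service Broker sends messages.
CREATE ROUTE NotifyRoute
    WITH SERVICE_NAME = 'NotifyService',
    ADDRESS = 'LOCAL';
GO
-- Create a server-scoped event notification that fires on ALTER TABLE.
CREATE EVENT NOTIFICATION log_ddl1
    ON SERVER
    FOR ALTER_TABLE
    TO SERVICE 'NotifyService',
       '8140a771-3c4b-4479-8ac0-81008ab17984';
```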
See Also
Event Notifications
DROP EVENT NOTIFICATION (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.event_notifications (Transact-SQL )
sys.server_event_notifications (Transact-SQL )
sys.events (Transact-SQL )
sys.server_events (Transact-SQL )
CREATE EVENT SESSION (Transact-SQL)
5/3/2018 • 7 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an Extended Events session that identifies the source of the events, the event session targets, and the event
session options.
Transact-SQL Syntax Conventions.
Syntax
CREATE EVENT SESSION event_session_name
ON SERVER
{
<event_definition> [ ,...n]
[ <event_target_definition> [ ,...n] ]
[ WITH ( <event_session_options> [ ,...n] ) ]
}
;
<event_definition>::=
{
ADD EVENT [event_module_guid].event_package_name.event_name
[ ( {
[ SET { event_customizable_attribute = <value> [ ,...n] } ]
[ ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n] } ) ]
[ WHERE <predicate_expression> ]
} ) ]
}
<predicate_expression> ::=
{
[ NOT ] <predicate_factor> | {( <predicate_expression> ) }
[ { AND | OR } [ NOT ] { <predicate_factor> | ( <predicate_expression> ) } ]
[ ,...n ]
}
<predicate_factor>::=
{
<predicate_leaf> | ( <predicate_expression> )
}
<predicate_leaf>::=
{
<predicate_source_declaration> { = | <> | != | > | >= | < | <= } <value>
| [event_module_guid].event_package_name.predicate_compare_name ( <predicate_source_declaration>, <value>
)
}
<predicate_source_declaration>::=
{
event_field_name | ( [event_module_guid].event_package_name.predicate_source_name )
}
<value>::=
{
number | 'string'
}
<event_target_definition>::=
{
ADD TARGET [event_module_guid].event_package_name.target_name
[ ( SET { target_parameter_name = <value> [ ,...n] } ) ]
}
<event_session_options>::=
{
[ MAX_MEMORY = size [ KB | MB ] ]
[ [,] EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS | ALLOW_MULTIPLE_EVENT_LOSS | NO_EVENT_LOSS } ]
[ [,] MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE } ]
[ [,] MAX_EVENT_SIZE = size [ KB | MB ] ]
[ [,] MEMORY_PARTITION_MODE = { NONE | PER_NODE | PER_CPU } ]
[ [,] TRACK_CAUSALITY = { ON | OFF } ]
[ [,] STARTUP_STATE = { ON | OFF } ]
}
Arguments
event_session_name
Is the user-defined name for the event session. event_session_name is alphanumeric, can be up to 128 characters,
must be unique within an instance of SQL Server, and must comply with the rules for Identifiers.
ADD EVENT [ event_module_guid ].event_package_name.event_name
Is the event to associate with the event session, where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the action object.
event_name is the event object.
Events appear in the sys.dm_xe_objects view as object_type 'event'.
SET { event_customizable_attribute= <value> [ ,...n] }
Allows customizable attributes for the event to be set. Customizable attributes appear in the
sys.dm_xe_object_columns view as column_type 'customizable' and object_name = event_name.
ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n] })
Is the action to associate with the event session, where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the action object.
action_name is the action object.
Actions appear in the sys.dm_xe_objects view as object_type 'action'.
WHERE <predicate_expression>
Specifies the predicate expression used to determine if an event should be
processed. If <predicate_expression> is true, the event is processed further by the actions and targets for
the session. If <predicate_expression> is false, the event is dropped by the session before being processed
by the actions and targets for the session. Predicate expressions are limited to 3000 characters, which limits
string arguments.
event_field_name
Is the name of the event field that identifies the predicate source.
[event_module_guid].event_package_name.predicate_source_name
Is the name of the global predicate source where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the predicate object.
predicate_source_name is defined in the sys.dm_xe_objects view as object_type 'pred_source'.
[event_module_guid].event_package_name.predicate_compare_name
Is the name of the predicate object to associate with the event, where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the predicate object.
predicate_compare_name is a global source defined in the sys.dm_xe_objects view as object_type
'pred_compare'.
number
Is any numeric type including decimal. Limitations are the lack of available physical memory or a number
that is too large to be represented as a 64-bit integer.
'string'
Either an ANSI or Unicode string as required by the predicate compare. No implicit string type conversion
is performed for the predicate compare functions. Passing the wrong type results in an error.
ADD TARGET [event_module_guid].event_package_name.target_name
Is the target to associate with the event session, where:
event_module_guid is the GUID for the module that contains the event.
event_package_name is the package that contains the action object.
target_name is the target. Targets appear in sys.dm_xe_objects view as object_type 'target'.
SET { target_parameter_name= <value> [, ...n] }
Sets a target parameter. Target parameters appear in the sys.dm_xe_object_columns view as column_type
'customizable' and object_name = target_name.
IMPORTANT
If you are using the ring buffer target, we recommend that you set the max_memory target parameter to 2048 kilobytes
(KB) to help avoid possible data truncation of the XML output. For more information about when to use the different target
types, see SQL Server Extended Events Targets.
WITH ( <event_session_options> [ ,...n] )
Specifies options to use with the event session.
MAX_MEMORY = size [ KB | MB ]
Specifies the maximum amount of memory to allocate to the session for event buffering. The default is 4 MB. size
is a whole number and can be a kilobyte (KB) or a megabyte (MB) value.
EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS | ALLOW_MULTIPLE_EVENT_LOSS |
NO_EVENT_LOSS }
Specifies the event retention mode to use for handling event loss.
ALLOW_SINGLE_EVENT_LOSS
An event can be lost from the session. A single event is only dropped when all the event buffers are full. Losing a
single event when event buffers are full allows for acceptable SQL Server performance characteristics, while
minimizing the loss of data in the processed event stream.
ALLOW_MULTIPLE_EVENT_LOSS
Full event buffers containing multiple events can be lost from the session. The number of events lost depends
on the memory size allocated to the session, the partitioning of the memory, and the size of the events in the
buffer. This option minimizes performance impact on the server when event buffers are quickly filled, but large
numbers of events can be lost from the session.
NO_EVENT_LOSS
No event loss is allowed. This option ensures that all events raised will be retained. Using this option forces all
tasks that fire events to wait until space is available in an event buffer. This may cause detectable performance
issues while the event session is active. User connections may stall while waiting for events to be flushed from the
buffer.
MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE }
Specifies the amount of time that events will be buffered in memory before being dispatched to event session
targets. By default, this value is set to 30 seconds.
seconds SECONDS
The time, in seconds, to wait before starting to flush buffers to targets. seconds is a whole number. The minimum
latency value is 1 second. However, 0 can be used to specify INFINITE latency.
INFINITE
Flush buffers to targets only when the buffers are full, or when the event session closes.
NOTE
MAX_DISPATCH_LATENCY = 0 SECONDS is equivalent to MAX_DISPATCH_LATENCY = INFINITE.
MAX_EVENT_SIZE = size [ KB | MB ]
Specifies the maximum allowable size for events. MAX_EVENT_SIZE should only be set to allow single events
larger than MAX_MEMORY; setting it to less than MAX_MEMORY raises an error. size is a whole number and
can be a kilobyte (KB) or a megabyte (MB) value. If size is specified in kilobytes, the minimum allowable size is 64
KB. When MAX_EVENT_SIZE is set, two buffers of size are created in addition to MAX_MEMORY. This means that
the total memory used for event buffering is MAX_MEMORY + 2 * MAX_EVENT_SIZE.
MEMORY_PARTITION_MODE = { NONE | PER_NODE | PER_CPU }
Specifies the location where event buffers are created.
NONE
A single set of buffers is created within the SQL Server instance.
PER_NODE
A set of buffers is created for each NUMA node.
PER_CPU
A set of buffers is created for each CPU.
TRACK_CAUSALITY = { ON | OFF }
Specifies whether or not causality is tracked. If enabled, causality allows related events on different server
connections to be correlated together.
STARTUP_STATE = { ON | OFF }
Specifies whether or not to start this event session automatically when SQL Server starts.
NOTE
If STARTUP_STATE = ON, the event session will only start if SQL Server is stopped and then restarted.
ON
The event session is started at startup.
OFF
The event session is not started at startup.
Remarks
The order of precedence for the logical operators is NOT (highest), followed by AND, followed by OR.
Permissions
Requires the ALTER ANY EVENT SESSION permission.
Examples
The following example shows how to create an event session named test_session . This example adds two events
and uses the Event Tracing for Windows target.
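A sketch of that session follows; the event names, target parameter, and log file path are illustrative:

```sql
-- Illustrative sketch: two events plus the Event Tracing for Windows (ETW) target.
-- The log file path is an assumption; adjust it for your environment.
CREATE EVENT SESSION test_session
    ON SERVER
    ADD EVENT sqlos.async_io_requested,
    ADD EVENT sqlserver.lock_acquired
    ADD TARGET package0.etw_classic_sync_target (
        SET default_etw_session_logfile_path = N'C:\demo\traces\sqletw.etl'
    )
    WITH ( MAX_DISPATCH_LATENCY = 4 SECONDS );
```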
See Also
ALTER EVENT SESSION (Transact-SQL )
DROP EVENT SESSION (Transact-SQL )
sys.server_event_sessions (Transact-SQL )
sys.dm_xe_objects (Transact-SQL )
sys.dm_xe_object_columns (Transact-SQL )
CREATE EXTERNAL DATA SOURCE (Transact-SQL)
5/16/2018 • 13 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an external data source for PolyBase, or Elastic Database queries. Depending on the scenario, the syntax
differs significantly. An external data source created for PolyBase cannot be used for Elastic Database queries.
Similarly, an external data source created for Elastic Database queries cannot be used for PolyBase.
NOTE
PolyBase is supported only on SQL Server 2016 (or higher), Azure SQL Data Warehouse, and Parallel Data Warehouse.
Elastic Database queries are supported only on Azure SQL Database v12 or later.
For PolyBase scenarios, the external data source is either a Hadoop File System (HDFS ), an Azure storage blob
container, or Azure Data Lake Store. For more information, see Get started with PolyBase.
For Elastic Database query scenarios, the external source is either a shard map manager (on Azure SQL
Database), or a remote database (on Azure SQL Database). Use sp_execute_remote (Azure SQL Database) after
creating an external data source. For more information, see Elastic Database query.
The Azure Blob storage external data source supports BULK INSERT and OPENROWSET syntax, and is different
than Azure Blob storage for PolyBase.
Transact-SQL Syntax Conventions
Syntax
-- PolyBase only: Hadoop cluster as data source
-- (on SQL Server 2016)
CREATE EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = HADOOP,
LOCATION = 'hdfs://NameNode_URI[:port]'
[, RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI[:port]' ]
[, CREDENTIAL = credential_name ]
)
[;]
-- Elastic Database query only: a remote database on Azure SQL Database as data source
-- (only on Azure SQL Database)
CREATE EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = RDBMS,
LOCATION = '<server_name>.database.windows.net',
DATABASE_NAME = '<Remote_Database_Name>',
CREDENTIAL = <SQL_Credential>
)
[;]
Arguments
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the
database in SQL Server, Azure SQL Database, and Azure SQL Data Warehouse. The name must be unique
within the server in Parallel Data Warehouse.
TYPE = [ HADOOP | SHARD_MAP_MANAGER | RDBMS | BLOB_STORAGE ]
Specifies the data source type. Use HADOOP when the external data source is Hadoop or Azure Storage blob
for Hadoop. Use SHARD_MAP_MANAGER when creating an external data source for Elastic Database query
for sharding on Azure SQL Database. Use RDBMS with external data sources for cross-database queries with
Elastic Database query on Azure SQL Database. Use BLOB_STORAGE when performing bulk operations using
BULK INSERT or OPENROWSET with SQL Server 2017 (14.x).
LOCATION = <location_path>
HADOOP
For HADOOP, specifies the Uniform Resource Indicator (URI) for a Hadoop cluster.
LOCATION = 'hdfs:\/\/*NameNode\_URI*\[:*port*\]'
NameNode_URI: The machine name or IP address of the Hadoop cluster Namenode.
port: The Namenode IPC port. This is indicated by the fs.default.name configuration parameter in Hadoop. If
the value is not specified, 8020 will be used by default.
Example: LOCATION = 'hdfs://10.10.10.10:8020'
For Azure blob storage with Hadoop, specifies the URI for connecting to Azure blob storage.
LOCATION = 'wasb[s]://container@account_name.blob.core.windows.net'
wasb[s]: Specifies the protocol for Azure blob storage. The [s] is optional and specifies a secure SSL connection;
data sent from SQL Server is securely encrypted through the SSL protocol. We strongly recommend using
'wasbs' instead of 'wasb'. Note that the location can use asv[s] instead of wasb[s]. The asv[s] syntax is
deprecated and will be removed in a future release.
container: Specifies the name of the Azure blob storage container. To specify the root container of a domain’s
storage account, use the domain name instead of the container name. Root containers are read-only, so data
cannot be written back to the container.
account_name: The fully qualified domain name (FQDN ) of the Azure storage account.
Example: LOCATION = 'wasbs://dailylogs@myaccount.blob.core.windows.net/'
For Azure Data Lake Store, location specifies the URI for connecting to your Azure Data Lake Store.
SHARD_MAP_MANAGER
For SHARD_MAP_MANAGER, specifies the logical server name that hosts the shard map manager in Azure
SQL Database or a SQL Server database on an Azure virtual machine.
For a step-by-step tutorial, see Getting started with elastic queries for sharding (horizontal partitioning).
RDBMS
For RDBMS, specifies the logical server name of the remote database in Azure SQL Database.
For a step-by-step tutorial on RDBMS, see Getting started with cross-database queries (vertical partitioning).
BLOB_STORAGE
For bulk operations only, LOCATION must be a valid URL to the Azure Blob storage account and container. Do not put /, a
file name, or shared access signature parameters at the end of the LOCATION URL.
The credential used must be created using SHARED ACCESS SIGNATURE as the identity. For more information on
shared access signatures, see Using Shared Access Signatures (SAS ). For an example of accessing blob storage,
see example F of BULK INSERT.
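A sketch of the credential and data source pair described above; the names, storage account, container, and SAS token are all illustrative placeholders:

```sql
-- Hypothetical names; supply your own SAS token, omitting the leading '?'.
CREATE DATABASE SCOPED CREDENTIAL AccessAzureInvoices
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '<SAS_token_without_leading_question_mark>';

-- No trailing /, file name, or SAS parameters in LOCATION.
CREATE EXTERNAL DATA SOURCE MyAzureInvoices
    WITH (
        TYPE = BLOB_STORAGE,
        LOCATION = 'https://myaccount.blob.core.windows.net/invoices',
        CREDENTIAL = AccessAzureInvoices
    );
```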
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI[:port]'
Specifies the Hadoop resource manager location. When specified, the query optimizer can make a cost-based
decision to pre-process data for a PolyBase query by using Hadoop’s computation capabilities with MapReduce.
Called predicate pushdown, this can significantly reduce the volume of data transferred between Hadoop and
SQL, and therefore improve query performance.
When this is not specified, pushing compute to Hadoop is disabled for PolyBase queries.
If the port is not specified, the default value is determined using the current setting for ‘hadoop connectivity’
configuration.
'hadoop connectivity' value    Default resource manager port
1                              50300
2                              50300
3                              8021
4                              8032
5                              8050
6                              8032
7                              8050
For a complete list of Hadoop distributions and versions supported by each connectivity value, see PolyBase
Connectivity Configuration (Transact-SQL ).
IMPORTANT
The RESOURCE_MANAGER_LOCATION value is a string and is not validated when you create the external data source.
Entering an incorrect value can cause future delays when accessing the location.
Hadoop examples:
Hortonworks HDP 2.0, 2.1, 2.2, 2.3 on Windows:
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:8032'
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:50300'
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:8050'
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:50300'
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:8021'
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI:8032'
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source. For an example, see
C. Create an Azure blob storage external data source. To create a credential, see CREATE CREDENTIAL
(Transact-SQL ). Note that CREDENTIAL is not required for public data sets that allow anonymous
access.
DATABASE_NAME = 'QueryDatabaseName'
The name of the database that functions as the shard map manager (for SHARD_MAP_MANAGER ) or
the remote database (for RDBMS ).
SHARD_MAP_NAME = 'ShardMapName'
For SHARD_MAP_MANAGER only. The name of the shard map. For more information about creating a
shard map, see Getting started with Elastic Database query
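Putting the SHARD_MAP_MANAGER arguments together, a sketch might look like this; the server, database, credential, and shard map names are assumptions:

```sql
-- Hypothetical names throughout; run in the Azure SQL database that will issue elastic queries.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred
    WITH IDENTITY = '<username>',
    SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc
    WITH (
        TYPE = SHARD_MAP_MANAGER,
        LOCATION = '<server_name>.database.windows.net',
        DATABASE_NAME = 'MyShardMapManagerDb',
        CREDENTIAL = ElasticDBQueryCred,
        SHARD_MAP_NAME = 'CustomerIDShardMap'
    );
```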
PolyBase-specific notes
For a complete list of supported external data sources, see PolyBase Connectivity Configuration (Transact-SQL ).
To use PolyBase, you need to create these three objects:
An external data source.
An external file format, and
An external table that references the external data source and external file format.
Permissions
Requires CONTROL permission on database in SQL DW, SQL Server, APS 2016, and SQL DB.
IMPORTANT
In previous releases of PDW, create external data source required ALTER ANY EXTERNAL DATA SOURCE permissions.
Error Handling
A runtime error occurs if external Hadoop data sources are inconsistent about having
RESOURCE_MANAGER_LOCATION defined. That is, you cannot create two external data sources that
reference the same Hadoop cluster and provide a resource manager location for one but not the other.
The SQL engine does not verify the existence of the external data source when it creates the external data
source object. If the data source does not exist during query execution, an error will occur.
General Remarks
For PolyBase, the external data source is database-scoped in SQL Server and SQL Data Warehouse. It is
server-scoped in Parallel Data Warehouse.
For PolyBase, when RESOURCE_MANAGER_LOCATION or JOB_TRACKER_LOCATION is defined, the query
optimizer will consider optimizing each query by initiating a map reduce job on the external Hadoop source and
pushing down computation. This is entirely a cost-based decision.
To ensure successful PolyBase queries in the event of Hadoop NameNode failover, consider using a virtual IP
address for the NameNode of the Hadoop cluster. If you do not use a virtual IP address for the Hadoop
NameNode, in the event of a Hadoop NameNode failover you will have to ALTER EXTERNAL DATA SOURCE
object to point to the new location.
Locking
Takes a shared lock on the EXTERNAL DATA SOURCE object.
-- Create a database master key if one does not already exist, using your own password.
-- This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo';
-- Create a database scoped credential with Kerberos user name and password.
CREATE DATABASE SCOPED CREDENTIAL HadoopUser1
WITH IDENTITY = '<hadoop_user_name>',
SECRET = '<hadoop_password>';
-- Create a database scoped credential with Azure storage account key as the secret.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'myaccount',
SECRET = '<azure_storage_account_key>';
-- If you do not have a Master Key on your DW you will need to create one.
CREATE MASTER KEY;
-- These values come from your Azure Active Directory Application used to authenticate to ADLS
CREATE DATABASE SCOPED CREDENTIAL ADLUser
WITH IDENTITY = '<clientID>@<OAuth2.0TokenEndPoint>',
SECRET = '<KEY>' ;
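The ADLUser credential above can then back an external data source pointing at the Data Lake Store; this is a sketch, and the account name is an assumption:

```sql
-- Hypothetical account name; uses the ADLUser credential created above.
CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
    WITH (
        TYPE = HADOOP,
        LOCATION = 'adl://<AzureDataLakeStore_account>.azuredatalakestore.net',
        CREDENTIAL = ADLUser
    );
```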
See Also
ALTER EXTERNAL DATA SOURCE (Transact-SQL )
CREATE EXTERNAL FILE FORMAT (Transact-SQL )
CREATE EXTERNAL TABLE (Transact-SQL )
CREATE EXTERNAL TABLE AS SELECT (Transact-SQL )
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
sys.external_data_sources (Transact-SQL )
CREATE EXTERNAL LIBRARY (Transact-SQL)
5/3/2018 • 6 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Uploads R packages to a database from the specified byte stream or file path.
This statement serves as a generic mechanism for the database administrator to upload artifacts needed for any
new external language runtimes (R, Python, Java, etc.) and OS platforms supported by SQL Server.
Currently only the R language and Windows platform are supported. Support for Python and Linux is planned for
a later release.
Syntax
CREATE EXTERNAL LIBRARY library_name
[ AUTHORIZATION owner_name ]
FROM <file_spec> [ ,...2 ]
WITH ( LANGUAGE = 'R' )
[ ; ]
<file_spec> ::=
{
(CONTENT = { <client_library_specifier> | <library_bits> }
[, PLATFORM = WINDOWS ])
}
<client_library_specifier> :: =
'[\\computer_name\]share_name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'
| '<relative_path_in_external_data_source>'
<library_bits> :: =
{ varbinary_literal | varbinary_expression }
Arguments
library_name
Libraries are added to the database scoped to the user. Library names must be unique within the context of a
specific user or owner. For example, two users RUser1 and RUser2 can both individually and separately upload
the R library ggplot2 . However, if RUser1 wanted to upload a newer version of ggplot2 , the second instance
must be named differently or must replace the existing library.
Library names cannot be arbitrarily assigned; the library name must match the name used to load
the library from R.
owner_name
Specifies the name of the user or role that owns the external library. If not specified, ownership is given to the
current user.
The libraries owned by database owner are considered global to the database and runtime. In other words,
database owners can create libraries that contain a common set of libraries or packages that are shared by many
users. When an external library is created by a user other than the dbo user, the external library is private to that
user only.
When the user RUser1 executes an R script, the value of libPath can contain multiple paths. The first path is
always the path to the shared library created by the database owner. The second part of libPath specifies the path
containing packages uploaded individually by RUser1.
file_spec
Specifies the content of the package for a specific platform. Only one file artifact per platform is supported.
The file can be specified in the form of a local path, or network path.
Optionally, an OS platform for the file can be specified. Only one file artifact or content is permitted for each OS
platform for a specific language or runtime.
library_bits
Specifies the content of the package as a hex literal, similar to assemblies.
This option is useful if you need to create a library or alter an existing library (and have the required permissions
to do so), but the file system on the server is restricted and you cannot copy the library files to a location that the
server can access.
PLATFORM = WINDOWS
Specifies the platform for the content of the library. The value defaults to the host platform on which SQL Server
is running. Therefore, the user does not have to specify the value. It is required in cases where multiple platforms are
supported, or the user needs to specify a different platform.
In SQL Server 2017, Windows is the only supported platform.
Remarks
For the R language, when using a file, packages must be prepared in the form of zipped archive files with the .ZIP
extension for Windows. Currently, only the Windows platform is supported.
The CREATE EXTERNAL LIBRARY statement uploads the library bits to the database. The library is installed when a
user runs an external script using sp_execute_external_script and calls the package or library.
Libraries uploaded to the instance can be either public or private. If the library is created by a member of dbo , the
library is public and can be shared with all users. Otherwise, the library is private to that user only.
Permissions
Requires the CREATE EXTERNAL LIBRARY permission. By default, the dbo user and any user who is a member of the
db_owner role have permission to create an external library. For all other users, you must explicitly give them
permission using a GRANT statement, specifying CREATE EXTERNAL LIBRARY as the privilege.
Modifying a library requires the separate permission ALTER ANY EXTERNAL LIBRARY.
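For example, granting these permissions to a non-dbo user might look like the following sketch; the user name RUser1 is taken from the scenario earlier in this article:

```sql
-- Allow RUser1 to upload external libraries.
GRANT CREATE EXTERNAL LIBRARY TO RUser1;
-- Allow RUser1 to modify existing external libraries.
GRANT ALTER ANY EXTERNAL LIBRARY TO RUser1;
```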
Examples
A. Add an external library to a database
The following example adds an external library called customPackage to a database.
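The upload step itself might be sketched as follows; the local file path is a hypothetical assumption, not a value from this article:

```sql
-- Sketch: upload a zipped R package from a path the server can read.
-- The path C:\temp\customPackage.zip is hypothetical.
CREATE EXTERNAL LIBRARY customPackage
FROM (CONTENT = 'C:\temp\customPackage.zip')
WITH (LANGUAGE = 'R');
```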
After the library has been successfully uploaded to the instance, a user executes the sp_execute_external_script
procedure, to install the library.
EXEC sp_execute_external_script
@language =N'R',
@script=N'library(customPackage)'
Some packages depend on other packages. For example, suppose that packageA requires packageB and packageC. To
succeed in installing packageA, you must create libraries for packageB and packageC at the same time that you
add packageA to SQL Server. Be sure to check the required package versions as well.
In practice, package dependencies for popular packages are usually much more complicated than this simple
example. For example, ggplot2 might require over 30 packages, and those packages might require additional
packages that are not available on the server. Any missing package or wrong package version can cause
installation to fail.
Because it can be difficult to determine all dependencies just from looking at the package manifest, we recommend
that you use a package such as miniCRAN to identify all packages that might be required to complete installation
successfully.
Upload the target package and its dependencies. All files must be in a folder that is accessible to the server.
CREATE EXTERNAL LIBRARY customLibrary FROM (CONTENT = 0xabc123) WITH (LANGUAGE = 'R');
NOTE
This code sample only demonstrates the syntax; the binary value in CONTENT = has been truncated for readability and does
not create a working library. The actual contents of the binary variable would be much longer.
See also
ALTER EXTERNAL LIBRARY (Transact-SQL )
DROP EXTERNAL LIBRARY (Transact-SQL )
sys.external_library_files
sys.external_libraries
CREATE EXTERNAL FILE FORMAT (Transact-SQL)
5/4/2018 • 12 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an External File Format object defining external data stored in Hadoop, Azure Blob Storage, or Azure
Data Lake Store. Creating an external file format is a prerequisite for creating an External Table. By creating an
External File Format, you specify the actual layout of the data referenced by an external table.
PolyBase supports the following file formats:
Delimited Text
Hive RCFile
Hive ORC
Parquet
To create an External Table, see CREATE EXTERNAL TABLE (Transact-SQL ).
Transact-SQL Syntax Conventions
Syntax
-- Create an external file format for PARQUET files.
CREATE EXTERNAL FILE FORMAT file_format_name
WITH (
FORMAT_TYPE = PARQUET
[ , DATA_COMPRESSION = {
'org.apache.hadoop.io.compress.SnappyCodec'
| 'org.apache.hadoop.io.compress.GzipCodec' }
]);
<format_options> ::=
{
FIELD_TERMINATOR = field_terminator
| STRING_DELIMITER = string_delimiter
| First_Row = integer -- ONLY AVAILABLE SQL DW
| DATE_FORMAT = datetime_format
| USE_TYPE_DEFAULT = { TRUE | FALSE }
| Encoding = {'UTF8' | 'UTF16'}
}
Arguments
file_format_name
Specifies a name for the external file format.
FORMAT_TYPE = [ PARQUET | ORC | RCFILE | DELIMITEDTEXT ] Specifies the format of the external data.
PARQUET Specifies a Parquet format.
ORC
Specifies an Optimized Row Columnar (ORC ) format. This option requires Hive version 0.11 or higher on
the external Hadoop cluster. In Hadoop, the ORC file format offers better compression and performance
than the RCFILE file format.
RCFILE (in combination with SERDE_METHOD = SERDE_method) Specifies a Record Columnar file
format (RcFile). This option requires you to specify a Hive Serializer and Deserializer (SerDe) method. This
requirement is the same if you use Hive/HiveQL in Hadoop to query RC files. Note, the SerDe method is
case-sensitive.
Examples of specifying RCFile with the two SerDe methods that PolyBase supports.
FORMAT_TYPE = RCFILE, SERDE_METHOD =
'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
FORMAT_TYPE = RCFILE, SERDE_METHOD =
'org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe'
DELIMITEDTEXT Specifies a text format with column delimiters, also called field terminators.
FIELD_TERMINATOR = field_terminator
Applies only to delimited text files. The field terminator specifies one or more characters that mark the end
of each field (column) in the text-delimited file. The default is the pipe character '|'. For guaranteed support,
we recommend using one or more ASCII characters.
Examples:
FIELD_TERMINATOR = '|'
FIELD_TERMINATOR = ' '
FIELD_TERMINATOR = '\t'
FIELD_TERMINATOR = '~|~'
STRING_DELIMITER = string_delimiter
Specifies the field terminator for data of type string in the text-delimited file. The string delimiter is one or
more characters in length and is enclosed with single quotes. The default is the empty string "". For
guaranteed support, we recommend using one or more ASCII characters.
Examples:
STRING_DELIMITER = '"'
STRING_DELIMITER = '0x22' -- Double quote hex
STRING_DELIMITER = '*'
STRING_DELIMITER = ','
STRING_DELIMITER = '0x7E0x7E' -- Two tildes (for example, ~~)
FIRST_ROW = First_row_int
Specifies the row number that is read first in all files during a PolyBase load. This parameter can take
values 1-15. If the value is set to two, the first row in every file (header row) is skipped when the data is
loaded. Rows are skipped based on the existence of row terminators (\r\n, \r, \n). When this option is used
for export, rows are added to the data to make sure the file can be read with no data loss. If the value is set
to >2, the first row exported is the column names of the external table.
DATE_FORMAT = datetime_format
Specifies a custom format for all date and time data that might appear in a delimited text file. If the source
file uses default datetime formats, this option isn't necessary. Only one custom datetime format is allowed
per file. You can't specify more than one custom datetime formats per file. However, you can use more than
one datetime formats if each one is the default format for its respective data type in the external table
definition.
PolyBase only uses the custom date format for importing the data. It doesn't use the custom format for writing
data to an external file.
When DATE_FORMAT isn't specified or is the empty string, PolyBase uses the following default formats:
DateTime: 'yyyy-MM-dd HH:mm:ss'
SmallDateTime: 'yyyy-MM-dd HH:mm'
Date: 'yyyy-MM-dd'
DateTime2: 'yyyy-MM-dd HH:mm:ss'
DateTimeOffset: 'yyyy-MM-dd HH:mm:ss'
Time: 'HH:mm:ss'
Example date formats are in the following table:
[Table of example DATE_FORMAT strings not reproduced here.]
Notes about the table:
Year, month, and day can have a variety of formats and orders. The table shows only the ymd format.
Month can have one or two digits, or three characters. Day can have one or two digits. Year can have two
or four digits.
Milliseconds (fffffff) are not required.
Am, pm (tt) isn't required. The default is AM.
Details:
To separate month, day, and year values, you can use '-', '/', or '.'. For simplicity, the table uses only the '-'
separator.
To specify the month as text, use three or more characters. Months with one or two characters are
interpreted as a number.
To separate time values, use the ':' symbol.
Letters enclosed in square brackets are optional.
The letters 'tt' designate [AM|PM|am|pm]. AM is the default. When 'tt' is specified, the hour value (hh)
must be in the range of 0 to 12.
The letters 'zzz' designate the time zone offset for the system's current time zone in the format {+|-}HH:ss.
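As a sketch, a custom date format can be declared when the file format is created; the format name and format string below are illustrative assumptions:

```sql
-- Sketch: a delimited text format whose date columns use US-style dates.
CREATE EXTERNAL FILE FORMAT ff_custom_dates
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (
        FIELD_TERMINATOR = '|',
        DATE_FORMAT = 'MM/dd/yyyy'  -- applies to all date/time data in the file
    )
);
```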
USE_TYPE_DEFAULT = { TRUE | FALSE }
Specifies how to handle missing values in delimited text files when PolyBase retrieves data from the text
file.
TRUE
When retrieving data from the text file, store each missing value by using the default value for the data
type of the corresponding column in the external table definition. For example, replace a missing value
with:
0 if the column is defined as a numeric column.
Empty string "" if the column is a string column.
1900-01-01 if the column is a date column.
FALSE
Store all missing values as NULL. Any NULL values that are stored by using the word NULL in the
delimited text file are imported as the string 'NULL'.
Encoding = {'UTF8' | 'UTF16'}
In Azure SQL Data Warehouse, PolyBase can read UTF8 and UTF16-LE encoded delimited text files. In
SQL Server and PDW, PolyBase doesn't support reading UTF16 encoded files.
DATA_COMPRESSION = data_compression_method
Specifies the data compression method for the external data. When DATA_COMPRESSION isn't specified,
the default is uncompressed data. To work properly, Gzip compressed files must have the ".gz" file
extension.
The DELIMITEDTEXT format type supports these compression methods:
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.DefaultCodec'
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
The RCFILE format type supports this compression method:
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.DefaultCodec'
The ORC file format type supports these compression methods:
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.DefaultCodec'
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
The PARQUET file format type supports the following compression methods:
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
Permissions
Requires ALTER ANY EXTERNAL FILE FORMAT permission.
General Remarks
The external file format is database-scoped in SQL Server and SQL Data Warehouse. It is server-scoped in
Parallel Data Warehouse.
The format options are all optional and only apply to delimited text files.
When the data is stored in one of the compressed formats, PolyBase first decompresses the data before returning
the data records.
Locking
Takes a shared lock on the EXTERNAL FILE FORMAT object.
Performance
Using compressed files always comes with the tradeoff between transferring less data between the external data
source and SQL Server while increasing the CPU usage to compress and decompress the data.
Gzip compressed text files are not splittable. To improve performance for Gzip compressed text files, we
recommend generating multiple files that are all stored in the same directory within the external data source. This
file structure allows PolyBase to read and decompress the data faster by using multiple reader and
decompression processes. The ideal number of compressed files is the maximum number of data reader
processes per compute node. In SQL Server and Parallel Data Warehouse, the maximum number of data reader
processes is 8 per node in the current release. In SQL Data Warehouse, the maximum number of data reader
processes per node varies by SLO. See Azure SQL Data Warehouse loading patterns and strategies for details.
Examples
A. Create a DELIMITEDTEXT external file format
This example creates an external file format named textdelimited1 for a text-delimited file. The options listed for
FORMAT_OPTIONS specify that the fields in the file should be separated using a pipe character '|'. The text file is
also compressed with the Gzip codec. If DATA_COMPRESSION isn't specified, the text file is uncompressed.
For a delimited text file, the data compression method can either be the default Codec,
'org.apache.hadoop.io.compress.DefaultCodec', or the Gzip Codec, 'org.apache.hadoop.io.compress.GzipCodec'.
E. Create a Delimited Text File Skipping Header Row (Azure SQL DW Only)
This example creates an external file format for CSV file with a single header row.
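A sketch of such a format, with an assumed format name, combines FIRST_ROW with the usual CSV options:

```sql
-- Sketch: CSV format that skips the single header row (Azure SQL DW only).
CREATE EXTERNAL FILE FORMAT skipHeader_CSV
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (
        FIELD_TERMINATOR = ',',
        STRING_DELIMITER = '"',
        FIRST_ROW = 2,           -- skip the header row
        USE_TYPE_DEFAULT = TRUE
    )
);
```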
See Also
CREATE EXTERNAL DATA SOURCE (Transact-SQL )
CREATE EXTERNAL TABLE (Transact-SQL )
CREATE EXTERNAL TABLE AS SELECT (Transact-SQL )
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
sys.external_file_formats (Transact-SQL )
CREATE EXTERNAL RESOURCE POOL (Transact-
SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Applies to: SQL Server 2016 (13.x) R Services (In-Database) and SQL Server 2017 (14.x) Machine Learning
Services (In-Database)
Creates an external pool used to define resources for external processes. A resource pool represents a subset of
the physical resources (memory and CPUs) of an instance of the Database Engine. Resource Governor enables a
database administrator to distribute server resources among resource pools, up to a maximum of 64 pools.
For R Services (In-Database) in SQL Server 2016 (13.x), the external pool governs rterm.exe ,
BxlServer.exe , and other processes spawned by them.
For Machine Learning Services (In-Database) in SQL Server 2017 (14.x), the external pool governs the R
processes listed for SQL Server 2016, as well as python.exe , BxlServer.exe , and other processes spawned
by them.
Transact-SQL Syntax Conventions
Syntax
CREATE EXTERNAL RESOURCE POOL pool_name
[ WITH (
[ MAX_CPU_PERCENT = value ]
[ [ , ] AFFINITY CPU =
{
AUTO
| ( <cpu_range_spec> )
| NUMANODE = ( <NUMA_node_id> )
} ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MAX_PROCESSES = value ]
)
]
[ ; ]
<CPU_range_spec> ::=
{ CPU_ID | CPU_ID TO CPU_ID } [ ,...n ]
Arguments
pool_name
Is the user-defined name for the external resource pool. pool_name is alphanumeric, can be up to 128 characters,
must be unique within an instance of SQL Server, and must comply with the rules for identifiers.
MAX_CPU_PERCENT =value
Specifies the maximum average CPU bandwidth that all requests in the external resource pool can receive when
there is CPU contention. value is an integer with a default setting of 100. The allowed range for value is from 1
through 100.
AFFINITY {CPU = AUTO | ( <CPU_range_spec> ) | NUMANODE = (<NUMA_node_range_spec>)} Attach the
external resource pool to specific CPUs. The default value is AUTO.
AFFINITY CPU = ( <CPU_range_spec> ) maps the external resource pool to the SQL Server CPUs identified by
the given CPU_IDs.
When you use AFFINITY NUMANODE = ( <NUMA_node_range_spec> ), the external resource pool is affinitized
to the SQL Server physical CPUs that correspond to the given NUMA node or range of nodes.
MAX_MEMORY_PERCENT =value
Specifies the total server memory that can be used by requests in this external resource pool. value is an integer
with a default setting of 100. The allowed range for value is from 1 through 100.
MAX_PROCESSES =value
Specifies the maximum number of processes allowed for the external resource pool. Specify 0 to set an unlimited
threshold for the pool, which is thereafter bound only by computer resources. The default is 0.
Remarks
The Database Engine implements the resource pool when you execute the ALTER RESOURCE GOVERNOR
RECONFIGURE statement.
For general information about resource pools, see Resource Governor Resource Pool,
sys.resource_governor_external_resource_pools (Transact-SQL ), and
sys.dm_resource_governor_external_resource_pool_affinity (Transact-SQL ).
For information specific to managing external resource pools used for machine learning, see Resource governance
for machine learning in SQL Server.
Permissions
Requires CONTROL SERVER permission.
Examples
The following statement defines an external pool that restricts CPU usage to 75 percent and the maximum
memory to 30 percent of the available memory on the computer.
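A statement implementing that description might be sketched as follows; the pool name is illustrative. The pool takes effect only after Resource Governor is reconfigured:

```sql
-- Sketch: cap external processes at 75% CPU and 30% memory.
CREATE EXTERNAL RESOURCE POOL ep_external
WITH (
    MAX_CPU_PERCENT = 75,
    MAX_MEMORY_PERCENT = 30
);
GO
-- Apply the new pool.
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```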
See also
external scripts enabled Server Configuration Option
sp_execute_external_script (Transact-SQL )
ALTER EXTERNAL RESOURCE POOL (Transact-SQL )
DROP EXTERNAL RESOURCE POOL (Transact-SQL )
CREATE RESOURCE POOL (Transact-SQL )
CREATE WORKLOAD GROUP (Transact-SQL )
Resource Governor Resource Pool
sys.resource_governor_external_resource_pools (Transact-SQL )
sys.dm_resource_governor_external_resource_pool_affinity (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
CREATE EXTERNAL TABLE (Transact-SQL)
5/16/2018 • 17 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an external table for PolyBase, or Elastic Database queries. Depending on the scenario, the syntax differs
significantly. An external table created for PolyBase cannot be used for Elastic Database queries, and an external
table created for Elastic Database queries cannot be used for PolyBase.
NOTE
PolyBase is supported only on SQL Server 2016 (or higher), Azure SQL Data Warehouse, and Parallel Data Warehouse.
Elastic Database queries are supported only on Azure SQL Database v12 or later.
In SQL Server, the CREATE EXTERNAL TABLE statement creates a PolyBase external table that references data
stored in a Hadoop cluster or Azure blob storage. It can also be used to create an external table for an Elastic
Database query.
Use an external table to:
Query Hadoop or Azure blob storage data with Transact-SQL statements.
Import and store data from Hadoop or Azure blob storage into your SQL Server database.
Create an external table for use with an Elastic Database
query.
Import and store data from Azure Data Lake Store into Azure SQL Data Warehouse
See also CREATE EXTERNAL DATA SOURCE (Transact-SQL ) and DROP EXTERNAL TABLE (Transact-
SQL ).
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
<reject_options> ::=
{
| REJECT_TYPE = value | percentage
| REJECT_VALUE = reject_value
| REJECT_SAMPLE_VALUE = reject_sample_value
}
<sharded_external_table_options> ::=
DATA_SOURCE = external_data_source_name,
SCHEMA_NAME = N'nonescaped_schema_name',
OBJECT_NAME = N'nonescaped_object_name',
[DISTRIBUTION = SHARDED(sharding_column_name) | REPLICATED | ROUND_ROBIN]]
)
[;]
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
<reject_options> ::=
{
| REJECT_TYPE = value | percentage,
| REJECT_VALUE = reject_value,
| REJECT_SAMPLE_VALUE = reject_sample_value,
| REJECTED_ROW_LOCATION = '\REJECT_Directory'
}
Arguments
[ [ database_name . [ schema_name ] . ] | schema_name . ] table_name
The one to three-part name of the table to create. For an external table, only the table metadata is stored in SQL
along with basic statistics about the file and or folder referenced in Hadoop or Azure blob storage. No actual data
is moved or stored in SQL Server.
<column_definition> [ ,...n ] CREATE EXTERNAL TABLE allows one or more column definitions. Both CREATE
EXTERNAL TABLE and CREATE TABLE use the same syntax for defining a column. An exception is that you
cannot use the DEFAULT CONSTRAINT on external tables. For the full details about column definitions and their
data types, see CREATE TABLE (Transact-SQL ) and CREATE TABLE on Azure SQL Database.
The column definitions, including the data types and number of columns must match the data in the external files.
If there is a mismatch, the file rows will be rejected when querying the actual data.
For external tables that reference files in external data sources, the column and type definitions must map to the
exact schema of the external file. When defining data types that reference data stored in Hadoop/Hive, use the
following mappings between SQL and Hive data types and cast the type into a SQL data type when selecting
from it. The types include all versions of Hive unless stated otherwise.
NOTE
SQL Server does not support the Hive infinity data value in any conversion. PolyBase will fail with a data type conversion
error.
[Table of mappings between SQL data types, .NET data types (for example, Char[]), Hive data types, and
Hadoop/Java data types not reproduced here.]
LOCATION = 'folder_or_filepath'
Specifies the folder or the file path and file name for the actual data in Hadoop or Azure blob storage. The location
starts from the root folder; the root folder is the data location specified in the external data source.
In SQL Server, the CREATE EXTERNAL TABLE statement creates the path and folder if it does not already exist.
You can then use INSERT INTO to export data from a local SQL Server table to the external data source. For
more information, see Polybase Queries.
In SQL Data Warehouse and Analytics Platform System, the CREATE EXTERNAL TABLE AS SELECT statement
creates the path and folder if it does not exist. In these two products, CREATE EXTERNAL TABLE does not create
the path and folder.
If you specify LOCATION to be a folder, a PolyBase query that selects from the external table will retrieve files
from the folder and all of its subfolders. Just like Hadoop, PolyBase does not return hidden folders. It also does
not return files for which the file name begins with an underline (_) or a period (.).
For example, suppose LOCATION='/webdata/' and the folder contains mydata.txt and mydata2.txt, a hidden folder
containing mydata3.txt, and a hidden file _hidden.txt. A PolyBase query will return rows from mydata.txt and
mydata2.txt. It will not return mydata3.txt because it is in a subfolder of a hidden folder, and it will not return
_hidden.txt because it is a hidden file.
To change the default and only read from the root folder, set the attribute <polybase.recursive.traversal> to 'false'
in the core-site.xml configuration file. This file is located under
<SqlBinRoot>\Polybase\Hadoop\Conf, where SqlBinRoot is the bin root of SQL Server. For example,
C:\\Program Files\\Microsoft SQL Server\\MSSQL13.XD14\\MSSQL\\Binn .
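Assuming the file follows the standard Hadoop property syntax, the setting would be added inside the existing configuration element along these lines:

```xml
<!-- Inside the <configuration> element of core-site.xml -->
<property>
  <name>polybase.recursive.traversal</name>
  <value>false</value>
</property>
```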
DATA_SOURCE = external_data_source_name
Specifies the name of the external data source that contains the location of the external data. This location is either
a Hadoop or Azure blob storage. To create an external data source, use CREATE EXTERNAL DATA SOURCE
(Transact-SQL ).
FILE_FORMAT = external_file_format_name
Specifies the name of the external file format object that stores the file type and compression method for the
external data. To create an external file format, use CREATE EXTERNAL FILE FORMAT (Transact-SQL ).
Reject Options
You can specify reject parameters that determine how PolyBase will handle dirty records it retrieves from the
external data source. A data record is considered 'dirty' if its actual data types or the number of columns do not
match the column definitions of the external table.
When you do not specify or change reject values, PolyBase uses default values. This information about the reject
parameters is stored as additional metadata when you create an external table with the CREATE EXTERNAL TABLE
statement. When a future SELECT statement or SELECT INTO SELECT statement selects data from the external
table, PolyBase will use the reject options to determine the number or percentage of rows that can be rejected
before the actual query fails. The query will return (partial) results until the reject threshold is exceeded; it then
fails with the appropriate error message.
REJECT_TYPE = value | percentage
Clarifies whether the REJECT_VALUE option is specified as a literal value or a percentage.
value
REJECT_VALUE is a literal value, not a percentage. The PolyBase query will fail when the number of rejected rows
exceeds reject_value.
For example, if REJECT_VALUE = 5 and REJECT_TYPE = value, the PolyBase SELECT query will fail after 5 rows
have been rejected.
percentage
REJECT_VALUE is a percentage, not a literal value. A PolyBase query will fail when the percentage of failed rows
exceeds reject_value. The percentage of failed rows is calculated at intervals.
REJECT_VALUE = reject_value
Specifies the value or the percentage of rows that can be rejected before the query fails.
For REJECT_TYPE = value, reject_value must be an integer between 0 and 2,147,483,647.
For REJECT_TYPE = percentage, reject_value must be a float between 0 and 100.
REJECT_SAMPLE_VALUE = reject_sample_value
This attribute is required when you specify REJECT_TYPE = percentage. It determines the number of rows to
attempt to retrieve before PolyBase recalculates the percentage of rejected rows.
The reject_sample_value parameter must be an integer between 0 and 2,147,483,647.
For example, if REJECT_SAMPLE_VALUE = 1000, PolyBase will calculate the percentage of failed rows after it
has attempted to import 1000 rows from the external data file. If the percentage of failed rows is less than
reject_value, PolyBase will attempt to retrieve another 1000 rows. It continues to recalculate the percentage of
failed rows after it attempts to import each additional 1000 rows.
NOTE
Since PolyBase computes the percentage of failed rows at intervals, the actual percentage of failed rows can exceed
reject_value.
Example:
This example shows how the three REJECT options interact with each other. For example, if REJECT_TYPE =
percentage, REJECT_VALUE = 30, and REJECT_SAMPLE_VALUE = 100, the following scenario could occur:
PolyBase attempts to retrieve the first 100 rows; 25 fail and 75 succeed.
Percent of failed rows is calculated as 25%, which is less than the reject value of 30%. Hence, PolyBase will
continue retrieving data from the external data source.
PolyBase attempts to load the next 100 rows; this time 25 succeed and 75 fail.
Percent of failed rows is recalculated as 50%. The percentage of failed rows has exceeded the 30% reject
value.
The PolyBase query fails with 50% rejected rows after attempting to return the first 200 rows. Note that
matching rows have been returned before the PolyBase query detects the reject threshold has been
exceeded.
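The scenario above corresponds to reject options like these on the external table definition; the table, column, data source, and file format names are hypothetical:

```sql
-- Sketch: fail the query once more than 30% of sampled rows are rejected,
-- recalculating the percentage every 100 rows.
CREATE EXTERNAL TABLE ext_sales (
    sale_id int,
    amount money
)
WITH (
    LOCATION = '/sales/',
    DATA_SOURCE = my_hadoop_ds,   -- hypothetical external data source
    FILE_FORMAT = my_text_ff,     -- hypothetical external file format
    REJECT_TYPE = percentage,
    REJECT_VALUE = 30,
    REJECT_SAMPLE_VALUE = 100
);
```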
REJECTED_ROW_LOCATION = Directory Location
Specifies the directory within the external data source to which the rejected rows and the corresponding error file
should be written. If the specified path does not exist, PolyBase will create one on your behalf. A child directory is
created with the name "_rejectedrows". The "_" character ensures that the directory is escaped for other data
processing unless explicitly named in the location parameter. Within this directory, there is a folder created based
on the time of load submission in the format YearMonthDay-HourMinuteSecond (for example, 20180330-173205). In
this folder, two types of files are written: the _reason file and the data file.
The reason files and the data files both have the queryID associated with the CTAS statement. Because the data
and the reason are in separate files, corresponding files have a matching suffix.
Sharded external table options
Specifies the external data source (a non-SQL Server data source) and a distribution method for the Elastic
Database query.
DATA_SOURCE
An external data source such as data stored in a Hadoop File System, Azure blob storage, or a shard map
manager.
SCHEMA_NAME
The SCHEMA_NAME clause provides the ability to map the external table definition to a table in a different
schema on the remote database. Use this to disambiguate between schemas that exist on both the local and
remote databases.
OBJECT_NAME
The OBJECT_NAME clause provides the ability to map the external table definition to a table with a different
name on the remote database. Use this to disambiguate between object names that exist on both the local and
remote databases.
DISTRIBUTION
Optional. This is required only for databases of type SHARD_MAP_MANAGER. It controls whether a table is
treated as a sharded table or a replicated table. With SHARDED (column name) tables, the data from
different tables does not overlap. REPLICATED specifies that tables have the same data on every shard.
ROUND_ROBIN indicates that an application-specific method is used to distribute the data.
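For an Elastic Database query against a SHARD_MAP_MANAGER data source, the sharded options might be combined as in this sketch; all object names are hypothetical:

```sql
-- Sketch: external table over a sharded remote table, distributed by customer_id.
CREATE EXTERNAL TABLE [dbo].[customer] (
    customer_id int NOT NULL,
    name nvarchar(256)
)
WITH (
    DATA_SOURCE = MyShardMapDataSrc,   -- hypothetical shard-map-manager data source
    SCHEMA_NAME = N'dbo',
    OBJECT_NAME = N'customer',
    DISTRIBUTION = SHARDED(customer_id)
);
```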
Permissions
Requires these user permissions:
CREATE TABLE
ALTER ANY SCHEMA
ALTER ANY EXTERNAL DATA SOURCE
ALTER ANY EXTERNAL FILE FORMAT
CONTROL DATABASE
Note, the login that creates the external data source must have permission to read and write to the external
data source, located in Hadoop or Azure blob storage.
IMPORTANT
The ALTER ANY EXTERNAL DATA SOURCE permission grants any principal the ability to create and modify any external data
source object, and therefore, it also grants the ability to access all database scoped credentials on the database. This
permission must be considered as highly privileged, and therefore must be granted only to trusted principals in the system.
Error Handling
While executing the CREATE EXTERNAL TABLE statement, PolyBase attempts to connect to the external data
source. If the attempt to connect fails, the statement will fail and the external table will not be created. It can take a
minute or more for the command to fail since PolyBase retries the connection before eventually failing the query.
General Remarks
In ad-hoc query scenarios, i.e. SELECT FROM EXTERNAL TABLE, PolyBase stores the rows retrieved from the
external data source in a temporary table. After the query completes, PolyBase removes and deletes the
temporary table. No permanent data is stored in SQL tables.
In contrast, in the import scenario, i.e. SELECT INTO FROM EXTERNAL TABLE, PolyBase stores the rows
retrieved from the external data source as permanent data in the SQL table. The new table is created during query
execution when Polybase retrieves the external data.
PolyBase can push some of the query computation to Hadoop to improve query performance. This is called
predicate pushdown. To enable this, specify the Hadoop resource manager location option in CREATE EXTERNAL
DATA SOURCE (Transact-SQL ).
You can create numerous external tables that reference the same or different external data sources.
Locking
Shared lock on the SCHEMARESOLUTION object.
Security
The data files for an external table are stored in Hadoop or Azure blob storage. These data files are created and
managed by your own processes. It is your responsibility to manage the security of the external data.
Examples
A. Create an external table with data in text-delimited format.
This example shows all the steps required to create an external table that has data formatted in text-delimited files.
It defines an external data source mydatasource and an external file format myfileformat. These database-level
objects are then referenced in the CREATE EXTERNAL TABLE statement. For more information, see CREATE
EXTERNAL DATA SOURCE (Transact-SQL ) and CREATE EXTERNAL FILE FORMAT (Transact-SQL ).
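The statements described above might be sketched as follows. The object names mydatasource, myfileformat, and ClickStream come from this example; the Hadoop address, column list, and file location are illustrative assumptions:

```sql
-- Sketch: data source, file format, and external table for delimited text.
CREATE EXTERNAL DATA SOURCE mydatasource
WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://10.10.10.10:8020'  -- hypothetical Namenode address
);

CREATE EXTERNAL FILE FORMAT myfileformat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
);

CREATE EXTERNAL TABLE ClickStream (
    url varchar(50),
    event_date date,
    user_IP varchar(50)
)
WITH (
    LOCATION = '/webdata/clickstream.tbl',  -- hypothetical file path
    DATA_SOURCE = mydatasource,
    FILE_FORMAT = myfileformat
);
```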
SELECT url.description
FROM ClickStream cs
JOIN UrlDescription url ON cs.url = url.name
WHERE cs.url = 'msdn.microsoft.com'
;
See Also
Common Metadata Query Examples (SQL Server PDW )
CREATE EXTERNAL DATA SOURCE (Transact-SQL )
CREATE EXTERNAL FILE FORMAT (Transact-SQL )
CREATE EXTERNAL TABLE AS SELECT (Transact-SQL )
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
CREATE EXTERNAL TABLE AS SELECT (Transact-
SQL)
5/16/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates an external table and then exports, in parallel, the results of a Transact-SQL SELECT statement to Hadoop
or Azure Storage Blob.
Transact-SQL Syntax Conventions (Transact-SQL )
Syntax
CREATE EXTERNAL TABLE [ [database_name . [ schema_name ] . ] | schema_name . ] table_name
WITH (
LOCATION = 'hdfs_folder',
DATA_SOURCE = external_data_source_name,
FILE_FORMAT = external_file_format_name
[ , <reject_options> [ ,...n ] ]
)
AS <select_statement>
[;]
<reject_options> ::=
{
| REJECT_TYPE = value | percentage
| REJECT_VALUE = reject_value
| REJECT_SAMPLE_VALUE = reject_sample_value
}
<select_statement> ::=
[ WITH <common_table_expression> [ ,...n ] ]
SELECT <select_criteria>
Arguments
[ [ database_name . [ schema_name ] . ] | schema_name . ] table_name
The one to three-part name of the table to create in the database. For an external table, only the table metadata is
stored in the relational database.
LOCATION = 'hdfs_folder'
Specifies where to write the results of the SELECT statement on the external data source. The location is a folder
name and can optionally include a path that is relative to the root folder of the Hadoop Cluster or Azure Storage
Blob. PolyBase will create the path and folder if it does not already exist.
The external files are written to hdfs_folder and named QueryID_date_time_ID.format, where ID is an incremental
identifier and format is the exported data format. For example, QID776_20160130_182739_0.orc.
DATA_SOURCE = external_data_source_name
Specifies the name of the external data source object that contains the location where the external data is stored
or will be stored. The location is either a Hadoop Cluster or an Azure Storage Blob. To create an external data
source, use CREATE EXTERNAL DATA SOURCE (Transact-SQL).
FILE_FORMAT = external_file_format_name
Specifies the name of the external file format object that contains the format for the external data file. To create an
external file format, use CREATE EXTERNAL FILE FORMAT (Transact-SQL).
Reject Options
The reject options do not apply at the time this CREATE EXTERNAL TABLE AS SELECT statement is run. Instead,
they are specified here so that the database can use them at a later time when it imports data from the external
table. Later, when the CREATE TABLE AS SELECT statement selects data from the external table, the database will
use the reject options to determine the number or percentage of rows that can fail to import before it stops the
import.
REJECT_VALUE = reject_value
Specifies the value or the percentage of rows that can fail to import before the database halts the import.
REJECT_TYPE = value | percentage
Clarifies whether the REJECT_VALUE option is specified as a literal value or a percentage.
value
REJECT_VALUE is a literal value, not a percentage. The database will stop importing rows from the external data
file when the number of failed rows exceeds reject_value.
For example, if REJECT_VALUE = 5 and REJECT_TYPE = value, the database will stop importing rows after 5
rows have failed to import.
percentage
REJECT_VALUE is a percentage, not a literal value. The database will stop importing rows from the external data
file when the percentage of failed rows exceeds reject_value. The percentage of failed rows is calculated at
intervals.
REJECT_SAMPLE_VALUE = reject_sample_value
Required when REJECT_TYPE = percentage, this specifies the number of rows to attempt to import before the
database recalculates the percentage of failed rows.
For example, if REJECT_SAMPLE_VALUE = 1000, the database will calculate the percentage of failed rows after it
has attempted to import 1000 rows from the external data file. If the percentage of failed rows is less than
reject_value, the database will attempt to load another 1000 rows. The database continues to recalculate the
percentage of failed rows after it attempts to import each additional 1000 rows.
NOTE
Since the database computes the percentage of failed rows at intervals, the actual percentage of failed rows can exceed
reject_value.
Example:
This example shows how the three REJECT options interact with each other. For example, if REJECT_TYPE =
percentage, REJECT_VALUE = 30, and REJECT_SAMPLE_VALUE = 100, the following scenario could occur:
The database attempts to load the first 100 rows; 25 fail and 75 succeed.
Percent of failed rows is calculated as 25%, which is less than the reject value of 30%. So, no need to halt
the load.
The database attempts to load the next 100 rows; this time 25 succeed and 75 fail.
Percent of failed rows is recalculated as 50%. The percentage of failed rows has exceeded the 30% reject
value.
The load fails with 50% failed rows after attempting to load 200 rows, which is larger than the specified
30% limit.
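Expressed as a CETAS statement, the reject settings from this scenario go in the WITH clause. In this sketch, only the three REJECT options come from the scenario above; the table, data source, file format, and SELECT are illustrative placeholders:

```sql
-- Illustrative names; the REJECT settings match the scenario above.
CREATE EXTERNAL TABLE ext_sales
WITH (
    LOCATION = '/sales_out/',
    DATA_SOURCE = mydatasource,
    FILE_FORMAT = myfileformat,
    REJECT_TYPE = percentage,
    REJECT_VALUE = 30,
    REJECT_SAMPLE_VALUE = 100
)
AS SELECT * FROM dbo.FactSales;
```

Remember that these options take effect only later, when data is imported back from the external table, not when this statement runs.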
WITH common_table_expression
Specifies a temporary named result set, known as a common table expression (CTE). For more information, see WITH common_table_expression (Transact-SQL).
SELECT <select_criteria>
Populates the new table with the results from a SELECT statement. select_criteria is the body of the SELECT statement that determines which data to copy to the new table. For information about SELECT statements, see SELECT (Transact-SQL).
Permissions
To run this command the database user needs all of these permissions or memberships:
ALTER SCHEMA permission on the local schema that will contain the new table or membership in the
db_ddladmin fixed database role.
CREATE TABLE permission or membership in the db_ddladmin fixed database role.
SELECT permission on any objects referenced in the select_criteria.
The login needs all of these permissions:
ADMINISTER BULK OPERATIONS permission
ALTER ANY EXTERNAL DATA SOURCE permission
ALTER ANY EXTERNAL FILE FORMAT permission
The login must have write permission to read and write to the external folder on the Hadoop Cluster or
Azure Storage Blob.
IMPORTANT
The ALTER ANY EXTERNAL DATA SOURCE permission grants any principal the ability to create and modify any
external data source object, and therefore, it also grants the ability to access all database scoped credentials on the
database. This permission must be considered as highly privileged, and therefore must be granted only to trusted
principals in the system.
Error Handling
When CREATE EXTERNAL TABLE AS SELECT exports data to a text-delimited file, there is no rejection file for
rows that fail to export.
When creating the external table, the database attempts to connect to the external Hadoop cluster or Azure
Storage Blob. If the connection fails, the command will fail and the external table will not be created. It can take a
minute or more for the command to fail since the database retries the connection at least 3 times.
If CREATE EXTERNAL TABLE AS SELECT is cancelled or fails, the database will make a one-time attempt to
remove any new files and folders already created on the external data source.
The database will report any Java errors that occur on the external data source during the data export.
General Remarks
After the CETAS statement finishes, you can run Transact-SQL queries on the external table. These operations will
import data into the database for the duration of the query unless you import by using the CREATE TABLE AS
SELECT statement.
The external table name and definition are stored in the database metadata. The data is stored in the external data
source.
The external files are named QueryID_date_time_ID.format, where ID is an incremental identifier and format is
the exported data format. For example, QID776_20160130_182739_0.orc.
The CETAS statement always creates a non-partitioned table, even if the source table is partitioned.
For query plans, created with EXPLAIN, the database uses these query plan operations for external tables:
External shuffle move
External broadcast move
External partition move
APPLIES TO: Parallel Data Warehouse
As a prerequisite for creating an external table, the appliance administrator needs to configure Hadoop connectivity. For more information, see Configure Connectivity to External Data (Analytics Platform System) in the APS documentation, which you can download from here.
Locking
Takes a shared lock on the SCHEMARESOLUTION object.
Examples
A. Create a Hadoop table using CREATE EXTERNAL TABLE AS SELECT (CETAS)
The following example creates a new external table named hdfsCustomer, using the column definitions and data
from the source table dimCustomer.
The table definition is stored in the database, and the results of the SELECT statement are exported to the
'/pdwdata/customer.tbl' file on the Hadoop external data source customer_ds. The file is formatted according to
the external file format customer_ff.
The file name is generated by the database, and contains the query ID for ease of aligning the file with the query
that generated it.
The path hdfs://xxx.xxx.xxx.xxx:5000/files/ preceding the Customer directory must already exist. However, if
the Customer directory does not exist, the database will create the directory.
NOTE
This example specifies 5000 for the port. If the port is not specified, the database uses 8020 as the default port.
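The statement itself is not shown in this excerpt. Based on the object names given above, it would be along these lines (the SELECT list is a sketch; the actual example selects from dimCustomer):

```sql
-- Exports dimCustomer to the '/pdwdata/customer.tbl' folder on the
-- customer_ds Hadoop data source, formatted per customer_ff.
CREATE EXTERNAL TABLE hdfsCustomer
WITH (
    LOCATION = '/pdwdata/customer.tbl',
    DATA_SOURCE = customer_ds,
    FILE_FORMAT = customer_ff
)
AS SELECT * FROM dimCustomer;
```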
See Also
CREATE EXTERNAL DATA SOURCE (Transact-SQL)
CREATE EXTERNAL FILE FORMAT (Transact-SQL)
CREATE EXTERNAL TABLE (Transact-SQL)
CREATE TABLE (Azure SQL Data Warehouse, Parallel Data Warehouse)
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
DROP TABLE (Transact-SQL)
ALTER TABLE (Transact-SQL)
CREATE FULLTEXT CATALOG (Transact-SQL)
5/3/2018 • 3 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a full-text catalog for a database. One full-text catalog can have several full-text indexes, but a full-text
index can only be part of one full-text catalog. Each database can contain zero or more full-text catalogs.
You cannot create full-text catalogs in the master, model, or tempdb databases.
IMPORTANT
Beginning with SQL Server 2008, a full-text catalog is a virtual object and does not belong to any filegroup. A full-text
catalog is a logical concept that refers to a group of full-text indexes.
Syntax
CREATE FULLTEXT CATALOG catalog_name
[ON FILEGROUP filegroup ]
[IN PATH 'rootpath']
[WITH <catalog_option>]
[AS DEFAULT]
[AUTHORIZATION owner_name ]
<catalog_option>::=
ACCENT_SENSITIVITY = {ON|OFF}
Arguments
catalog_name
Is the name of the new catalog. The catalog name must be unique among all catalog names in the current
database. Also, the name of the file that corresponds to the full-text catalog (see ON FILEGROUP ) must be unique
among all files in the database. If the name of the catalog is already used for another catalog in the database, SQL
Server returns an error.
The length of the catalog name cannot exceed 120 characters.
ON FILEGROUP filegroup
Beginning with SQL Server 2008, this clause has no effect.
IN PATH 'rootpath'
NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature.
Remarks
Full-text catalog IDs start at 00005 and are incremented by one for each new catalog created.
Permissions
User must have CREATE FULLTEXT CATALOG permission on the database, or be a member of the db_owner, or
db_ddladmin fixed database roles.
Examples
The following example creates a full-text catalog and also a full-text index.
USE AdventureWorks2012;
GO
CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT;
GO
CREATE FULLTEXT INDEX ON HumanResources.JobCandidate(Resume) KEY INDEX PK_JobCandidate_JobCandidateID;
GO
See Also
sys.fulltext_catalogs (Transact-SQL)
ALTER FULLTEXT CATALOG (Transact-SQL)
DROP FULLTEXT CATALOG (Transact-SQL)
Full-Text Search
CREATE FULLTEXT INDEX (Transact-SQL)
5/3/2018 • 9 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a full-text index on a table or indexed view in a database in SQL Server. Only one full-text index is allowed
per table or indexed view, and each full-text index applies to a single table or indexed view. A full-text index can
contain up to 1024 columns.
Transact-SQL Syntax Conventions
Syntax
CREATE FULLTEXT INDEX ON table_name
[ ( { column_name
[ TYPE COLUMN type_column_name ]
[ LANGUAGE language_term ]
[ STATISTICAL_SEMANTICS ]
} [ ,...n]
) ]
KEY INDEX index_name
[ ON <catalog_filegroup_option> ]
[ WITH [ ( ] <with_option> [ ,...n] [ ) ] ]
[;]
<catalog_filegroup_option>::=
{
fulltext_catalog_name
| ( fulltext_catalog_name, FILEGROUP filegroup_name )
| ( FILEGROUP filegroup_name, fulltext_catalog_name )
| ( FILEGROUP filegroup_name )
}
<with_option>::=
{
CHANGE_TRACKING [ = ] { MANUAL | AUTO | OFF [, NO POPULATION ] }
| STOPLIST [ = ] { OFF | SYSTEM | stoplist_name }
| SEARCH PROPERTY LIST [ = ] property_list_name
}
Arguments
table_name
Is the name of the table or indexed view that contains the column or columns included in the full-text index.
column_name
Is the name of the column included in the full-text index. Only columns of type char, varchar, nchar, nvarchar,
text, ntext, image, xml, and varbinary(max) can be indexed for full-text search. To specify multiple columns,
repeat the column_name clause as follows:
CREATE FULLTEXT INDEX ON table_name (column_name1 […], column_name2 […]) …
TYPE COLUMN type_column_name
Specifies the name of a table column, type_column_name, that is used to hold the document type for a
varbinary(max) or image document. This column, known as the type column, contains a user-supplied file
extension (.doc, .pdf, .xls, and so forth). The type column must be of type char, nchar, varchar, or nvarchar.
Specify TYPE COLUMN type_column_name only if column_name specifies a varbinary(max) or image column,
in which data is stored as binary data; otherwise, SQL Server returns an error.
NOTE
At indexing time, the Full-Text Engine uses the abbreviation in the type column of each table row to identify which full-text
search filter to use for the document in column_name. The filter loads the document as a binary stream, removes the
formatting information, and sends the text from the document to the word-breaker component. For more information, see
Configure and Manage Filters for Search.
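As a sketch of the TYPE COLUMN clause, the following uses a hypothetical table in which DocContent holds the binary document and DocExtension holds its file extension; the table, columns, and key index name are assumptions:

```sql
-- Hypothetical table: DocContent is varbinary(max), DocExtension holds
-- the file extension ('.doc', '.pdf', ...) used to pick the filter.
CREATE FULLTEXT INDEX ON dbo.Documents
(
    DocContent TYPE COLUMN DocExtension LANGUAGE 1033
)
KEY INDEX PK_Documents
WITH STOPLIST = SYSTEM;
```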
LANGUAGE language_term
Is the language of the data stored in column_name.
language_term is optional and can be specified as a string, integer, or hexadecimal value corresponding to the
locale identifier (LCID) of a language. If no value is specified, the default language of the SQL Server instance is
used.
If language_term is specified, the language it represents will be used to index data stored in char, nchar, varchar,
nvarchar, text, and ntext columns. This language is the default language used at query time if language_term is
not specified as part of a full-text predicate against the column.
When specified as a string, language_term corresponds to the alias column value in the syslanguages system
table. The string must be enclosed in single quotation marks, as in 'language_term'. When specified as an integer,
language_term is the actual LCID that identifies the language. When specified as a hexadecimal value,
language_term is 0x followed by the hex value of the LCID. The hex value must not exceed eight digits, including
leading zeros.
If the value is in double-byte character set (DBCS) format, SQL Server will convert it to Unicode.
Resources, such as word breakers and stemmers, must be enabled for the language specified as language_term. If
such resources do not support the specified language, SQL Server returns an error.
Use the sp_configure stored procedure to access information about the default full-text language of the Microsoft
SQL Server instance. For more information, see sp_configure (Transact-SQL).
For non-BLOB and non-XML columns containing text data in multiple languages, or for cases when the language
of the text stored in the column is unknown, it might be appropriate for you to use the neutral (0x0) language
resource. However, first you should understand the possible consequences of using the neutral (0x0) language
resource. For information about the possible solutions and consequences of using the neutral (0x0) language
resource, see Choose a Language When Creating a Full-Text Index.
For documents stored in XML - or BLOB -type columns, the language encoding within the document will be used at
indexing time. For example, in XML columns, the xml:lang attribute in XML documents will identify the language.
At query time, the value previously specified in language_term becomes the default language used for full-text
queries unless language_term is specified as part of a full-text query.
STATISTICAL_SEMANTICS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Creates the additional key phrase and document similarity indexes that are part of statistical semantic indexing.
For more information, see Semantic Search (SQL Server).
KEY INDEX index_name
Is the name of the unique key index on table_name. The KEY INDEX must be a unique, single-key, non-nullable
column. Select the smallest unique key index for the full-text unique key. For the best performance, we recommend
an integer data type for the full-text key.
fulltext_catalog_name
Is the full-text catalog used for the full-text index. The catalog must already exist in the database. This clause is
optional. If it is not specified, a default catalog is used. If no default catalog exists, SQL Server returns an error.
FILEGROUP filegroup_name
Creates the specified full-text index on the specified filegroup. The filegroup must already exist. If the FILEGROUP
clause is not specified, the full-text index is placed in the same filegroup as base table or view for a nonpartitioned
table or in the primary filegroup for a partitioned table.
CHANGE_TRACKING [ = ] { MANUAL | AUTO | OFF [ , NO POPULATION ] }
Specifies whether changes (updates, deletes or inserts) made to table columns that are covered by the full-text
index will be propagated by SQL Server to the full-text index. Data changes through WRITETEXT and
UPDATETEXT are not reflected in the full-text index, and are not picked up with change tracking.
MANUAL
Specifies that the tracked changes must be propagated manually by calling the ALTER FULLTEXT INDEX … START
UPDATE POPULATION Transact-SQL statement (manual population). You can use SQL Server Agent to call this
Transact-SQL statement periodically.
AUTO
Specifies that the tracked changes will be propagated automatically as data is modified in the base table
(automatic population). Although changes are propagated automatically, these changes might not be reflected
immediately in the full-text index. AUTO is the default.
OFF [ , NO POPULATION ]
Specifies that SQL Server does not keep a list of changes to the indexed data. When NO POPULATION is not
specified, SQL Server populates the index fully after it is created.
The NO POPULATION option can be used only when CHANGE_TRACKING is OFF. When NO POPULATION is
specified, SQL Server does not populate an index after it is created. The index is only populated after the user
executes the ALTER FULLTEXT INDEX command with the START FULL POPULATION or START INCREMENTAL
POPULATION clause.
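For instance, the following sketch creates an index with no change tracking and no initial population, then populates it later; it reuses the JobCandidate table and key index from the example in this topic:

```sql
-- Create the full-text index without tracking changes and without populating it.
CREATE FULLTEXT INDEX ON HumanResources.JobCandidate(Resume)
KEY INDEX PK_JobCandidate_JobCandidateID
WITH CHANGE_TRACKING OFF, NO POPULATION;
GO
-- Later, for example during off-peak hours, start a full population.
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
START FULL POPULATION;
```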
STOPLIST [ = ] { OFF | SYSTEM | stoplist_name }
Associates a full-text stoplist with the index. The index is not populated with any tokens that are part of the
specified stoplist. If STOPLIST is not specified, SQL Server associates the system full-text stoplist with the index.
OFF
Specifies that no stoplist be associated with the full-text index.
SYSTEM
Specifies that the default full-text system STOPLIST should be used for this full-text index.
stoplist_name
Specifies the name of the stoplist to be associated with the full-text index.
SEARCH PROPERTY LIST [ = ] property_list_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Associates a search property list with the index.
OFF
Specifies that no property list be associated with the full-text index.
property_list_name
Specifies the name of the search property list to associate with the full-text index.
Remarks
For more information about full-text indexes, see Create and Manage Full-Text Indexes.
On xml columns, you can create a full-text index that indexes the content of the XML elements, but ignores the
XML markup. Attribute values are full-text indexed unless they are numeric values. Element tags are used as token
boundaries. Well-formed XML or HTML documents and fragments containing multiple languages are supported.
For more information, see Use Full-Text Search with XML Columns.
We recommend that the index key column is an integer data type. This provides optimizations at query execution
time.
For more information about populating full-text indexes, see Populate Full-Text Indexes.
Permissions
User must have REFERENCES permission on the full-text catalog and have ALTER permission on the table or
indexed view, or be a member of the sysadmin fixed server role, or db_owner, or db_ddladmin fixed database roles.
If SET STOPLIST is specified, the user must have REFERENCES permission on the specified stoplist. The owner of
the STOPLIST can grant this permission.
NOTE
The public is granted REFERENCE permission to the default stoplist that is shipped with SQL Server.
Examples
A. Creating a unique index, a full-text catalog, and a full-text index
The following example creates a unique index on the JobCandidateID column of the HumanResources.JobCandidate
table of the AdventureWorks2012 sample database. The example then creates a default full-text catalog, ft.
Finally, the example creates a full-text index on the Resume column, using the ft catalog and the system stoplist.
CREATE UNIQUE INDEX ui_ukJobCand ON HumanResources.JobCandidate(JobCandidateID);
CREATE FULLTEXT CATALOG ft AS DEFAULT;
CREATE FULLTEXT INDEX ON HumanResources.JobCandidate(Resume)
KEY INDEX ui_ukJobCand
WITH STOPLIST = SYSTEM;
GO
The example specifies the SYSTEM stoplist. It also specifies a search property list, DocumentPropertyList; for an
example that creates this property list, see CREATE SEARCH PROPERTY LIST (Transact-SQL).
The example specifies that change tracking is off with no population. Later, during off-peak hours, the example uses
an ALTER FULLTEXT INDEX statement to start a full population on the new index and enable automatic change
tracking.
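The code for the example described in the last two paragraphs is not shown in this excerpt. A hedged reconstruction follows; the table, columns, and key index name are hypothetical, while DocumentPropertyList is the property list named in the text above:

```sql
-- Hypothetical table and key index; DocumentPropertyList comes from the text.
CREATE FULLTEXT INDEX ON Production.Document (Document TYPE COLUMN FileExtension)
KEY INDEX PK_Document_DocumentID
WITH STOPLIST = SYSTEM,
     SEARCH PROPERTY LIST = DocumentPropertyList,
     CHANGE_TRACKING OFF, NO POPULATION;
```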
See Also
Create and Manage Full-Text Indexes
ALTER FULLTEXT INDEX (Transact-SQL)
DROP FULLTEXT INDEX (Transact-SQL)
Full-Text Search
GRANT (Transact-SQL)
sys.fulltext_indexes (Transact-SQL)
Search Document Properties with Search Property Lists
CREATE FULLTEXT STOPLIST (Transact-SQL)
5/3/2018 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new full-text stoplist in the current database.
Stopwords are managed in databases by using objects called stoplists. A stoplist is a list of stopwords that, when
associated with a full-text index, is applied to full-text queries on that index. For more information, see Configure
and Manage Stopwords and Stoplists for Full-Text Search.
IMPORTANT
CREATE FULLTEXT STOPLIST, ALTER FULLTEXT STOPLIST, and DROP FULLTEXT STOPLIST are supported only under
compatibility level 100. Under compatibility levels 80 and 90, these statements are not supported. However, under all
compatibility levels the system stoplist is automatically associated with new full-text indexes.
Syntax
CREATE FULLTEXT STOPLIST stoplist_name
[ FROM { [ database_name.]source_stoplist_name } | SYSTEM STOPLIST ]
[ AUTHORIZATION owner_name ]
;
Arguments
stoplist_name
Is the name of the stoplist. stoplist_name can be a maximum of 128 characters. stoplist_name must be unique
among all stoplists in the current database, and conform to the rules for identifiers.
stoplist_name will be used when the full-text index is created.
database_name
Is the name of the database where the stoplist specified by source_stoplist_name is located. If not specified,
database_name defaults to the current database.
source_stoplist_name
Specifies that the new stoplist is created by copying an existing stoplist. If source_stoplist_name does not exist, or
the database user does not have correct permissions, CREATE FULLTEXT STOPLIST fails with an error. If any
languages specified in the stop words of the source stoplist are not registered in the current database, CREATE
FULLTEXT STOPLIST succeeds, but warning(s) are returned and the corresponding stop words are not added.
SYSTEM STOPLIST
Specifies that the new stoplist is created from the stoplist that exists by default in the Resource database.
AUTHORIZATION owner_name
Specifies the name of a database principal to own the stoplist. owner_name must either be the name of a
principal of which the current user is a member, or the current user must have IMPERSONATE permission on
owner_name. If not specified, ownership is given to the current user.
Remarks
The creator of a stoplist is its owner.
Permissions
Creating a stoplist requires CREATE FULLTEXT CATALOG permission. The stoplist owner can grant
CONTROL permission explicitly on a stoplist to allow users to add and remove words and to drop the stoplist.
NOTE
Using a stoplist with a full-text index requires REFERENCE permission.
Examples
A. Creating a new full-text stoplist
The following example creates a new full-text stoplist named myStoplist.
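The statement itself is omitted from this excerpt; it is simply:

```sql
CREATE FULLTEXT STOPLIST myStoplist;
```

To seed a new stoplist from the system stoplist instead, the FROM clause can be used (the name myStoplist2 here is illustrative):

```sql
CREATE FULLTEXT STOPLIST myStoplist2 FROM SYSTEM STOPLIST;
```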
See Also
ALTER FULLTEXT STOPLIST (Transact-SQL)
DROP FULLTEXT STOPLIST (Transact-SQL)
Configure and Manage Stopwords and Stoplists for Full-Text Search
sys.fulltext_stoplists (Transact-SQL)
sys.fulltext_stopwords (Transact-SQL)
CREATE FUNCTION (Transact-SQL)
5/3/2018 • 27 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a user-defined function in SQL Server and Azure SQL Database. A user-defined function is a Transact-
SQL or common language runtime (CLR ) routine that accepts parameters, performs an action, such as a complex
calculation, and returns the result of that action as a value. The return value can either be a scalar (single) value or
a table. Use this statement to create a reusable routine that can be used in these ways:
In Transact-SQL statements such as SELECT
In applications calling the function
In the definition of another user-defined function
To parameterize a view or improve the functionality of an indexed view
To define a column in a table
To define a CHECK constraint on a column
To replace a stored procedure
Use an inline function as a filter predicate for a security policy
NOTE
The integration of .NET Framework CLR into SQL Server is discussed in this topic. CLR integration does not apply to Azure
SQL Database.
Syntax
-- Transact-SQL Scalar Function Syntax
CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
[ = default ] [ READONLY ] }
[ ,...n ]
]
)
RETURNS return_data_type
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN scalar_expression
END
[ ; ]
-- Transact-SQL Inline Table-Valued Function Syntax
CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
[ = default ] [ READONLY ] }
[ ,...n ]
]
)
RETURNS TABLE
[ WITH <function_option> [ ,...n ] ]
[ AS ]
RETURN [ ( ] select_stmt [ ) ]
[ ; ]
<table_type_definition>:: =
( { <column_definition> <column_constraint>
| <computed_column_definition> }
[ <table_constraint> ] [ ,...n ]
)
<column_definition>::=
{
{ column_name data_type }
[ [ DEFAULT constant_expression ]
[ COLLATE collation_name ] | [ ROWGUIDCOL ]
]
| [ IDENTITY [ (seed , increment ) ] ]
[ <column_constraint> [ ...n ] ]
}
<column_constraint>::=
{
[ NULL | NOT NULL ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[ WITH FILLFACTOR = fillfactor
| WITH ( < index_option > [ , ...n ] )
[ ON { filegroup | "default" } ]
| [ CHECK ( logical_expression ) ] [ ,...n ]
}
<computed_column_definition>::=
column_name AS computed_column_expression
<table_constraint>::=
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
( column_name [ ASC | DESC ] [ ,...n ] )
[ WITH FILLFACTOR = fillfactor
| WITH ( <index_option> [ , ...n ] )
| [ CHECK ( logical_expression ) ] [ ,...n ]
}
<index_option>::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS ={ ON | OFF }
}
-- CLR Scalar Function Syntax
CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( { @parameter_name [AS] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
)
RETURNS { return_data_type }
[ WITH <clr_function_option> [ ,...n ] ]
[ AS ] EXTERNAL NAME <method_specifier>
[ ; ]
<method_specifier>::=
assembly_name.class_name.method_name
<clr_function_option>::=
{
[ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
| [ EXECUTE_AS_Clause ]
}
<clr_table_type_definition>::=
( { column_name data_type } [ ,...n ] )
-- In-Memory OLTP: Syntax for natively compiled, scalar user-defined function
CREATE [ OR ALTER ] FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
[ NULL | NOT NULL ] [ = default ] [ READONLY ] }
[ ,...n ]
]
)
RETURNS return_data_type
WITH <function_option> [ ,...n ]
[ AS ]
BEGIN ATOMIC WITH (set_option [ ,... n ])
function_body
RETURN scalar_expression
END
<function_option>::=
{
| NATIVE_COMPILATION
| SCHEMABINDING
| [ EXECUTE_AS_Clause ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
}
Arguments
OR ALTER
Applies to: Azure SQL Database, SQL Server (starting with SQL Server 2016 (13.x) SP1).
Conditionally alters the function only if it already exists.
NOTE
Optional [OR ALTER] syntax for CLR is available starting with SQL Server 2016 (13.x) SP1 CU1.
schema_name
Is the name of the schema to which the user-defined function belongs.
function_name
Is the name of the user-defined function. Function names must comply with the rules for identifiers and must be
unique within the database and to its schema.
NOTE
Parentheses are required after the function name even if a parameter is not specified.
@parameter_name
Is a parameter in the user-defined function. One or more parameters can be declared.
A function can have a maximum of 2,100 parameters. The value of each declared parameter must be supplied by
the user when the function is executed, unless a default for the parameter is defined.
Specify a parameter name by using an at sign (@) as the first character. The parameter name must comply with
the rules for identifiers. Parameters are local to the function; the same parameter names can be used in other
functions. Parameters can take the place only of constants; they cannot be used instead of table names, column
names, or the names of other database objects.
NOTE
ANSI_WARNINGS is not honored when you pass parameters in a stored procedure, user-defined function, or when you
declare and set variables in a batch statement. For example, if a variable is defined as char(3), and then set to a value larger
than three characters, the data is truncated to the defined size and the INSERT or UPDATE statement succeeds.
[ type_schema_name. ] parameter_data_type
Is the parameter data type, and optionally the schema to which it belongs. For Transact-SQL functions, all data
types, including CLR user-defined types and user-defined table types, are allowed except the timestamp data
type. For CLR functions, all data types, including CLR user-defined types, are allowed except text, ntext, image,
user-defined table types and timestamp data types. The nonscalar types, cursor and table, cannot be specified as
a parameter data type in either Transact-SQL or CLR functions.
If type_schema_name is not specified, the Database Engine looks for the scalar_parameter_data_type in the
following order:
The schema that contains the names of SQL Server system data types.
The default schema of the current user in the current database.
The dbo schema in the current database.
[ =default ]
Is a default value for the parameter. If a default value is defined, the function can be executed without
specifying a value for that parameter.
NOTE
Default parameter values can be specified for CLR functions except for the varchar(max) and varbinary(max) data types.
When a parameter of the function has a default value, the keyword DEFAULT must be specified when the function
is called to retrieve the default value. This behavior is different from using parameters with default values in
stored procedures in which omitting the parameter also implies the default value. However, the DEFAULT
keyword is not required when invoking a scalar function by using the EXECUTE statement.
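The difference can be sketched with a hypothetical scalar function that has a defaulted parameter (all names and the formula are illustrative):

```sql
-- Hypothetical function with a defaulted second parameter (illustrative only)
CREATE FUNCTION dbo.DiscountPrice (@price money, @discount float = 0.10)
RETURNS money
AS
BEGIN
    RETURN @price * (1.0 - @discount);
END;
GO

-- Invoked in a SELECT, the DEFAULT keyword is required to get the default value:
SELECT dbo.DiscountPrice(100.00, DEFAULT);

-- Invoked through EXECUTE, the defaulted parameter can simply be omitted:
DECLARE @result money;
EXECUTE @result = dbo.DiscountPrice @price = 100.00;
```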
READONLY
Indicates that the parameter cannot be updated or modified within the definition of the function. If the parameter
type is a user-defined table type, READONLY should be specified.
return_data_type
Is the return value of a scalar user-defined function. For Transact-SQL functions, all data types, including CLR
user-defined types, are allowed except the timestamp data type. For CLR functions, all data types, including CLR
user-defined types, are allowed except the text, ntext, image, and timestamp data types. The nonscalar types,
cursor and table, cannot be specified as a return data type in either Transact-SQL or CLR functions.
function_body
Specifies that a series of Transact-SQL statements, which together do not produce a side effect such as modifying
a table, define the value of the function. function_body is used only in scalar functions and multistatement table-
valued functions.
In scalar functions, function_body is a series of Transact-SQL statements that together evaluate to a scalar value.
In multistatement table-valued functions, function_body is a series of Transact-SQL statements that populate a
TABLE return variable.
scalar_expression
Specifies the scalar value that the scalar function returns.
TABLE
Specifies that the return value of the table-valued function is a table. Only constants and @local_variables can be
passed to table-valued functions.
In inline table-valued functions, the TABLE return value is defined through a single SELECT statement. Inline
functions do not have associated return variables.
In multistatement table-valued functions, @return_variable is a TABLE variable, used to store and accumulate the
rows that should be returned as the value of the function. @return_variable can be specified only for Transact-
SQL functions and not for CLR functions.
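A minimal multistatement table-valued function showing the @return_variable pattern (the function name and logic are illustrative):

```sql
-- Populates a TABLE return variable row by row, then returns it (illustrative)
CREATE FUNCTION dbo.NumbersUpTo (@max int)
RETURNS @Numbers TABLE (n int NOT NULL)
AS
BEGIN
    DECLARE @i int = 1;
    WHILE @i <= @max
    BEGIN
        INSERT INTO @Numbers (n) VALUES (@i);
        SET @i += 1;
    END;
    RETURN;
END;
```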
WARNING
Joining to a multistatement table valued function in a FROM clause is possible, but can give poor performance. SQL Server
is unable to use all the optimized techniques against some statements that can be included in a multistatement function,
resulting in a suboptimal query plan. To obtain the best possible performance, whenever possible use joins between base
tables instead of functions.
select_stmt
Is the single SELECT statement that defines the return value of an inline table-valued function.
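By contrast, a minimal inline table-valued function defines its result entirely through that single SELECT (table and column names are hypothetical):

```sql
-- Inline TVF: the TABLE return value is the single SELECT statement (illustrative)
CREATE FUNCTION dbo.GetCustomerOrders (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
```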
ORDER (<order_clause>) Specifies the order in which results are returned from the table-valued function.
For more information, see the section, "Guidance on Using Sort Order," later in this topic.
EXTERNAL NAME <method_specifier> assembly_name.class_name.method_name Applies to: SQL Server
2008 through SQL Server 2017.
Specifies the assembly and method to which the created function name shall refer.
assembly_name - must match a value in the name column of
SELECT * FROM sys.assemblies; .
This is the name that was used on the CREATE ASSEMBLY statement.
class_name - must match a value in the assembly_name column of
SELECT * FROM sys.assembly_modules; .
Often the value contains an embedded period or dot. In such cases the Transact-SQL syntax requires that
the value be bounded with a pair of straight brackets [], or with a pair of double quotation marks "".
method_name - must match a value in the method_name column of
SELECT * FROM sys.assembly_modules; .
The method must be static.
In a typical example, for MyFood.DLL, in which all types are in the MyFood namespace, the EXTERNAL NAME
value could be:
MyFood.[MyFood.MyClass].MyStaticMethod
NOTE
By default, SQL Server cannot execute CLR code. You can create, modify, and drop database objects that reference common
language runtime modules; however, you cannot execute these references in SQL Server until you enable the clr enabled
option. To enable this option, use sp_configure.
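For example:

```sql
-- Enable CLR execution at the instance level
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```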
NOTE
This option is not available in a contained database.
NOTE
EXECUTE AS cannot be specified for inline user-defined functions.
Best Practices
If a user-defined function is not created with the SCHEMABINDING clause, changes that are made to underlying
objects can affect the definition of the function and produce unexpected results when it is invoked. We
recommend that you implement one of the following methods to ensure that the function does not become
outdated because of changes to its underlying objects:
Specify the WITH SCHEMABINDING clause when you are creating the function. This ensures that the
objects referenced in the function definition cannot be modified unless the function is also modified.
Execute the sp_refreshsqlmodule stored procedure after modifying any object that is specified in the
definition of the function.
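Both approaches can be sketched as follows (object names are hypothetical):

```sql
-- Option 1: bind the function to its underlying objects at creation time
CREATE FUNCTION dbo.ActiveCustomerCount ()
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    RETURN (SELECT COUNT(*) FROM dbo.Customers WHERE IsActive = 1);
END;
GO

-- Option 2 (for functions created without SCHEMABINDING): refresh the
-- function's metadata after altering an object it references
EXEC sys.sp_refreshsqlmodule N'dbo.ActiveCustomerCount';
```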
Data Types
If parameters are specified in a CLR function, they should be SQL Server types as defined previously for
scalar_parameter_data_type. For information about comparing SQL Server system data types to CLR integration
data types or .NET Framework common language runtime data types, see Mapping CLR Parameter Data.
For SQL Server to reference the correct method when it is overloaded in a class, the method indicated in
<method_specifier> must have the following characteristics:
Receive the same number of parameters as specified in [ ,...n ].
Receive all the parameters by value, not by reference.
Use parameter types that are compatible with those specified in the SQL Server function.
If the return data type of the CLR function specifies a table type (RETURNS TABLE), the return data type of
the method in <method_specifier> should be of type IEnumerator or IEnumerable, and it is assumed
that the interface is implemented by the creator of the function. Unlike Transact-SQL functions, CLR
functions cannot include PRIMARY KEY, UNIQUE, or CHECK constraints in <table_type_definition>. The
data types of columns specified in <table_type_definition> must match the types of the corresponding
columns of the result set returned by the method in <method_specifier> at execution time. This type-
checking is not performed at the time the function is created.
For more information about how to program CLR functions, see CLR User-Defined Functions.
General Remarks
Scalar-valued functions can be invoked where scalar expressions are used. This includes computed columns and
CHECK constraint definitions. Scalar-valued functions can also be executed by using the EXECUTE statement.
Scalar-valued functions must be invoked by using at least the two-part name of the function. For more
information about multipart names, see Transact-SQL Syntax Conventions (Transact-SQL ). Table-valued functions
can be invoked where table expressions are allowed in the FROM clause of SELECT, INSERT, UPDATE, or
DELETE statements. For more information, see Execute User-defined Functions.
Interoperability
The following statements are valid in a function:
Assignment statements.
Control-of-Flow statements except TRY...CATCH statements.
DECLARE statements defining local data variables and local cursors.
SELECT statements that contain select lists with expressions that assign values to local variables.
Cursor operations referencing local cursors that are declared, opened, closed, and deallocated in the
function. Only FETCH statements that assign values to local variables using the INTO clause are allowed;
FETCH statements that return data to the client are not allowed.
INSERT, UPDATE, and DELETE statements modifying local table variables.
EXECUTE statements calling extended stored procedures.
For more information, see Create User-defined Functions (Database Engine).
Computed Column Interoperability
Functions have the following properties. The values of these properties determine whether functions can be used
in computed columns that can be persisted or indexed.
UserDataAccess: the function accesses user data in the local instance of SQL Server. Includes user-defined
tables and temp tables, but not table variables.
The precision and determinism properties of Transact-SQL functions are determined automatically by SQL
Server. The data access and determinism properties of CLR functions can be specified by the user. For more
information, see Overview of CLR Integration Custom Attributes.
To display the current values for these properties, use OBJECTPROPERTYEX.
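For example, to inspect these properties for a function (the function name is illustrative):

```sql
SELECT OBJECTPROPERTYEX(OBJECT_ID('dbo.ISOweek'), 'IsDeterministic')   AS IsDeterministic,
       OBJECTPROPERTYEX(OBJECT_ID('dbo.ISOweek'), 'IsPrecise')         AS IsPrecise,
       OBJECTPROPERTYEX(OBJECT_ID('dbo.ISOweek'), 'IsSystemVerified')  AS IsSystemVerified,
       OBJECTPROPERTYEX(OBJECT_ID('dbo.ISOweek'), 'UserDataAccess')    AS UserDataAccess,
       OBJECTPROPERTYEX(OBJECT_ID('dbo.ISOweek'), 'SystemDataAccess')  AS SystemDataAccess;
```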
Functions must be created with schema binding to be deterministic.
A computed column that invokes a user-defined function can be used in an index when the user-defined function
has the following property values:
IsDeterministic = true
IsSystemVerified = true (unless the computed column is persisted)
UserDataAccess = false
SystemDataAccess = false
For more information, see Indexes on Computed Columns.
Calling Extended Stored Procedures from Functions
The extended stored procedure, when it is called from inside a function, cannot return result sets to the client. Any
ODS APIs that return result sets to the client will return FAIL. The extended stored procedure could connect back
to an instance of SQL Server; however, it should not try to join the same transaction as the function that invoked
the extended stored procedure.
Similar to invocations from a batch or stored procedure, the extended stored procedure will be executed in the
context of the Windows security account under which SQL Server is running. The owner of the stored procedure
should consider this when giving EXECUTE permission on it to users.
Limitations and Restrictions
User-defined functions cannot be used to perform actions that modify the database state.
User-defined functions cannot contain an OUTPUT INTO clause that has a table as its target.
The following Service Broker statements cannot be included in the definition of a Transact-SQL user-defined
function:
BEGIN DIALOG CONVERSATION
END CONVERSATION
GET CONVERSATION GROUP
MOVE CONVERSATION
RECEIVE
SEND
User-defined functions can be nested; that is, one user-defined function can call another. The nesting level is
incremented when the called function starts execution, and decremented when the called function finishes
execution. User-defined functions can be nested up to 32 levels. Exceeding the maximum levels of nesting
causes the whole calling function chain to fail. Any reference to managed code from a Transact-SQL user-
defined function counts as one level against the 32-level nesting limit. Methods invoked from within
managed code do not count against this limit.
Using Sort Order in CLR Table-valued Functions
When using the ORDER clause in CLR table-valued functions, follow these guidelines:
You must ensure that results are always ordered in the specified order. If the results are not in the specified
order, SQL Server will generate an error message when the query is executed.
If an ORDER clause is specified, the output of the table-valued function must be sorted according to the
collation of the column (explicit or implicit). For example, if the column collation is Chinese (either specified
in the DDL for the table-valued function or obtained from the database collation), the returned results must
be sorted according to Chinese sorting rules.
The ORDER clause, if specified, is always verified by SQL Server while returning results, whether or not it
is used by the query processor to perform further optimizations. Only use the ORDER clause if you know
it is useful to the query processor.
The SQL Server query processor takes advantage of the ORDER clause automatically in following cases:
Insert queries where the ORDER clause is compatible with an index.
ORDER BY clauses that are compatible with the ORDER clause.
Aggregates, where GROUP BY is compatible with ORDER clause.
DISTINCT aggregates where the distinct columns are compatible with the ORDER clause.
The ORDER clause does not guarantee ordered results when a SELECT query is executed, unless ORDER
BY is also specified in the query. See sys.function_order_columns (Transact-SQL ) for information on how to
query for columns included in the sort-order for table-valued functions.
Metadata
The following table lists the system catalog views that you can use to return metadata about user-defined
functions.
Permissions
Requires CREATE FUNCTION permission in the database and ALTER permission on the schema in which the
function is being created. If the function specifies a user-defined type, requires EXECUTE permission on the type.
Examples
A. Using a scalar-valued user-defined function that calculates the ISO week
The following example creates the user-defined function ISOweek . This function takes a date argument and
calculates the ISO week number. For this function to calculate correctly, SET DATEFIRST 1 must be invoked before
the function is called.
The example also shows using the EXECUTE AS clause to specify the security context in which a stored procedure
can be executed. In the example, the option CALLER specifies that the procedure will be executed in the context of
the user that calls it. The other options that you can specify are SELF, OWNER, and user_name.
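A definition matching this description can be sketched as follows (based on the commonly published form of the ISOweek example; verify the week-boundary special cases against your own requirements):

```sql
CREATE FUNCTION dbo.ISOweek (@DATE datetime)
RETURNS int
WITH EXECUTE AS CALLER
AS
BEGIN
    DECLARE @ISOweek int;
    SET @ISOweek = DATEPART(wk, @DATE) + 1
        - DATEPART(wk, CAST(DATEPART(yy, @DATE) AS char(4)) + '0104');
    -- Special cases: Jan 1-3 may belong to the previous year's last ISO week
    IF (@ISOweek = 0)
        SET @ISOweek = dbo.ISOweek(CAST(DATEPART(yy, @DATE) - 1 AS char(4))
            + '12' + CAST(24 + DATEPART(DAY, @DATE) AS char(2))) + 1;
    -- Special case: Dec 29-31 may belong to the next year's first ISO week
    IF ((DATEPART(mm, @DATE) = 12)
        AND ((DATEPART(dd, @DATE) - DATEPART(dw, @DATE)) >= 28))
        SET @ISOweek = 1;
    RETURN (@ISOweek);
END;
GO
```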
Here is the function call. Notice that DATEFIRST is set to 1 .
GO
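A call consistent with that description (the date literal is illustrative):

```sql
SET DATEFIRST 1;
SELECT dbo.ISOweek(CONVERT(datetime, '12/26/2004', 101)) AS 'ISO Week';
```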
For an example of how to create a CLR table-valued function, see CLR Table-Valued Functions.
E. Displaying the definition of Transact-SQL user-defined functions
The definition of functions created by using the ENCRYPTION option cannot be viewed by using
sys.sql_modules; however, other information about the encrypted functions is displayed.
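For a function that was not created with ENCRYPTION, the definition can be retrieved as follows (the function name is illustrative):

```sql
SELECT definition
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.ISOweek');
```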
See Also
ALTER FUNCTION (Transact-SQL )
DROP FUNCTION (Transact-SQL )
OBJECTPROPERTYEX (Transact-SQL )
sys.sql_modules (Transact-SQL )
sys.assembly_modules (Transact-SQL )
EXECUTE (Transact-SQL )
CLR User-Defined Functions
EVENTDATA (Transact-SQL )
CREATE SECURITY POLICY (Transact-SQL )
CREATE FUNCTION (SQL Data Warehouse)
5/4/2018
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Creates a user-defined function in SQL Data Warehouse. A user-defined function is a Transact-SQL routine that
accepts parameters, performs an action, such as a complex calculation, and returns the result of that action as a
value. The return value must be a scalar (single) value. Use this statement to create a reusable routine that can be
used in these ways:
In Transact-SQL statements such as SELECT
In applications calling the function
In the definition of another user-defined function
To define a CHECK constraint on a column
To replace a stored procedure
Transact-SQL Syntax Conventions
Syntax
--Transact-SQL Scalar Function Syntax
CREATE FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] parameter_data_type
[ = default ] }
[ ,...n ]
]
)
RETURNS return_data_type
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN scalar_expression
END
[ ; ]
<function_option>::=
{
[ SCHEMABINDING ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
}
Arguments
schema_name
Is the name of the schema to which the user-defined function belongs.
function_name
Is the name of the user-defined function. Function names must comply with the rules for identifiers and must be
unique within the database and to its schema.
NOTE
Parentheses are required after the function name even if a parameter is not specified.
@parameter_name
Is a parameter in the user-defined function. One or more parameters can be declared.
A function can have a maximum of 2,100 parameters. The value of each declared parameter must be supplied by
the user when the function is executed, unless a default for the parameter is defined.
Specify a parameter name by using an at sign (@) as the first character. The parameter name must comply with the
rules for identifiers. Parameters are local to the function; the same parameter names can be used in other functions.
Parameters can take the place only of constants; they cannot be used instead of table names, column names, or the
names of other database objects.
NOTE
ANSI_WARNINGS is not honored when you pass parameters in a stored procedure, user-defined function, or when you
declare and set variables in a batch statement. For example, if a variable is defined as char(3), and then set to a value larger
than three characters, the data is truncated to the defined size and the INSERT or UPDATE statement succeeds.
parameter_data_type
Is the parameter data type. For Transact-SQL functions, all scalar data types supported in SQL Data Warehouse are
allowed. The timestamp (rowversion) data type is not a supported type.
[ =default ]
Is a default value for the parameter. If a default value is defined, the function can be executed without specifying a
value for that parameter.
When a parameter of the function has a default value, the keyword DEFAULT must be specified when the function
is called to retrieve the default value. This behavior is different from using parameters with default values in stored
procedures in which omitting the parameter also implies the default value.
return_data_type
Is the return value of a scalar user-defined function. For Transact-SQL functions, all scalar data types supported in
SQL Data Warehouse are allowed. The timestamp (rowversion) data type is not a supported type. The cursor and
table nonscalar types are not allowed.
function_body
Series of Transact-SQL statements. The function_body cannot contain a SELECT statement and cannot reference
database data. The function_body cannot reference tables or views. The function body can call other deterministic
functions but cannot call nondeterministic functions.
In scalar functions, function_body is a series of Transact-SQL statements that together evaluate to a scalar value.
scalar_expression
Specifies the scalar value that the scalar function returns.
<function_option>::=
Specifies that the function will have one or more of the following options.
SCHEMABINDING
Specifies that the function is bound to the database objects that it references. When SCHEMABINDING is
specified, the base objects cannot be modified in a way that would affect the function definition. The function
definition itself must first be modified or dropped to remove dependencies on the object that is to be modified.
The binding of the function to the objects it references is removed only when one of the following actions occurs:
The function is dropped.
The function is modified by using the ALTER statement with the SCHEMABINDING option not specified.
A function can be schema bound only if the following conditions are true:
Any user-defined functions referenced by the function are also schema-bound.
The functions and other UDFs referenced by the function are referenced using a one-part or two-part name.
Only built-in functions and other UDFs in the same database can be referenced within the body of UDFs.
The user who executed the CREATE FUNCTION statement has REFERENCES permission on the database
objects that the function references.
To remove SCHEMABINDING use ALTER
RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT
Specifies the OnNULLCall attribute of a scalar-valued function. If not specified, CALLED ON NULL INPUT
is implied by default. This means that the function body executes even if NULL is passed as an argument.
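A complete function combining these options might look like this (a sketch; the name and conversion formula are illustrative):

```sql
-- Scalar function for SQL Data Warehouse: no SELECT in the body,
-- schema-bound, and short-circuits on NULL input (illustrative)
CREATE FUNCTION dbo.ToFahrenheit (@celsius decimal(9,2))
RETURNS decimal(9,2)
WITH SCHEMABINDING, RETURNS NULL ON NULL INPUT
AS
BEGIN
    RETURN (@celsius * 9.0 / 5.0) + 32.0;
END;
```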
Best Practices
If a user-defined function is not created with the SCHEMABINDING clause, changes that are made to underlying
objects can affect the definition of the function and produce unexpected results when it is invoked. We recommend
that you implement one of the following methods to ensure that the function does not become outdated because
of changes to its underlying objects:
Specify the WITH SCHEMABINDING clause when you are creating the function. This ensures that the objects
referenced in the function definition cannot be modified unless the function is also modified.
Interoperability
The following statements are valid in a function:
Assignment statements.
Control-of-Flow statements except TRY...CATCH statements.
DECLARE statements defining local data variables.
Metadata
This section lists the system catalog views that you can use to return metadata about user-defined functions.
sys.sql_modules : Displays the definition of Transact-SQL user-defined functions. For example:
SELECT definition, type
FROM sys.sql_modules AS m
JOIN sys.objects AS o
ON m.object_id = o.object_id
AND o.type = 'FN';
GO
Permissions
Requires CREATE FUNCTION permission in the database and ALTER permission on the schema in which the
function is being created.
See Also
ALTER FUNCTION (SQL Server PDW )
DROP FUNCTION (SQL Server PDW )
CREATE INDEX (Transact-SQL)
5/30/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Creates a relational index on a table or view. Also called a rowstore index because it is either a clustered or
nonclustered B-tree index. You can create a rowstore index before there is data in the table. Use a rowstore
index to improve query performance, especially when the queries select from specific columns or require
values to be sorted in a particular order.
NOTE
SQL Data Warehouse and Parallel Data Warehouse currently do not support Unique constraints. Any examples
referencing Unique Constraints are only applicable to SQL Server and SQL Database.
TIP
For information on index design guidelines, refer to the SQL Server Index Design Guide.
Simple examples:
--Create a clustered index on a table and use a 3-part name for the table
CREATE CLUSTERED INDEX i1 ON d1.s1.t1 (col1);
Key scenarios:
Starting with SQL Server 2016 (13.x) and SQL Database, use a nonclustered index on a columnstore index
to improve data warehousing query performance. For more information, see Columnstore Indexes - Data
Warehouse.
Need to create a different type of index?
CREATE XML INDEX (Transact-SQL )
CREATE SPATIAL INDEX (Transact-SQL )
CREATE COLUMNSTORE INDEX (Transact-SQL )
Transact-SQL Syntax Conventions
Syntax
Syntax for SQL Server and Azure SQL Database
[ ; ]
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}
<relational_index_option> ::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| STATISTICS_INCREMENTAL = { ON | OFF }
| DROP_EXISTING = { ON | OFF }
| ONLINE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE}
[ ON PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ]
}
<filter_predicate> ::=
<conjunct> [ AND <conjunct> ]
<conjunct> ::=
<disjunct> | <comparison>
<disjunct> ::=
column_name IN (constant ,...n)
<comparison> ::=
column_name <comparison_op> constant
<comparison_op> ::=
{ IS | IS NOT | = | <> | != | > | >= | !> | < | <= | !< }
<range> ::=
<partition_number_expression> TO <partition_number_expression>
IMPORTANT
The backward compatible relational index syntax structure will be removed in a future version of SQL Server. Avoid using
this syntax structure in new development work, and plan to modify applications that currently use the feature. Use the
syntax structure specified in <relational_index_option> instead.
CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
ON <object> ( column_name [ ASC | DESC ] [ ,...n ] )
[ WITH <backward_compatible_index_option> [ ,...n ] ]
[ ON { filegroup_name | "default" } ]
<object> ::=
{
[ database_name. [ owner_name ] . | owner_name. ]
table_or_view_name
}
<backward_compatible_index_option> ::=
{
PAD_INDEX
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB
| IGNORE_DUP_KEY
| STATISTICS_NORECOMPUTE
| DROP_EXISTING
}
Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
Arguments
UNIQUE
Creates a unique index on a table or view. A unique index is one in which no two rows are permitted to have
the same index key value. A clustered index on a view must be unique.
The Database Engine does not allow creating a unique index on columns that already include duplicate values,
whether or not IGNORE_DUP_KEY is set to ON. If this is attempted, the Database Engine displays an error
message. Duplicate values must be removed before a unique index can be created on the column or columns.
Columns that are used in a unique index should be set to NOT NULL, because multiple null values are
considered duplicates when a unique index is created.
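For example (table, column, and index names are illustrative):

```sql
-- Unique nonclustered index: rejects any duplicate NationalIDNumber values
CREATE UNIQUE NONCLUSTERED INDEX AK_Employee_NationalID
    ON dbo.Employee (NationalIDNumber);
```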
CLUSTERED
Creates an index in which the logical order of the key values determines the physical order of the
corresponding rows in a table. The bottom, or leaf, level of the clustered index contains the actual data rows of
the table. A table or view is allowed one clustered index at a time.
A view with a unique clustered index is called an indexed view. Creating a unique clustered index on a view
physically materializes the view. A unique clustered index must be created on a view before any other indexes
can be defined on the same view. For more information, see Create Indexed Views.
Create the clustered index before creating any nonclustered indexes. Existing nonclustered indexes on tables
are rebuilt when a clustered index is created.
If CLUSTERED is not specified, a nonclustered index is created.
NOTE
Because the leaf level of a clustered index and the data pages are the same by definition, creating a clustered index and
using the ON partition_scheme_name or ON filegroup_name clause effectively moves a table from the filegroup on
which the table was created to the new partition scheme or filegroup. Before creating tables or indexes on specific
filegroups, verify which filegroups are available and that they have enough empty space for the index.
In some cases creating a clustered index can enable previously disabled indexes. For more information, see
Enable Indexes and Constraints and Disable Indexes and Constraints.
NONCLUSTERED
Creates an index that specifies the logical ordering of a table. With a nonclustered index, the physical order of
the data rows is independent of their indexed order.
Each table can have up to 999 nonclustered indexes, regardless of how the indexes are created: either implicitly
with PRIMARY KEY and UNIQUE constraints, or explicitly with CREATE INDEX.
For indexed views, nonclustered indexes can be created only on a view that has a unique clustered index
already defined.
If not otherwise specified, the default index type is NONCLUSTERED.
index_name
Is the name of the index. Index names must be unique within a table or view but do not have to be unique
within a database. Index names must follow the rules of identifiers.
column
Is the column or columns on which the index is based. Specify two or more column names to create a
composite index on the combined values in the specified columns. List the columns to be included in the
composite index, in sort-priority order, inside the parentheses after table_or_view_name.
Up to 32 columns can be combined into a single composite index key. All the columns in a composite index key
must be in the same table or view. The maximum allowable size of the combined index values is 900 bytes for
a clustered index, or 1,700 for a nonclustered index. The limits are 16 columns and 900 bytes for versions
before SQL Database V12 and SQL Server 2016 (13.x).
Columns that are of the large object (LOB) data types ntext, text, varchar(max), nvarchar(max),
varbinary(max), xml, or image cannot be specified as key columns for an index. Also, a view definition
cannot include ntext, text, or image columns, even if they are not referenced in the CREATE INDEX
statement.
You can create indexes on CLR user-defined type columns if the type supports binary ordering. You can also
create indexes on computed columns that are defined as method invocations off a user-defined type column,
as long as the methods are marked deterministic and do not perform data access operations. For more
information about indexing CLR user-defined type columns, see CLR User-defined Types.
[ ASC | DESC ]
Determines the ascending or descending sort direction for the particular index column. The default is ASC.
INCLUDE (column [ ,... n ] )
Specifies the non-key columns to be added to the leaf level of the nonclustered index. The nonclustered index
can be unique or non-unique.
Column names cannot be repeated in the INCLUDE list and cannot be used simultaneously as both key and
non-key columns. Nonclustered indexes always contain the clustered index columns if a clustered index is
defined on the table. For more information, see Create Indexes with Included Columns.
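For example, covering a query's non-key lookup columns at the leaf level (names are illustrative):

```sql
-- PostalCode is the key; the INCLUDE columns are stored only at the leaf level
CREATE NONCLUSTERED INDEX IX_Address_PostalCode
    ON dbo.Address (PostalCode)
    INCLUDE (AddressLine1, City, StateProvinceID);
```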
All data types are allowed except text, ntext, and image. The index must be created or rebuilt offline
(ONLINE = OFF) if any of the specified non-key columns is of the varchar(max), nvarchar(max), or
varbinary(max) data type.
Computed columns that are deterministic and either precise or imprecise can be included columns. Computed
columns derived from image, ntext, text, varchar(max), nvarchar(max), varbinary(max), and xml data
types can be included as non-key columns, as long as the computed column's data type is allowable as an
included column. For more information, see Indexes on Computed Columns.
For information on creating an XML index, see CREATE XML INDEX (Transact-SQL ).
WHERE <filter_predicate> Creates a filtered index by specifying which rows to include in the index. The
filtered index must be a nonclustered index on a table. Creates filtered statistics for the data rows in the filtered
index.
The filter predicate uses simple comparison logic and cannot reference a computed column, a UDT column, a
spatial data type column, or a hierarchyid data type column. Comparisons using NULL literals are not allowed
with the comparison operators. Use the IS NULL and IS NOT NULL operators instead.
Here is an example of a filter predicate for the Production.BillOfMaterials table:
WHERE StartDate > '20000101' AND EndDate <= '20000630'
Filtered indexes do not apply to XML indexes and full-text indexes. For UNIQUE indexes, only the
selected rows must have unique index values. Filtered indexes do not allow the IGNORE_DUP_KEY
option.
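As an illustration, a filtered index over the table mentioned above might be created as follows (the index name is illustrative):

```sql
-- Indexes only the rows with a non-NULL EndDate
CREATE NONCLUSTERED INDEX FIBillOfMaterialsWithEndDate
    ON Production.BillOfMaterials (ComponentID, StartDate)
    WHERE EndDate IS NOT NULL;
```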
ON partition_scheme_name ( column_name )
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies the partition scheme that defines the filegroups onto which the partitions of a partitioned index will
be mapped. The partition scheme must already exist within the database, created by executing either CREATE
PARTITION SCHEME or ALTER PARTITION SCHEME. column_name specifies the column against which a partitioned
index will be partitioned. This column must match the data type, length, and precision of the argument of the
partition function that partition_scheme_name is using. column_name is not restricted to the columns in the
index definition. Any column in the base table can be specified, except when partitioning a UNIQUE index,
column_name must be chosen from among those used as the unique key. This restriction allows the Database
Engine to verify uniqueness of key values within a single partition only.
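A partitioned index creation might look like this (a sketch; the partition scheme, table, and column names are hypothetical and assume the scheme was created beforehand):

```sql
-- Assumes partition scheme ps_OrderDate already exists over a matching
-- partition function on the OrderDate column
CREATE CLUSTERED INDEX CIX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    ON ps_OrderDate (OrderDate);
```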
NOTE
When you partition a non-unique, clustered index, the Database Engine by default adds the partitioning column to the
list of clustered index keys, if it is not already specified. When partitioning a non-unique, nonclustered index, the
Database Engine adds the partitioning column as a non-key (included) column of the index, if it is not already specified.
If partition_scheme_name or filegroup is not specified and the table is partitioned, the index is placed in the
same partition scheme, using the same partitioning column, as the underlying table.
NOTE
You cannot specify a partitioning scheme on an XML index. If the base table is partitioned, the XML index uses the same
partition scheme as the table.
For more information about partitioning indexes, Partitioned Tables and Indexes.
ON filegroup_name
Applies to: SQL Server 2008 through SQL Server 2017.
Creates the specified index on the specified filegroup. If no location is specified and the table or view is not
partitioned, the index uses the same filegroup as the underlying table or view. The filegroup must already exist.
ON "default"
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Creates the specified index on the default filegroup.
The term default, in this context, is not a keyword. It is an identifier for the default filegroup and must be
delimited, as in ON "default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must
be ON for the current session. This is the default setting. For more information, see SET
QUOTED_IDENTIFIER (Transact-SQL).
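A short sketch of the two delimited forms (dbo.T1 and Col1 are hypothetical names):

```sql
SET QUOTED_IDENTIFIER ON;  -- required when "default" is double-quoted
GO
CREATE INDEX IX_T1_Col1
    ON dbo.T1 (Col1)
    ON "default";   -- equivalent form: ON [default]
GO
```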
[ FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | "NULL" } ]
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the placement of FILESTREAM data for the table when a clustered index is created. The
FILESTREAM_ON clause allows FILESTREAM data to be moved to a different FILESTREAM filegroup or
partition scheme.
filestream_filegroup_name is the name of a FILESTREAM filegroup. The filegroup must have one file defined
for the filegroup by using a CREATE DATABASE or ALTER DATABASE statement; otherwise, an error is raised.
If the table is partitioned, the FILESTREAM_ON clause must be included and must specify a partition scheme
of FILESTREAM filegroups that uses the same partition function and partition columns as the partition
scheme for the table. Otherwise, an error is raised.
If the table is not partitioned, the FILESTREAM column cannot be partitioned. FILESTREAM data for the table
must be stored in a single filegroup that is specified in the FILESTREAM_ON clause.
FILESTREAM_ON NULL can be specified in a CREATE INDEX statement if a clustered index is being created
and the table does not contain a FILESTREAM column.
For more information, see FILESTREAM (SQL Server).
<object>::=
Is the fully qualified or nonfully qualified object to be indexed.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table or view belongs.
table_or_view_name
Is the name of the table or view to be indexed.
The view must be defined with SCHEMABINDING to create an index on it. A unique clustered index must be
created on a view before any nonclustered index is created. For more information about indexed views, see the
Remarks section.
Beginning with SQL Server 2016 (13.x), the object can be a table stored with a clustered columnstore index.
Azure SQL Database supports the three-part name format database_name.[schema_name].object_name
when the database_name is the current database or the database_name is tempdb and the object_name starts
with #.
<relational_index_option>::=
Specifies the options to use when you create the index.
PAD_INDEX = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies index padding. The default is OFF.
ON
The percentage of free space that is specified by fillfactor is applied to the intermediate-level pages of the
index.
OFF or fillfactor is not specified
The intermediate-level pages are filled to near capacity, leaving sufficient space for at least one row of the
maximum size the index can have, considering the set of keys on the intermediate pages.
The PAD_INDEX option is useful only when FILLFACTOR is specified, because PAD_INDEX uses the
percentage specified by FILLFACTOR. If the percentage specified for FILLFACTOR is not large enough to
allow for one row, the Database Engine internally overrides the percentage to allow for the minimum. The
number of rows on an intermediate index page is never less than two, regardless of how low the value of
fillfactor is.
In backward compatible syntax, WITH PAD_INDEX is equivalent to WITH PAD_INDEX = ON.
FILLFACTOR =fillfactor
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index
page during index creation or rebuild. fillfactor must be an integer value from 1 to 100. If fillfactor is 100, the
Database Engine creates indexes with leaf pages filled to capacity.
The FILLFACTOR setting applies only when the index is created or rebuilt. The Database Engine does not
dynamically keep the specified percentage of empty space in the pages. To view the fill factor setting, use the
sys.indexes catalog view.
IMPORTANT
Creating a clustered index with a FILLFACTOR less than 100 affects the amount of storage space the data occupies
because the Database Engine redistributes the data when it creates the clustered index.
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether distribution statistics are recomputed. The default is OFF. When set to ON, out-of-date
statistics are not automatically recomputed.
IMPORTANT
Disabling automatic recomputation of distribution statistics may prevent the query optimizer from picking optimal
execution plans for queries involving the table.
NOTE
Online index operations are not available in every edition of Microsoft SQL Server. For a list of features that are
supported by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016.
ONLINE = { ON | OFF }
Specifies whether underlying tables and associated indexes are available for queries and data modification
during the index operation. The default is OFF.
ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS) lock is held on the source table. This enables queries or updates to the
underlying table and indexes to proceed. At the start of the operation, a Shared (S) lock is held on the source
object for a very short period of time. At the end of the operation, for a short period of time, an S (Shared) lock
is acquired on the source if a nonclustered index is being created; or an SCH-M (Schema Modification) lock is
acquired when a clustered index is created or dropped online and when a clustered or nonclustered index is
being rebuilt. ONLINE cannot be set to ON when an index is being created on a local temporary table.
OFF
Table locks are applied for the duration of the index operation. An offline index operation that creates, rebuilds,
or drops a clustered index, or rebuilds or drops a nonclustered index, acquires a Schema modification (Sch-M)
lock on the table. This prevents all user access to the underlying table for the duration of the operation. An
offline index operation that creates a nonclustered index acquires a Shared (S) lock on the table. This prevents
updates to the underlying table but allows read operations, such as SELECT statements.
For more information, see How Online Index Operations Work.
Indexes, including indexes on global temp tables, can be created online with the following exceptions:
XML index
Index on a local temp table.
Initial unique clustered index on a view.
Disabled clustered indexes.
Clustered index if the underlying table contains LOB data types: image, ntext, text, and spatial types.
varchar(max) and varbinary(max) columns cannot be part of an index key. In SQL Server (beginning with
SQL Server 2012 (11.x)) and in SQL Database, when a table contains varchar(max) or varbinary(max)
columns, a clustered index containing other columns can be built or rebuilt using the ONLINE option. SQL
Database does not permit the ONLINE option when the base table contains varchar(max) or
varbinary(max) columns.
For more information, see Perform Index Operations Online.
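A hedged sketch of an online index build (table and column names are hypothetical; ONLINE = ON requires an edition that supports online index operations):

```sql
CREATE NONCLUSTERED INDEX IX_Sales_CustomerID
    ON dbo.Sales (CustomerID)
    WITH (ONLINE = ON);  -- queries and DML against dbo.Sales continue during the build
GO
```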
ALLOW_ROW_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when accessing the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Overrides the max degree of parallelism configuration option for the duration of the index operation. For
more information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP
to limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism can be:
1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel index operation to the specified number or
fewer based on the current system workload.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.
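For illustration, a minimal sketch of capping parallelism for a single index build (dbo.Orders and OrderDate are hypothetical names):

```sql
-- Limit this index operation to at most 4 processors, overriding the
-- server-wide max degree of parallelism setting for its duration only.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    WITH (MAXDOP = 4);
GO
```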
NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are
supported by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016 and Editions and
Supported Features for SQL Server 2017.
DATA_COMPRESSION
Specifies the data compression option for the specified index, partition number, or range of partitions. The
options are as follows:
NONE
Index or specified partitions are not compressed.
ROW
Index or specified partitions are compressed by using row compression.
PAGE
Index or specified partitions are compressed by using page compression.
For more information about compression, see Data Compression.
ON PARTITIONS ( { <partition_number_expression> | <range> } [ ,...n ] )
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Specifies the partitions to which the DATA_COMPRESSION setting applies. If the index is not partitioned, the
ON PARTITIONS argument will generate an error. If the ON PARTITIONS clause is not provided, the
DATA_COMPRESSION option applies to all partitions of a partitioned index.
<partition_number_expression> can be specified in the following ways:
Provide the number for a partition, for example: ON PARTITIONS (2).
Provide the partition numbers for several individual partitions separated by commas, for example: ON
PARTITIONS (1, 5).
Provide both ranges and individual partitions, for example: ON PARTITIONS (2, 4, 6 TO 8).
<range> can be specified as partition numbers separated by the word TO, for example: ON
PARTITIONS (6 TO 8).
To set different types of data compression for different partitions, specify the DATA_COMPRESSION
option more than once, for example:
REBUILD WITH
(
DATA_COMPRESSION = NONE ON PARTITIONS (1),
DATA_COMPRESSION = ROW ON PARTITIONS (2, 4, 6 TO 8),
DATA_COMPRESSION = PAGE ON PARTITIONS (3, 5)
);
Remarks
The CREATE INDEX statement is optimized like any other query. To save on I/O operations, the query
processor may choose to scan another index instead of performing a table scan. The sort operation may be
eliminated in some situations. On multiprocessor computers CREATE INDEX can use more processors to
perform the scan and sort operations associated with creating the index, in the same way as other queries do.
For more information, see Configure Parallel Index Operations.
The create index operation can be minimally logged if the database recovery model is set to either bulk-logged
or simple.
Indexes can be created on a temporary table. When the table is dropped or the session ends, the indexes are
dropped.
Indexes support extended properties.
Clustered Indexes
Creating a clustered index on a table (heap) or dropping and re-creating an existing clustered index requires
additional workspace to be available in the database to accommodate data sorting and a temporary copy of
the original table or existing clustered index data. For more information about clustered indexes, see Create
Clustered Indexes.
Nonclustered Indexes
Beginning with SQL Server 2016 (13.x) and in Azure SQL Database, you can create a nonclustered index on a
table stored as a clustered columnstore index. If you first create a nonclustered index on a table stored as a
heap or clustered index, the index will persist if you later convert the table to a clustered columnstore index. It
is also not necessary to drop the nonclustered index when you rebuild the clustered columnstore index.
Limitations and Restrictions:
The FILESTREAM_ON option is not valid when you create a nonclustered index on a table stored as a
clustered columnstore index.
Unique Indexes
When a unique index exists, the Database Engine checks for duplicate values each time data is added by
insert operations. Insert operations that would generate duplicate key values are rolled back, and the Database
Engine displays an error message. This is true even if the insert operation changes many rows but causes only
one duplicate. If an attempt is made to enter data for which there is a unique index and the
IGNORE_DUP_KEY clause is set to ON, only the rows violating the UNIQUE index fail.
Partitioned Indexes
Partitioned indexes are created and maintained in a similar manner to partitioned tables, but like ordinary
indexes, they are handled as separate database objects. You can have a partitioned index on a table that is not
partitioned, and you can have a nonpartitioned index on a table that is partitioned.
If you are creating an index on a partitioned table, and do not specify a filegroup on which to place the index,
the index is partitioned in the same manner as the underlying table. This is because indexes, by default, are
placed on the same filegroups as their underlying tables, and for a partitioned table in the same partition
scheme that uses the same partitioning columns. When the index uses the same partition scheme and
partitioning column as the table, the index is aligned with the table.
WARNING
Creating and rebuilding nonaligned indexes on a table with more than 1,000 partitions is possible, but is not supported.
Doing so may cause degraded performance or excessive memory consumption during these operations. We recommend
using only aligned indexes when the number of partitions exceeds 1,000.
When partitioning a non-unique, clustered index, the Database Engine by default adds any partitioning
columns to the list of clustered index keys, if not already specified.
Indexed views can be created on partitioned tables in the same manner as indexes on tables. For more
information about partitioned indexes, see Partitioned Tables and Indexes.
In SQL Server 2017, statistics are not created by scanning all the rows in the table when a partitioned index is
created or rebuilt. Instead, the query optimizer uses the default sampling algorithm to generate statistics. To
obtain statistics on partitioned indexes by scanning all the rows in the table, use CREATE STATISTICS or
UPDATE STATISTICS with the FULLSCAN clause.
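As a sketch of the full-scan alternative described above (the table and index names are hypothetical):

```sql
-- The default sampled statistics were generated when the partitioned index
-- was built; a FULLSCAN update rescans every row for this statistics object.
UPDATE STATISTICS dbo.PartitionedOrders IX_PartitionedOrders_OrderDate
    WITH FULLSCAN;
GO
```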
Filtered Indexes
A filtered index is an optimized nonclustered index, suited for queries that select a small percentage of rows
from a table. It uses a filter predicate to index a portion of the data in the table. A well-designed filtered index
can improve query performance, reduce storage costs, and reduce maintenance costs.
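A minimal sketch of a filtered index (the table and filter predicate are hypothetical):

```sql
-- Index only rows that are still open; queries that filter on
-- EndDate IS NULL can use this smaller, cheaper index.
CREATE NONCLUSTERED INDEX FIX_WorkOrders_Open
    ON dbo.WorkOrders (DueDate)
    WHERE EndDate IS NULL;
GO
```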
Required SET Options for Filtered Indexes
The SET options in the Required Value column are required whenever any of the following conditions occur:
Create a filtered index.
INSERT, UPDATE, DELETE, or MERGE operation modifies the data in a filtered index.
The filtered index is used by the query optimizer to produce the query plan.
SET OPTIONS                 REQUIRED VALUE   DEFAULT SERVER VALUE   DEFAULT OLE DB AND ODBC VALUE   DEFAULT DB-LIBRARY VALUE
ANSI_NULLS                  ON               ON                     ON                              OFF
ANSI_PADDING                ON               ON                     ON                              OFF
ANSI_WARNINGS*              ON               ON                     ON                              OFF
CONCAT_NULL_YIELDS_NULL     ON               ON                     ON                              OFF
QUOTED_IDENTIFIER           ON               ON                     ON                              OFF
XML Indexes
For information about XML indexes see, CREATE XML INDEX (Transact-SQL ) and XML Indexes (SQL Server).
NOTE
When tables are partitioned, if the partitioning key columns are not already present in a non-unique clustered index,
they are added to the index by the Database Engine. The combined size of the indexed columns (not counting included
columns), plus any added partitioning columns cannot exceed 1800 bytes in a non-unique clustered index.
Computed Columns
Indexes can be created on computed columns. In addition, computed columns can have the property
PERSISTED. This means that the Database Engine stores the computed values in the table, and updates them
when any other columns on which the computed column depends are updated. The Database Engine uses
these persisted values when it creates an index on the column, and when the index is referenced in a query.
To index a computed column, the computed column must be deterministic and precise. However, using the
PERSISTED property expands the type of indexable computed columns to include:
Computed columns based on Transact-SQL and CLR functions and CLR user-defined type methods that
are marked deterministic by the user.
Computed columns based on expressions that are deterministic as defined by the Database Engine but
imprecise.
Persisted computed columns require the following SET options to be set as shown in the previous
section "Required SET Options for Filtered Indexes".
The UNIQUE or PRIMARY KEY constraint can contain a computed column as long as it satisfies all
conditions for indexing. Specifically, the computed column must be deterministic and precise or
deterministic and persisted. For more information about determinism, see Deterministic and
Nondeterministic Functions.
Computed columns derived from image, ntext, text, varchar(max), nvarchar(max),
varbinary(max), and xml data types can be indexed either as a key or included non-key column as
long as the computed column data type is allowable as an index key column or non-key column. For
example, you cannot create a primary XML index on a computed xml column. If the index key size
exceeds 900 bytes, a warning message is displayed.
Creating an index on a computed column may cause the failure of an insert or update operation that
previously worked. Such a failure may take place when the computed column results in an arithmetic error.
For example, in the following table, although computed column c results in an arithmetic error, the
INSERT statement works.
If, instead, after creating the table, you create an index on computed column c , the same INSERT statement
will now fail.
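The table referenced above is not shown in this excerpt; the following is a hedged reconstruction of the scenario it describes (dbo.t1 and Idx1 are illustrative names):

```sql
CREATE TABLE dbo.t1 (a int, b int, c AS a / b);  -- c divides by b
GO
INSERT INTO dbo.t1 (a, b) VALUES (1, 0);  -- succeeds: c is never evaluated
GO
DROP TABLE dbo.t1;
GO
-- Re-create the table, but index the computed column before inserting.
CREATE TABLE dbo.t1 (a int, b int, c AS a / b);
CREATE UNIQUE CLUSTERED INDEX Idx1 ON dbo.t1 (c);
GO
INSERT INTO dbo.t1 (a, b) VALUES (1, 0);  -- fails: divide-by-zero evaluating c
GO
```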
DROP_EXISTING Clause
You can use the DROP_EXISTING clause to rebuild the index, add or drop columns, modify options, modify
column sort order, or change the partition scheme or filegroup.
If the index enforces a PRIMARY KEY or UNIQUE constraint and the index definition is not altered in any way,
the index is dropped and re-created preserving the existing constraint. However, if the index definition is
altered the statement fails. To change the definition of a PRIMARY KEY or UNIQUE constraint, drop the
constraint and add a constraint with the new definition.
DROP_EXISTING enhances performance when you re-create a clustered index, with either the same or
different set of keys, on a table that also has nonclustered indexes. DROP_EXISTING replaces the execution of
a DROP INDEX statement on the old clustered index followed by the execution of a CREATE INDEX statement
for the new clustered index. The nonclustered indexes are rebuilt once, and then only if the index definition has
changed. The DROP_EXISTING clause does not rebuild the nonclustered indexes when the index definition
has the same index name, key and partition columns, uniqueness attribute, and sort order as the original index.
Whether the nonclustered indexes are rebuilt or not, they always remain in their original filegroups or partition
schemes and use the original partition functions. If a clustered index is rebuilt to a different filegroup or
partition scheme, the nonclustered indexes are not moved to coincide with the new location of the clustered
index. Therefore, even if the nonclustered indexes were previously aligned with the clustered index, they may
no longer be aligned with it. For more information about partitioned index alignment, see Partitioned Tables
and Indexes.
The DROP_EXISTING clause will not sort the data again if the same index key columns are used in the same
order and with the same ascending or descending order, unless the index statement specifies a nonclustered
index and the ONLINE option is set to OFF. If the clustered index is disabled, the CREATE INDEX WITH
DROP_EXISTING operation must be performed with ONLINE set to OFF. If a nonclustered index is disabled
and is not associated with a disabled clustered index, the CREATE INDEX WITH DROP_EXISTING operation
can be performed with ONLINE set to OFF or ON.
When indexes with 128 extents or more are dropped or rebuilt, the Database Engine defers the actual page
deallocations, and their associated locks, until after the transaction commits.
ONLINE Option
The following guidelines apply for performing index operations online:
The underlying table cannot be altered, truncated, or dropped while an online index operation is in process.
Additional temporary disk space is required during the index operation.
Online operations can be performed on partitioned indexes and indexes that contain persisted
computed columns, or included columns.
For more information, see Perform Index Operations Online.
Data Compression
Data compression is described in the topic Data Compression. The following are key points to consider:
Compression can allow more rows to be stored on a page, but does not change the maximum row size.
Non-leaf pages of an index are not page compressed but can be row compressed.
Each nonclustered index has an individual compression setting, and does not inherit the compression
setting of the underlying table.
When a clustered index is created on a heap, the clustered index inherits the compression state of the
heap unless an alternative compression state is specified.
The following restrictions apply to partitioned indexes:
You cannot change the compression setting of a single partition if the table has nonaligned indexes.
The ALTER INDEX <index> ... REBUILD PARTITION ... syntax rebuilds the specified partition of the index.
The ALTER INDEX <index> ... REBUILD WITH ... syntax rebuilds all partitions of the index.
To evaluate how changing the compression state will affect a table, an index, or a partition, use the
sp_estimate_data_compression_savings stored procedure.
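As a sketch of such an evaluation (the object names are hypothetical):

```sql
-- Estimate the savings from PAGE-compressing index 1 (the clustered index)
-- of dbo.Orders across all partitions.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'Orders',
    @index_id = 1,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```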
Permissions
Requires ALTER permission on the table or view. User must be a member of the sysadmin fixed server role or
the db_ddladmin and db_owner fixed database roles.
Metadata
To view information on existing indexes, you can query the sys.indexes (Transact-SQL ) catalog view.
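For example, a minimal metadata query (dbo.Orders is a hypothetical table name):

```sql
-- Inspect index metadata, including fill factor and padding, for one table.
SELECT name, type_desc, is_unique, fill_factor, is_padded
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.Orders');
```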
Version Notes
SQL Database does not support filegroup and filestream options.
The following query tests the uniqueness constraint by attempting to insert a row with the same value as that
in an existing row.
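The statements this passage refers to are missing from this excerpt; the following is a hedged reconstruction using AdventureWorks-style names:

```sql
CREATE TABLE #Test (C1 nvarchar(10), C2 nvarchar(50), C3 datetime);
GO
CREATE UNIQUE INDEX AK_Index ON #Test (C2)
    WITH (IGNORE_DUP_KEY = ON);   -- duplicate inserts warn instead of rolling back
GO
INSERT INTO #Test VALUES (N'OC', N'Ounces', GETDATE());
INSERT INTO #Test SELECT * FROM Production.UnitMeasure;
GO
SELECT COUNT(*) AS [Number of rows] FROM #Test;
GO
DROP TABLE #Test;
GO
```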
Server: Msg 3604, Level 16, State 1, Line 5 Duplicate key was ignored.
Number of rows
--------------
38
Notice that the rows inserted from the Production.UnitMeasure table that did not violate the uniqueness
constraint were successfully inserted. A warning was issued and the duplicate row ignored, but the entire
transaction was not rolled back.
The same statements are executed again, but with IGNORE_DUP_KEY set to OFF.
Number of rows
--------------
1
Notice that none of the rows from the Production.UnitMeasure table were inserted into the table even though
only one row in the table violated the UNIQUE index constraint.
G. Using DROP_EXISTING to drop and re-create an index
The following example drops and re-creates an existing index on the ProductID column of the
Production.WorkOrder table in the AdventureWorks2012 database by using the DROP_EXISTING option. The
options FILLFACTOR and PAD_INDEX are also set.
CREATE NONCLUSTERED INDEX IX_WorkOrder_ProductID
ON Production.WorkOrder(ProductID)
WITH (FILLFACTOR = 80,
PAD_INDEX = ON,
DROP_EXISTING = ON);
GO
The following example creates an index on a partitioned table by using row compression on all partitions of
the index.
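The example itself is missing from this excerpt; a hedged sketch, reusing the table name from the example that follows:

```sql
CREATE CLUSTERED INDEX IX_PartTab2Col1
    ON PartitionTable1 (Col1)
    WITH (DATA_COMPRESSION = ROW);  -- row compression on every partition
GO
```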
The following example creates an index on a partitioned table by using page compression on partition 1 of
the index and row compression on partitions 2 through 4 of the index.
CREATE CLUSTERED INDEX IX_PartTab2Col1
ON PartitionTable1 (Col1)
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS(1),
DATA_COMPRESSION = ROW ON PARTITIONS (2 TO 4));
GO
See Also
SQL Server Index Design Guide
Indexes and ALTER TABLE
ALTER INDEX (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
CREATE SPATIAL INDEX (Transact-SQL)
CREATE STATISTICS (Transact-SQL)
CREATE TABLE (Transact-SQL)
CREATE XML INDEX (Transact-SQL)
Data Types (Transact-SQL)
DBCC SHOW_STATISTICS (Transact-SQL)
DROP INDEX (Transact-SQL)
XML Indexes (SQL Server)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
sys.xml_indexes (Transact-SQL)
EVENTDATA (Transact-SQL)
5/25/2018 • 20 min to read • Edit Online
Creates a login for SQL Server, SQL Database, SQL Data Warehouse, or Parallel Data Warehouse databases.
Click one of the following tabs for the syntax, arguments, remarks, permissions, and examples for a particular
version.
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.
SQL Server
SQL Database
SQL Data Warehouse
SQL Parallel Data Warehouse
Syntax
-- Syntax for SQL Server
CREATE LOGIN login_name { WITH <option_list1> | FROM <sources> }
<option_list1> ::=
PASSWORD = { 'password' | hashed_password HASHED } [ MUST_CHANGE ]
[ , <option_list2> [ ,... ] ]
<option_list2> ::=
SID = sid
| DEFAULT_DATABASE = database
| DEFAULT_LANGUAGE = language
| CHECK_EXPIRATION = { ON | OFF}
| CHECK_POLICY = { ON | OFF}
| CREDENTIAL = credential_name
<sources> ::=
WINDOWS [ WITH <windows_options>[ ,... ] ]
| CERTIFICATE certname
| ASYMMETRIC KEY asym_key_name
<windows_options> ::=
DEFAULT_DATABASE = database
| DEFAULT_LANGUAGE = language
Arguments
login_name
Specifies the name of the login that is created. There are four types of logins: SQL Server logins, Windows logins,
certificate-mapped logins, and asymmetric key-mapped logins. When you are creating logins that are mapped
from a Windows domain account, you must use the pre-Windows 2000 user logon name in the format
[<domainName>\<login_name>]. You cannot use a UPN in the format login_name@DomainName. For an
example, see example D later in this article. Authentication logins are type sysname and must conform to the
rules for Identifiers and cannot contain a '\'. Windows logins can contain a '\'. Logins based on Active
Directory users are limited to names of fewer than 21 characters.
PASSWORD ='password' Applies to SQL Server logins only. Specifies the password for the login that is being
created. You should use a strong password. For more information, see Strong Passwords and Password Policy.
Beginning with SQL Server 2012 (11.x), stored password information is calculated using SHA-512 of the salted
password.
Passwords are case-sensitive. Passwords should always be at least 8 characters long, and cannot exceed 128
characters. Passwords can include a-z, A-Z, 0-9, and most non-alphanumeric characters. Passwords cannot
contain single quotes, or the login_name.
PASSWORD =hashed_password
Applies to the HASHED keyword only. Specifies the hashed value of the password for the login that is being
created.
HASHED Applies to SQL Server logins only. Specifies that the password entered after the PASSWORD
argument is already hashed. If this option is not selected, the string entered as password is hashed before it is
stored in the database. This option should only be used for migrating databases from one server to another. Do
not use the HASHED option to create new logins. The HASHED option cannot be used with hashes created by
SQL Server 7.0 or earlier.
MUST_CHANGE Applies to SQL Server logins only. If this option is included, SQL Server prompts the user for a
new password the first time the new login is used.
CREDENTIAL =credential_name
The name of a credential to be mapped to the new SQL Server login. The credential must already exist in the
server. Currently this option only links the credential to a login. A credential cannot be mapped to the System
Administrator (sa) login.
SID = sid
Used to recreate a login. Applies to SQL Server authentication logins only, not Windows authentication logins.
Specifies the SID of the new SQL Server authentication login. If this option is not used, SQL Server
automatically assigns a SID. The SID structure depends on the SQL Server version. SQL Server login SID: a 16
byte (binary(16)) literal value based on a GUID. For example, SID = 0x14585E90117152449347750164BA00A7 .
DEFAULT_DATABASE =database
Specifies the default database to be assigned to the login. If this option is not included, the default database is set
to master.
DEFAULT_LANGUAGE =language
Specifies the default language to be assigned to the login. If this option is not included, the default language is set
to the current default language of the server. If the default language of the server is later changed, the default
language of the login remains unchanged.
CHECK_EXPIRATION = { ON | OFF }
Applies to SQL Server logins only. Specifies whether password expiration policy should be enforced on this
login. The default value is OFF.
CHECK_POLICY = { ON | OFF }
Applies to SQL Server logins only. Specifies that the Windows password policies of the computer on which SQL
Server is running should be enforced on this login. The default value is ON.
If the Windows policy requires strong passwords, passwords must contain at least three of the following four
characteristics:
An uppercase character (A-Z).
A lowercase character (a-z).
A digit (0-9).
One of the non-alphanumeric characters, such as a space, _, @, *, ^, %, !, $, #, or &.
WINDOWS
Specifies that the login be mapped to a Windows login.
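A hedged sketch of mapping a Windows account (the domain, account, and database names are placeholders):

```sql
-- Uses the pre-Windows 2000 [domain\user] format; a UPN such as
-- user@domain.com is not allowed here.
CREATE LOGIN [CONTOSO\Mary5] FROM WINDOWS
    WITH DEFAULT_DATABASE = AdventureWorks2012;
GO
```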
CERTIFICATE certname
Specifies the name of a certificate to be associated with this login. This certificate must already occur in the
master database.
ASYMMETRIC KEY asym_key_name
Specifies the name of an asymmetric key to be associated with this login. This key must already occur in the
master database.
Remarks
Passwords are case-sensitive.
Prehashing of passwords is supported only when you are creating SQL Server logins.
If MUST_CHANGE is specified, CHECK_EXPIRATION and CHECK_POLICY must be set to ON. Otherwise,
the statement will fail.
A combination of CHECK_POLICY = OFF and CHECK_EXPIRATION = ON is not supported.
When CHECK_POLICY is set to OFF, lockout_time is reset and CHECK_EXPIRATION is set to OFF.
IMPORTANT
CHECK_EXPIRATION and CHECK_POLICY are only enforced on Windows Server 2003 and later. For more information, see
Password Policy.
Logins created from certificates or asymmetric keys are used only for code signing. They cannot be used to
connect to SQL Server. You can create a login from a certificate or asymmetric key only when the certificate or
asymmetric key already exists in master.
For a script to transfer logins, see How to transfer the logins and the passwords between instances of SQL
Server 2005 and SQL Server 2008.
Creating a login automatically enables the new login and grants the login the server level CONNECT SQL
permission.
The server's authentication mode must match the login type to permit access.
For information about designing a permissions system, see Getting Started with Database Engine
Permissions.
Permissions
Only users with ALTER ANY LOGIN permission on the server or membership in the securityadmin fixed
server role can create logins. For more information, see Server-Level Roles, ALTER SERVER ROLE, and
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-manage-logins#additional-server-level-administrative-roles.
If the CREDENTIAL option is used, also requires ALTER ANY CREDENTIAL permission on the server.
Examples
A. Creating a login with a password
The following example creates a login for a particular user and assigns a password.
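The example statement itself is missing from this excerpt; a minimal sketch, using the article's own placeholder convention:

```sql
CREATE LOGIN <login_name>
    WITH PASSWORD = '<enterStrongPasswordHere>';  -- use a strong password
GO
```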
NOTE
The MUST_CHANGE option cannot be used when CHECK_EXPIRATION is OFF.
USE MASTER;
CREATE CERTIFICATE <certificateName>
WITH SUBJECT = '<login_name> certificate in master database',
EXPIRY_DATE = '12/05/2025';
GO
CREATE LOGIN <login_name> FROM CERTIFICATE <certificateName>;
GO
My query returns 0x241C11948AEEB749B0D22646DB1A19F2 as the SID. Your query will return a different
value. The following statements delete the login, and then recreate the login. Use the SID from your previous
query.
See Also
Getting Started with Database Engine Permissions
Principals (Database Engine)
Password Policy
ALTER LOGIN
DROP LOGIN
EVENTDATA
Create a Login
CREATE MASTER KEY (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a database master key.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Parallel Data Warehouse
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password'
[ ; ]

-- Syntax for Azure SQL Database and Azure SQL Data Warehouse
CREATE MASTER KEY [ ENCRYPTION BY PASSWORD = 'password' ]
[ ; ]
Arguments
PASSWORD ='password'
Is the password that is used to encrypt the master key in the database. password must meet the Windows
password policy requirements of the computer that is running the instance of SQL Server. password is optional
in SQL Database and SQL Data Warehouse.
Remarks
The database master key is a symmetric key used to protect the private keys of certificates and asymmetric keys
that are present in the database. When it is created, the master key is encrypted by using the AES_256 algorithm
and a user-supplied password. In SQL Server 2008 and SQL Server 2008 R2, the Triple DES algorithm is used.
To enable the automatic decryption of the master key, a copy of the key is encrypted by using the service master
key and stored in both the database and in master. Typically, the copy stored in master is silently updated
whenever the master key is changed. This default can be changed by using the DROP ENCRYPTION BY
SERVICE MASTER KEY option of ALTER MASTER KEY. A master key that is not encrypted by the service
master key must be opened by using the OPEN MASTER KEY statement and a password.
The is_master_key_encrypted_by_server column of the sys.databases catalog view in master indicates whether
the database master key is encrypted by the service master key.
Information about the database master key is visible in the sys.symmetric_keys catalog view.
For SQL Server and Parallel Data Warehouse, the master key is typically protected by the service master key
and by at least one password. If the database is physically moved to a different server (log shipping, restoring
a backup, and so on), the database contains a copy of the master key encrypted by the original server's
service master key (unless this encryption was explicitly removed by using the ALTER MASTER KEY DDL), and a
copy encrypted by each password specified during either CREATE MASTER KEY or subsequent ALTER MASTER
KEY operations. To recover the master key, and all the data encrypted by using the master key as the root in
the key hierarchy, after the database has been moved, the user must either use the OPEN MASTER KEY
statement with one of the passwords used to protect the master key, restore a backup of the master key, or
restore a backup of the original service master key on the new server.
For SQL Database and SQL Data Warehouse, the password protection is not considered to be a safety
mechanism to prevent data loss when the database is moved from one server to another, because the service
master key protection of the master key is managed by the Microsoft Azure platform. Therefore, the master
key password is optional in SQL Database and SQL Data Warehouse.
IMPORTANT
You should back up the master key by using BACKUP MASTER KEY and store the backup in a secure, off-site location.
The service master key and database master keys are protected by using the AES-256 algorithm.
Permissions
Requires CONTROL permission on the database.
Examples
The following example creates a database master key for the current database. The key is encrypted using the
password 23987hxJ#KL95234nl0zBe .
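A sketch of the statement this example describes, using the password given above:

```sql
-- Creates the database master key for the current database,
-- encrypted with the password stated in the text.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '23987hxJ#KL95234nl0zBe';
```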
See Also
sys.symmetric_keys (Transact-SQL)
sys.databases (Transact-SQL)
OPEN MASTER KEY (Transact-SQL)
ALTER MASTER KEY (Transact-SQL)
DROP MASTER KEY (Transact-SQL)
CLOSE MASTER KEY (Transact-SQL)
Encryption Hierarchy
CREATE MESSAGE TYPE (Transact-SQL)
5/4/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new message type. A message type defines the name of a message and the validation that Service
Broker performs on messages that have that name. Both sides of a conversation must define the same message
types.
Transact-SQL Syntax Conventions
Syntax
CREATE MESSAGE TYPE message_type_name
[ AUTHORIZATION owner_name ]
[ VALIDATION = { NONE
| EMPTY
| WELL_FORMED_XML
| VALID_XML WITH SCHEMA COLLECTION schema_collection_name
} ]
[ ; ]
Arguments
message_type_name
Is the name of the message type to create. A new message type is created in the current database and owned by
the principal specified in the AUTHORIZATION clause. Server, database, and schema names cannot be specified.
The message_type_name can be up to 128 characters.
AUTHORIZATION owner_name
Sets the owner of the message type to the specified database user or role. When the current user is dbo or sa,
owner_name can be the name of any valid user or role. Otherwise, owner_name must be the name of the current
user, the name of a user who the current user has IMPERSONATE permission for, or the name of a role to which
the current user belongs. When this clause is omitted, the message type belongs to the current user.
VALIDATION
Specifies how Service Broker validates the message body for messages of this type. When this clause is not
specified, validation defaults to NONE.
NONE
Specifies that no validation is performed. The message body can contain data, or it can be NULL.
EMPTY
Specifies that the message body must be NULL.
WELL_FORMED_XML
Specifies that the message body must contain well-formed XML.
VALID_XML WITH SCHEMA COLLECTION schema_collection_name
Specifies that the message body must contain XML that complies with a schema in the specified schema
collection. The schema_collection_name must be the name of an existing XML schema collection.
Remarks
Service Broker validates incoming messages. When a message contains a message body that does not comply
with the validation type specified, Service Broker discards the invalid message and returns an error message to the
service that sent the message.
Both sides of a conversation must define the same name for a message type. To help with troubleshooting, both
sides of a conversation typically specify the same validation for the message type, although Service Broker does
not require that both sides use the same validation.
A message type cannot be a temporary object. Message type names that start with # are allowed, but they are
permanent objects.
Permissions
Permission for creating a message type defaults to members of the db_ddladmin or db_owner fixed database
roles and the sysadmin fixed server role.
REFERENCES permission for a message type defaults to the owner of the message type, members of the
db_owner fixed database role, and members of the sysadmin fixed server role.
When the CREATE MESSAGE TYPE statement specifies a schema collection, the user executing the statement
must have REFERENCES permission on the schema collection specified.
Examples
A. Creating a message type containing well-formed XML
The following example creates a new message type that contains well-formed XML.
<xsd:complexType name="ItemDetailType">
<xsd:sequence>
<xsd:element name="Date" type="xsd:date"/>
<xsd:element name="CostCenter" type="xsd:string"/>
<xsd:element name="Total" type="xsd:decimal"/>
<xsd:element name="Currency" type="xsd:string"/>
<xsd:element name="Description" type="xsd:string"/>
</xsd:sequence>
</xsd:complexType>
</xsd:schema>' ;
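For VALID_XML validation, an end-to-end sketch (all object names here are illustrative) of creating a schema collection and a message type validated against it:

```sql
-- Create an XML schema collection to validate message bodies
-- (schema body abbreviated; the collection name is illustrative).
CREATE XML SCHEMA COLLECTION ExpenseReportSchema AS
N'<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="Date" type="xsd:date"/>
</xsd:schema>';
GO

-- Service Broker rejects any message of this type whose body does not
-- conform to a schema in the collection.
CREATE MESSAGE TYPE
    [//Adventure-Works.com/Expenses/SubmitExpense]
    VALIDATION = VALID_XML WITH SCHEMA COLLECTION ExpenseReportSchema;
```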
See Also
ALTER MESSAGE TYPE (Transact-SQL)
DROP MESSAGE TYPE (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
5/3/2018 • 6 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a function in the current database that maps the rows of a table or index into partitions based on the
values of a specified column. Using CREATE PARTITION FUNCTION is the first step in creating a partitioned
table or index. In SQL Server 2017, a table or index can have a maximum of 15,000 partitions.
Transact-SQL Syntax Conventions
Syntax
CREATE PARTITION FUNCTION partition_function_name ( input_parameter_type )
AS RANGE [ LEFT | RIGHT ]
FOR VALUES ( [ boundary_value [ ,...n ] ] )
[ ; ]
Arguments
partition_function_name
Is the name of the partition function. Partition function names must be unique within the database and comply
with the rules for identifiers.
input_parameter_type
Is the data type of the column used for partitioning. All data types are valid for use as partitioning columns,
except text, ntext, image, xml, timestamp, varchar(max), nvarchar(max), varbinary(max), alias data types,
or CLR user-defined data types.
The actual column, known as a partitioning column, is specified in the CREATE TABLE or CREATE INDEX
statement.
boundary_value
Specifies the boundary values for each partition of a partitioned table or index that uses partition_function_name.
If boundary_value is empty, the partition function maps the whole table or index using partition_function_name
into a single partition. Only one partitioning column, specified in a CREATE TABLE or CREATE INDEX statement,
can be used.
boundary_value is a constant expression that can reference variables. This includes user-defined type variables, or
functions and user-defined functions. It cannot reference Transact-SQL expressions. boundary_value must either
match or be implicitly convertible to the data type supplied in input_parameter_type, and cannot be truncated
during implicit conversion in a way that the size and scale of the value does not match that of its corresponding
input_parameter_type.
NOTE
If boundary_value consists of datetime or smalldatetime literals, these literals are evaluated assuming that us_english is
the session language. This behavior is deprecated. To make sure the partition function definition behaves as expected for all
session languages, we recommend that you use constants that are interpreted the same way for all language settings, such
as the yyyymmdd format; or explicitly convert literals to a specific style. To determine the language session of your server,
run SELECT @@LANGUAGE .
...n
Specifies the number of values supplied by boundary_value, not to exceed 14,999. The number of partitions
created is equal to n + 1. The values do not have to be listed in order. If the values are not in order, the Database
Engine sorts them, creates the function, and returns a warning that the values are not provided in order. The
Database Engine returns an error if n includes any duplicate values.
LEFT | RIGHT
Specifies to which side of each boundary value interval, left or right, the boundary_value [ ,...n ] belongs, when
interval values are sorted by the Database Engine in ascending order from left to right. If not specified, LEFT is
the default.
Remarks
The scope of a partition function is limited to the database that it is created in. Within the database, partition
functions reside in a separate namespace from the other functions.
Any rows whose partitioning column has null values are placed in the left-most partition, unless NULL is
specified as a boundary value and RIGHT is indicated. In this case, the left-most partition is an empty partition,
and NULL values are placed in the following partition.
Permissions
Any one of the following permissions can be used to execute CREATE PARTITION FUNCTION:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition function is being created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition function is being created.
Examples
A. Creating a RANGE LEFT partition function on an int column
The following partition function will partition a table or index into four partitions.
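The function's definition is not shown here; one that produces the boundaries in the table below (the name myRangePF1 matches the partition scheme examples later in this section) would be:

```sql
-- RANGE LEFT: each boundary value belongs to the partition on its left,
-- so partition 1 holds col1 <= 1, partition 2 holds col1 <= 100, etc.
CREATE PARTITION FUNCTION myRangePF1 (int)
    AS RANGE LEFT FOR VALUES (1, 100, 1000);
```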
The following table shows how a table that uses this partition function on partitioning column col1 would be
partitioned.
Partition     Values
1             col1 <= 1
2             col1 > 1 AND col1 <= 100
3             col1 > 100 AND col1 <= 1000
4             col1 > 1000
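The next table describes the RANGE RIGHT counterpart. A sketch producing those boundaries (the name myRangeRightPF is illustrative):

```sql
-- RANGE RIGHT: each boundary value belongs to the partition on its right,
-- so partition 2 starts at col1 >= 1, partition 3 at col1 >= 100, etc.
CREATE PARTITION FUNCTION myRangeRightPF (int)
    AS RANGE RIGHT FOR VALUES (1, 100, 1000);
```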
The following table shows how a table that uses this partition function on partitioning column col1 would be
partitioned.
Partition     Values
1             col1 < 1
2             col1 >= 1 AND col1 < 100
3             col1 >= 100 AND col1 < 1000
4             col1 >= 1000
The following table shows how a table or index that uses this partition function on partitioning column datecol
would be partitioned.
PARTITION 1 2 ... 11 12
The following table shows how a table that uses this partition function on partitioning column col1 would be
partitioned.
Partition     Values
1             col1 < EX...
2             col1 >= EX... AND col1 < RXE...
3             col1 >= RXE... AND col1 < XR...
4             col1 >= XR...
See Also
Partitioned Tables and Indexes
$PARTITION (Transact-SQL)
ALTER PARTITION FUNCTION (Transact-SQL)
DROP PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
CREATE TABLE (Transact-SQL)
CREATE INDEX (Transact-SQL)
ALTER INDEX (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.partition_functions (Transact-SQL)
sys.partition_parameters (Transact-SQL)
sys.partition_range_values (Transact-SQL)
sys.partitions (Transact-SQL)
sys.tables (Transact-SQL)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a scheme in the current database that maps the partitions of a partitioned table or index to filegroups.
The number and domain of the partitions of a partitioned table or index are determined in a partition function.
A partition function must first be created in a CREATE PARTITION FUNCTION statement before creating a
partition scheme.
NOTE
In Azure SQL Database only primary filegroups are supported.
Syntax
CREATE PARTITION SCHEME partition_scheme_name
AS PARTITION partition_function_name
[ ALL ] TO ( { file_group_name | [ PRIMARY ] } [ ,...n ] )
[ ; ]
Arguments
partition_scheme_name
Is the name of the partition scheme. Partition scheme names must be unique within the database and comply
with the rules for identifiers.
partition_function_name
Is the name of the partition function using the partition scheme. Partitions created by the partition function are
mapped to the filegroups specified in the partition scheme. partition_function_name must already exist in the
database. A single partition cannot contain both FILESTREAM and non-FILESTREAM filegroups.
ALL
Specifies that all partitions map to the filegroup provided in file_group_name, or to the primary filegroup if
[PRIMARY] is specified. If ALL is specified, only one file_group_name can be specified.
file_group_name | [ PRIMARY ] [ ,...n]
Specifies the names of the filegroups to hold the partitions specified by partition_function_name.
file_group_name must already exist in the database.
If [PRIMARY] is specified, the partition is stored on the primary filegroup. If ALL is specified, only one
file_group_name can be specified. Partitions are assigned to filegroups, starting with partition 1, in the order in
which the filegroups are listed in [,...n]. The same file_group_name can be specified more than one time in [,...n].
If n is not sufficient to hold the number of partitions specified in partition_function_name, CREATE PARTITION
SCHEME fails with an error.
If partition_function_name generates fewer partitions than filegroups, the first unassigned filegroup is marked
NEXT USED, and an information message displays naming the NEXT USED filegroup. If ALL is specified, the
sole file_group_name maintains its NEXT USED property for this partition_function_name. The NEXT USED
filegroup will receive an additional partition if one is created in an ALTER PARTITION FUNCTION statement.
To create additional unassigned filegroups to hold new partitions, use ALTER PARTITION SCHEME.
When you specify the primary filegroup in file_group_name [ 1,...n], PRIMARY must be delimited, as in
[PRIMARY], because it is a keyword.
Only PRIMARY is supported for SQL Database. See example E below.
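For SQL Database, a sketch of mapping all partitions to the primary filegroup (the scheme name is illustrative; a partition function such as myRangePF1 from example A is assumed to exist):

```sql
-- In Azure SQL Database, all partitions must map to PRIMARY,
-- and the keyword must be delimited with brackets.
CREATE PARTITION SCHEME myPrimaryPS
    AS PARTITION myRangePF1
    ALL TO ( [PRIMARY] );
```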
Permissions
The following permissions can be used to execute CREATE PARTITION SCHEME:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed
server role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition scheme is being created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition scheme is being created.
Examples
A. Creating a partition scheme that maps each partition to a different filegroup
The following example creates a partition function to partition a table or index into four partitions. A partition
scheme is then created that specifies the filegroups to hold each one of the four partitions. This example
assumes the filegroups already exist in the database.
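A sketch of the statements this example describes (the scheme name and the filegroup names test3fg and test4fg are illustrative; myRangePF1 and test1fg/test2fg appear elsewhere in this section):

```sql
CREATE PARTITION FUNCTION myRangePF1 (int)
    AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
-- Each of the four partitions goes to its own filegroup,
-- in the order the filegroups are listed.
CREATE PARTITION SCHEME myRangePS1
    AS PARTITION myRangePF1
    TO (test1fg, test2fg, test3fg, test4fg);
```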
The partitions of a table that uses partition function myRangePF1 on partitioning column col1 would be assigned
as shown in the following table.
Partition     Values
1             col1 <= 1
2             col1 > 1 AND col1 <= 100
3             col1 > 100 AND col1 <= 1000
4             col1 > 1000
B. Creating a partition scheme that maps multiple partitions to the same filegroup
If all the partitions map to the same filegroup, use the ALL keyword. If multiple, but not all, partitions map
to the same filegroup, the filegroup name must be repeated, as shown in the following example.
CREATE PARTITION FUNCTION myRangePF2 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS2
AS PARTITION myRangePF2
TO ( test1fg, test1fg, test1fg, test2fg );
The partitions of a table that uses partition function myRangePF2 on partitioning column col1 would be assigned
as shown in the following table.
Partition     Values
1             col1 <= 1
2             col1 > 1 AND col1 <= 100
3             col1 > 100 AND col1 <= 1000
4             col1 > 1000
C. Creating a partition scheme that maps all partitions to the same filegroup
The following example creates the same partition function as in the previous examples, and a partition scheme
is created that maps all partitions to the same filegroup.
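A sketch of what this example describes (the scheme name is illustrative; myRangePF2 is the function from example B):

```sql
-- ALL maps every partition created by myRangePF2 to a single filegroup.
CREATE PARTITION SCHEME myRangePS3
    AS PARTITION myRangePF2
    ALL TO (test1fg);
```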
See Also
CREATE PARTITION FUNCTION (Transact-SQL)
ALTER PARTITION SCHEME (Transact-SQL)
DROP PARTITION SCHEME (Transact-SQL)
EVENTDATA (Transact-SQL)
Create Partitioned Tables and Indexes
sys.partition_schemes (Transact-SQL)
sys.data_spaces (Transact-SQL)
sys.destination_data_spaces (Transact-SQL)
sys.partitions (Transact-SQL)
sys.tables (Transact-SQL)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
CREATE PROCEDURE (Transact-SQL)
5/3/2018 • 33 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a Transact-SQL or common language runtime (CLR ) stored procedure in SQL Server, Azure SQL
Database, Azure SQL Data Warehouse and Parallel Data Warehouse. Stored procedures are similar to procedures
in other programming languages in that they can:
Accept input parameters and return multiple values in the form of output parameters to the calling
procedure or batch.
Contain programming statements that perform operations in the database, including calling other
procedures.
Return a status value to a calling procedure or batch to indicate success or failure (and the reason for
failure).
Use this statement to create a permanent procedure in the current database or a temporary procedure in
the tempdb database.
NOTE
The integration of .NET Framework CLR into SQL Server is discussed in this topic. CLR integration does not apply to Azure
SQL Database.
Jump to Simple Examples to skip the details of the syntax and get to a quick example of a basic stored procedure.
Transact-SQL Syntax Conventions
Syntax
-- Transact-SQL Syntax for Stored Procedures in SQL Server and Azure SQL Database
<procedure_option> ::=
[ ENCRYPTION ]
[ RECOMPILE ]
[ EXECUTE AS Clause ]
-- Transact-SQL Syntax for CLR Stored Procedures
<set_option> ::=
LANGUAGE = [ N ] 'language'
| TRANSACTION ISOLATION LEVEL = { SNAPSHOT | REPEATABLE READ | SERIALIZABLE }
| [ DATEFIRST = number ]
| [ DATEFORMAT = format ]
| [ DELAYED_DURABILITY = { OFF | ON } ]
Arguments
OR ALTER
Applies to: Azure SQL Database, SQL Server (starting with SQL Server 2016 (13.x) SP1).
Alters the procedure if it already exists.
schema_name
The name of the schema to which the procedure belongs. Procedures are schema-bound. If a schema name is not
specified when the procedure is created, the default schema of the user who is creating the procedure is
automatically assigned.
procedure_name
The name of the procedure. Procedure names must comply with the rules for identifiers and must be unique
within the schema.
Avoid the use of the sp_ prefix when naming procedures. This prefix is used by SQL Server to designate system
procedures. Using the prefix can cause application code to break if there is a system procedure with the same
name.
Local or global temporary procedures can be created by using one number sign (#) before procedure_name
(#procedure_name) for local temporary procedures, and two number signs for global temporary procedures
(##procedure_name). A local temporary procedure is visible only to the connection that created it and is dropped
when that connection is closed. A global temporary procedure is available to all connections and is dropped at the
end of the last session using the procedure. Temporary names cannot be specified for CLR procedures.
The complete name for a procedure or a global temporary procedure, including ##, cannot exceed 128 characters.
The complete name for a local temporary procedure, including #, cannot exceed 116 characters.
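A sketch of a local temporary procedure (names are illustrative):

```sql
-- Visible only to this connection; dropped when the connection closes.
CREATE PROCEDURE #usp_GetDate
AS
    SELECT GETDATE() AS CurrentDate;
GO
EXEC #usp_GetDate;
```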
; number
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
An optional integer that is used to group procedures of the same name. These grouped procedures can be
dropped together by using one DROP PROCEDURE statement.
NOTE
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature.
Numbered procedures cannot use the xml or CLR user-defined types and cannot be used in a plan guide.
@ parameter
A parameter declared in the procedure. Specify a parameter name by using the at sign (@) as the first character.
The parameter name must comply with the rules for identifiers. Parameters are local to the procedure; the same
parameter names can be used in other procedures.
One or more parameters can be declared; the maximum is 2,100. The value of each declared parameter must be
supplied by the user when the procedure is called unless a default value for the parameter is defined or the value
is set to equal another parameter. If a procedure contains table-valued parameters, and the parameter is missing
in the call, an empty table is passed in. Parameters can take the place only of constant expressions; they cannot be
used instead of table names, column names, or the names of other database objects. For more information, see
EXECUTE (Transact-SQL ).
Parameters cannot be declared if FOR REPLICATION is specified.
[ type_schema_name. ] data_type
The data type of the parameter and the schema to which the data type belongs.
Guidelines for Transact-SQL procedures:
All Transact-SQL data types can be used as parameters.
You can use the user-defined table type to create table-valued parameters. Table-valued parameters can
only be INPUT parameters and must be accompanied by the READONLY keyword. For more information,
see Use Table-Valued Parameters (Database Engine)
cursor data types can only be OUTPUT parameters and must be accompanied by the VARYING keyword.
Guidelines for CLR procedures:
All of the native SQL Server data types that have an equivalent in managed code can be used as
parameters. For more information about the correspondence between CLR types and SQL Server system
data types, see Mapping CLR Parameter Data. For more information about SQL Server system data types
and their syntax, see Data Types (Transact-SQL ).
Table-valued or cursor data types cannot be used as parameters.
If the data type of the parameter is a CLR user-defined type, you must have EXECUTE permission on the
type.
VARYING
Specifies the result set supported as an output parameter. This parameter is dynamically constructed by the
procedure and its contents may vary. Applies only to cursor parameters. This option is not valid for CLR
procedures.
default
A default value for a parameter. If a default value is defined for a parameter, the procedure can be executed
without specifying a value for that parameter. The default value must be a constant or it can be NULL. The
constant value can be in the form of a wildcard, making it possible to use the LIKE keyword when passing the
parameter into the procedure.
Default values are recorded in the sys.parameters.default column only for CLR procedures. That column is
NULL for Transact-SQL procedure parameters.
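A sketch of a parameter default containing a wildcard, used with LIKE (the procedure, table, and column names are illustrative):

```sql
CREATE PROCEDURE dbo.usp_FindEmployees
    @LastName varchar(40) = 'D%'      -- wildcard default
AS
    SELECT FirstName, LastName
    FROM dbo.Employees
    WHERE LastName LIKE @LastName;    -- default matches names starting with D
GO
-- Executes with the default pattern:
EXEC dbo.usp_FindEmployees;
```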
OUT | OUTPUT
Indicates that the parameter is an output parameter. Use OUTPUT parameters to return values to the caller of the
procedure. text, ntext, and image parameters cannot be used as OUTPUT parameters, unless the procedure is a
CLR procedure. An output parameter can be a cursor placeholder, unless the procedure is a CLR procedure. A
table-value data type cannot be specified as an OUTPUT parameter of a procedure.
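A sketch of an OUTPUT parameter returning a value to the caller (names are illustrative):

```sql
CREATE PROCEDURE dbo.usp_CountOrders
    @CustomerID int,
    @OrderCount int OUTPUT            -- value is returned to the caller
AS
    SELECT @OrderCount = COUNT(*)
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
GO

-- The caller must also specify OUTPUT to receive the value.
DECLARE @Count int;
EXEC dbo.usp_CountOrders @CustomerID = 1, @OrderCount = @Count OUTPUT;
SELECT @Count AS OrderCount;
```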
READONLY
Indicates that the parameter cannot be updated or modified within the body of the procedure. If the parameter
type is a table-value type, READONLY must be specified.
RECOMPILE
Indicates that the Database Engine does not cache a query plan for this procedure, forcing it to be compiled each
time it is executed. For more information regarding the reasons for forcing a recompile, see Recompile a Stored
Procedure. This option cannot be used when FOR REPLICATION is specified or for CLR procedures.
To instruct the Database Engine to discard query plans for individual queries inside a procedure, use the
RECOMPILE query hint in the definition of the query. For more information, see Query Hints (Transact-SQL ).
ENCRYPTION
Applies to: SQL Server ( SQL Server 2008 through SQL Server 2017), Azure SQL Database.
Indicates that SQL Server converts the original text of the CREATE PROCEDURE statement to an obfuscated
format. The output of the obfuscation is not directly visible in any of the catalog views in SQL Server. Users who
have no access to system tables or database files cannot retrieve the obfuscated text. However, the text is available
to privileged users who can either access system tables over the DAC port or directly access database files. Also,
users who can attach a debugger to the server process can retrieve the decrypted procedure from memory at
runtime. For more information about accessing system metadata, see Metadata Visibility Configuration.
This option is not valid for CLR procedures.
Procedures created with this option cannot be published as part of SQL Server replication.
EXECUTE AS clause
Specifies the security context under which to execute the procedure.
For natively compiled stored procedures, starting SQL Server 2016 (13.x) and in Azure SQL Database, there are
no limitations on the EXECUTE AS clause. In SQL Server 2014 (12.x) the SELF, OWNER, and 'user_name' clauses
are supported with natively compiled stored procedures.
For more information, see EXECUTE AS Clause (Transact-SQL ).
FOR REPLICATION
Applies to: SQL Server ( SQL Server 2008 through SQL Server 2017), Azure SQL Database.
Specifies that the procedure is created for replication. Consequently, it cannot be executed on the Subscriber. A
procedure created with the FOR REPLICATION option is used as a procedure filter and is executed only during
replication. Parameters cannot be declared if FOR REPLICATION is specified. FOR REPLICATION cannot be
specified for CLR procedures. The RECOMPILE option is ignored for procedures created with FOR
REPLICATION.
A FOR REPLICATION procedure has an object type RF in sys.objects and sys.procedures.
{ [ BEGIN ] sql_statement [;] [ ...n ] [ END ] }
One or more Transact-SQL statements comprising the body of the procedure. You can use the optional BEGIN
and END keywords to enclose the statements. For information, see the Best Practices, General Remarks, and
Limitations and Restrictions sections that follow.
EXTERNAL NAME assembly_name.class_name.method_name
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies the method of a .NET Framework assembly for a CLR procedure to reference. class_name must be a
valid SQL Server identifier and must exist as a class in the assembly. If the class has a namespace-qualified name
that uses a period (.) to separate namespace parts, the class name must be delimited by using brackets ([]) or
quotation marks (""). The specified method must be a static method of the class.
By default, SQL Server cannot execute CLR code. You can create, modify, and drop database objects that
reference common language runtime modules; however, you cannot execute these references in SQL Server until
you enable the clr enabled option. To enable the option, use sp_configure.
NOTE
CLR procedures are not supported in a contained database.
ATOMIC WITH
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates atomic stored procedure execution. Changes are either all committed, or all rolled back when an
exception is thrown. The ATOMIC WITH block is required for natively compiled stored procedures.
If the procedure RETURNs (explicitly through the RETURN statement, or implicitly by completing execution), the
work performed by the procedure is committed. If the procedure THROWs, the work performed by the
procedure is rolled back.
XACT_ABORT is ON by default inside an atomic block and cannot be changed. XACT_ABORT specifies whether
SQL Server automatically rolls back the current transaction when a Transact-SQL statement raises a run-time
error.
The following SET options are always ON in the ATOMIC block; the options cannot be changed.
CONCAT_NULL_YIELDS_NULL
QUOTED_IDENTIFIER
ARITHABORT
NOCOUNT
ANSI_NULLS
ANSI_WARNINGS
SET options cannot be changed inside ATOMIC blocks. The SET options in the user session are not used in the
scope of natively compiled stored procedures. These options are fixed at compile time.
BEGIN, ROLLBACK, and COMMIT operations cannot be used inside an atomic block.
There is one ATOMIC block per natively compiled stored procedure, at the outer scope of the procedure. The
blocks cannot be nested. For more information about atomic blocks, see Natively Compiled Stored Procedures.
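A sketch of the required shape of a natively compiled procedure (names are illustrative; assumes a memory-optimized table dbo.T exists):

```sql
CREATE PROCEDURE dbo.usp_NativeSample
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Work inside the block is committed on RETURN and
    -- rolled back if the procedure throws.
    UPDATE dbo.T SET c1 = c1 + 1;
END;
```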
NULL | NOT NULL
Determines whether null values are allowed in a parameter. NULL is the default.
NATIVE_COMPILATION
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates that the procedure is natively compiled. NATIVE_COMPILATION, SCHEMABINDING, and EXECUTE
AS can be specified in any order. For more information, see Natively Compiled Stored Procedures.
SCHEMABINDING
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Ensures that tables that are referenced by a procedure cannot be dropped or altered. SCHEMABINDING is
required in natively compiled stored procedures. (For more information, see Natively Compiled Stored
Procedures.) The SCHEMABINDING restrictions are the same as they are for user-defined functions. For more
information, see the SCHEMABINDING section in CREATE FUNCTION (Transact-SQL ).
LANGUAGE = [N] 'language'
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Equivalent to the SET LANGUAGE (Transact-SQL) session option. LANGUAGE = [N] 'language' is required.
TRANSACTION ISOLATION LEVEL
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Required for natively compiled stored procedures. Specifies the transaction isolation level for the stored
procedure. The options are as follows:
For more information about these options, see SET TRANSACTION ISOLATION LEVEL (Transact-SQL).
REPEATABLE READ
Specifies that statements cannot read data that has been modified but not yet committed by other transactions. If
another transaction modifies data that has been read by the current transaction, the current transaction fails.
SERIALIZABLE
Specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
If another transaction modifies data that has been read by the current transaction, the current transaction
fails.
If another transaction inserts new rows with key values that would fall in the range of keys read by any
statements in the current transaction, the current transaction fails.
SNAPSHOT
Specifies that data read by any statement in a transaction is the transactionally consistent version of the data that
existed at the start of the transaction.
DATEFIRST = number
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies the first day of the week as a number from 1 through 7. DATEFIRST is optional. If it is not specified, the
setting is inferred from the specified language.
For more information, see SET DATEFIRST (Transact-SQL).
DATEFORMAT = format
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies the order of the month, day, and year date parts for interpreting date, smalldatetime, datetime,
datetime2 and datetimeoffset character strings. DATEFORMAT is optional. If it is not specified, the setting is
inferred from the specified language.
For more information, see SET DATEFORMAT (Transact-SQL).
DELAYED_DURABILITY = { OFF | ON }
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
SQL Server transaction commits can be either fully durable, the default, or delayed durable.
For more information, see Control Transaction Durability.
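Taken together, these options appear in the ATOMIC block header of a natively compiled procedure. A minimal sketch, assuming a memory-optimized table dbo.T exists (the procedure, table, and column names here are illustrative):

```sql
CREATE PROCEDURE dbo.usp_TouchRow
    @id int NOT NULL
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,  -- required
    LANGUAGE = N'us_english',                -- required
    DELAYED_DURABILITY = OFF                 -- optional; commits are fully durable
)
    UPDATE dbo.T SET touched = 1 WHERE id = @id;
END;
GO
```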
Simple Examples
To help you get started, here are two quick examples:
SELECT DB_NAME() AS ThisDB; returns the name of the current database.
You can wrap that statement in a stored procedure, such as:
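A minimal wrapper might look like this (the procedure name is illustrative):

```sql
CREATE PROCEDURE dbo.What_DB_is_this
AS
SELECT DB_NAME() AS ThisDB;
GO
```

Running EXEC dbo.What_DB_is_this; then returns the name of the current database.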
Slightly more complex is to provide an input parameter to make the procedure more flexible. For example:
Provide a database id number when you call the procedure. For example, EXEC What_DB_is_that 2; returns
tempdb.
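A sketch of a parameterized procedure matching that call:

```sql
CREATE PROCEDURE What_DB_is_that
    @ID int
AS
SELECT DB_NAME(@ID) AS ThatDB;
GO
```

Database id 2 is tempdb, so EXEC What_DB_is_that 2; returns tempdb.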
See Examples towards the end of this topic for many more examples.
Best Practices
Although this is not an exhaustive list of best practices, these suggestions may improve procedure performance.
Use the SET NOCOUNT ON statement as the first statement in the body of the procedure. That is, place it
just after the AS keyword. This turns off messages that SQL Server sends back to the client after any
SELECT, INSERT, UPDATE, MERGE, and DELETE statements are executed. Overall performance of the
database and application is improved by eliminating this unnecessary network overhead. For information,
see SET NOCOUNT (Transact-SQL).
Use schema names when creating or referencing database objects in the procedure. It takes less
processing time for the Database Engine to resolve object names if it does not have to search multiple
schemas. It also prevents permission and access problems caused by a user’s default schema being
assigned when objects are created without specifying the schema.
Avoid wrapping functions around columns specified in the WHERE and JOIN clauses. Doing so makes the
columns non-deterministic and prevents the query processor from using indexes.
Avoid using scalar functions in SELECT statements that return many rows of data. Because the scalar
function must be applied to every row, the resulting behavior is like row-based processing and degrades
performance.
Avoid the use of SELECT * . Instead, specify the required column names. This can prevent some Database
Engine errors that stop procedure execution. For example, a SELECT * statement that returns data from a
12 column table and then inserts that data into a 12 column temporary table succeeds until the number or
order of columns in either table is changed.
Avoid processing or returning too much data. Narrow the results as early as possible in the procedure
code so that any subsequent operations performed by the procedure are done using the smallest data set
possible. Send just the essential data to the client application. It is more efficient than sending extra data
across the network and forcing the client application to work through unnecessarily large result sets.
Use explicit transactions by using BEGIN/COMMIT TRANSACTION and keep transactions as short as
possible. Longer transactions mean longer record locking and a greater potential for deadlocking.
Use the Transact-SQL TRY…CATCH feature for error handling inside a procedure. TRY…CATCH can
encapsulate an entire block of Transact-SQL statements. This not only incurs less performance overhead,
it also makes error reporting more accurate with significantly less programming.
Use the DEFAULT keyword on all table columns that are referenced by CREATE TABLE or ALTER TABLE
Transact-SQL statements in the body of the procedure. This prevents passing NULL to columns that do
not allow null values.
Use NULL or NOT NULL for each column in a temporary table. The ANSI_DFLT_ON and
ANSI_DFLT_OFF options control the way the Database Engine assigns the NULL or NOT NULL attributes
to columns when these attributes are not specified in a CREATE TABLE or ALTER TABLE statement. If a
connection executes a procedure with different settings for these options than the connection that created
the procedure, the columns of the table created for the second connection can have different nullability and
exhibit different behavior. If NULL or NOT NULL is explicitly stated for each column, the temporary tables
are created by using the same nullability for all connections that execute the procedure.
Use modification statements that convert nulls and include logic that eliminates rows with null values from
queries. Be aware that in Transact-SQL, NULL is not an empty or "nothing" value. It is a placeholder for an
unknown value and can cause unexpected behavior, especially when querying for result sets or using
AGGREGATE functions.
Use the UNION ALL operator instead of the UNION or OR operators, unless there is a specific need for
distinct values. The UNION ALL operator requires less processing overhead because duplicates are not
filtered out of the result set.
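Several of these practices can be combined in a minimal skeleton; the AdventureWorks-style names below are illustrative only:

```sql
CREATE PROCEDURE HumanResources.usp_GetEmployeesByLastName
    @LastName nvarchar(50)
AS
SET NOCOUNT ON;                        -- first statement: suppress row-count messages
SELECT p.FirstName, p.LastName         -- name the required columns; no SELECT *
FROM Person.Person AS p                -- schema-qualified object reference
WHERE p.LastName = @LastName;          -- no function wrapped around the filtered column
GO
```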
General Remarks
There is no predefined maximum size of a procedure.
Variables specified in the procedure can be user-defined or system variables, such as @@SPID.
When a procedure is executed for the first time, it is compiled to determine an optimal access plan to retrieve the
data. Subsequent executions of the procedure may reuse the plan already generated if it still remains in the plan
cache of the Database Engine.
One or more procedures can execute automatically when SQL Server starts. The procedures must be created by
the system administrator in the master database and executed under the sysadmin fixed server role as a
background process. The procedures cannot have any input or output parameters. For more information, see
Execute a Stored Procedure.
Procedures are nested when one procedure calls another or executes managed code by referencing a CLR routine,
type, or aggregate. Procedures and managed code references can be nested up to 32 levels. The nesting level
increases by one when the called procedure or managed code reference begins execution and decreases by one
when the called procedure or managed code reference completes execution. Methods invoked from within the
managed code do not count against the nesting level limit. However, when a CLR stored procedure performs data
access operations through the SQL Server managed provider, an additional nesting level is added in the
transition from managed code to SQL.
Attempting to exceed the maximum nesting level causes the entire calling chain to fail. You can use the
@@NESTLEVEL function to return the nesting level of the current stored procedure execution.
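For example, @@NESTLEVEL can be inspected inside a procedure (the name is illustrative):

```sql
CREATE PROCEDURE dbo.usp_ShowNestLevel
AS
SET NOCOUNT ON;
SELECT @@NESTLEVEL AS NestLevel;  -- 1 when called directly; higher when called from another procedure
GO
```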
Interoperability
The Database Engine saves the settings of both SET QUOTED_IDENTIFIER and SET ANSI_NULLS when a
Transact-SQL procedure is created or modified. These original settings are used when the procedure is executed.
Therefore, any client session settings for SET QUOTED_IDENTIFIER and SET ANSI_NULLS are ignored when
the procedure is running.
Other SET options, such as SET ARITHABORT, SET ANSI_WARNINGS, or SET ANSI_PADDING, are not saved
when a procedure is created or modified. If the logic of the procedure depends on a particular setting, include a
SET statement at the start of the procedure to guarantee the appropriate setting. When a SET statement is
executed from a procedure, the setting remains in effect only until the procedure has finished running. The setting
is then restored to the value it had when the procedure was called. This enables individual clients to set the
options they want without affecting the logic of the procedure.
Any SET statement can be specified inside a procedure, except SET SHOWPLAN_TEXT and SET
SHOWPLAN_ALL. These must be the only statements in the batch. The SET option chosen remains in effect
during the execution of the procedure and then reverts to its former setting.
NOTE
SET ANSI_WARNINGS is not honored when passing parameters in a procedure or user-defined function, or when
declaring and setting variables in a batch statement. For example, if a variable is defined as char(3), and then set to a value larger
than three characters, the data is truncated to the defined size and the INSERT or UPDATE statement succeeds.
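A sketch of the truncation behavior the note describes:

```sql
DECLARE @c char(3);
SET @c = 'abcdef';  -- longer than char(3): silently truncated, no warning raised
SELECT @c;          -- returns 'abc'
```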
If the procedure makes changes on a remote instance of SQL Server, the changes cannot be rolled back. Remote
procedures do not take part in transactions.
For the Database Engine to reference the correct method when it is overloaded in the .NET Framework, the
method specified in the EXTERNAL NAME clause must have the following characteristics:
Be declared as a static method.
Receive the same number of parameters as the number of parameters of the procedure.
Use parameter types that are compatible with the data types of the corresponding parameters of the SQL
Server procedure. For information about matching SQL Server data types to the .NET Framework data
types, see Mapping CLR Parameter Data.
Metadata
The following table lists the catalog views and dynamic management views that you can use to return
information about stored procedures.
VIEW DESCRIPTION
To estimate the size of a compiled procedure, use the following Performance Monitor counters.
PERFORMANCE MONITOR OBJECT NAME | PERFORMANCE MONITOR COUNTER NAME
SQL Server: Plan Cache Object | Cache Pages*
*These counters are available for various categories of cache objects including ad hoc Transact-SQL, prepared
Transact-SQL, procedures, triggers, and so on. For more information, see SQL Server, Plan Cache Object.
Security
Permissions
Requires CREATE PROCEDURE permission in the database and ALTER permission on the schema in which the
procedure is being created, or requires membership in the db_ddladmin fixed database role.
For CLR stored procedures, requires ownership of the assembly referenced in the EXTERNAL NAME clause, or
REFERENCES permission on that assembly.
UPDATE dbo.Departments
SET kitchen_count = ISNULL(kitchen_count, 0) + @kitchen_count
WHERE id = @dept_id
END;
GO
A procedure created without NATIVE_COMPILATION cannot be altered to a natively compiled stored procedure.
For a discussion of programmability in natively compiled stored procedures, supported query surface area, and
operators, see Supported Features for Natively Compiled T-SQL Modules.
Examples
CATEGORY FEATURED SYNTAX ELEMENTS
Basic Syntax
Examples in this section demonstrate the basic functionality of the CREATE PROCEDURE statement using the
minimum required syntax.
A. Creating a simple Transact-SQL procedure
The following example creates a stored procedure that returns all employees (first and last names supplied), their
job titles, and their department names from a view in the AdventureWorks2012 database. This procedure does
not use any parameters. The example then demonstrates three methods of executing the procedure.
EXECUTE HumanResources.uspGetAllEmployees;
GO
-- Or
EXEC HumanResources.uspGetAllEmployees;
GO
-- Or, if this procedure is the first statement within a batch:
HumanResources.uspGetAllEmployees;
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database (if using an assembly created from
assembly_bits).
CREATE ASSEMBLY HandlingLOBUsingCLR
FROM '\\MachineName\HandlingLOBUsingCLR\bin\Debug\HandlingLOBUsingCLR.dll';
GO
CREATE PROCEDURE dbo.GetPhotoFromDB
(
@ProductPhotoID int,
@CurrentDirectory nvarchar(1024),
@FileName nvarchar(1024)
)
AS EXTERNAL NAME HandlingLOBUsingCLR.LargeObjectBinary.GetPhotoFromDB;
GO
Passing Parameters
Examples in this section demonstrate how to use input and output parameters to pass values to and from a
stored procedure.
D. Creating a procedure with input parameters
The following example creates a stored procedure that returns information for a specific employee by passing
values for the employee's first name and last name. This procedure accepts only exact matches for the parameters
passed.
The uspGetEmployees2 procedure can be executed in many combinations. Only a few possible combinations are
shown here.
EXECUTE HumanResources.uspGetEmployees2;
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'Wi%';
-- Or
EXECUTE HumanResources.uspGetEmployees2 @FirstName = N'%';
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'[CK]ars[OE]n';
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'Hesse', N'Stefen';
-- Or
EXECUTE HumanResources.uspGetEmployees2 N'H%', N'S%';
Execute uspGetList to return a list of Adventure Works products (Bikes) that cost less than $700. The OUTPUT
parameters @Cost and @ComparePrices are used with control-of-flow language to return a message in the
Messages window.
NOTE
The OUTPUT variable must be defined when the procedure is created and also when the variable is used. The parameter
name and variable name do not have to match; however, the data type and parameter positioning must match, unless
@ListPrice = variable is used.
H. Using an OUTPUT cursor parameter
The following example uses the OUTPUT cursor parameter to pass a cursor that is local to a procedure back to
the calling batch, procedure, or trigger.
First, create the procedure that declares and then opens a cursor on the Currency table:
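Such a procedure might be sketched as follows, assuming the AdventureWorks Sales.Currency table:

```sql
CREATE PROCEDURE dbo.uspCurrencyCursor
    @CurrencyCursor CURSOR VARYING OUTPUT
AS
SET NOCOUNT ON;
SET @CurrencyCursor = CURSOR
    FORWARD_ONLY STATIC FOR
    SELECT CurrencyCode, Name
    FROM Sales.Currency;
OPEN @CurrencyCursor;
GO
```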
Next, run a batch that declares a local cursor variable, executes the procedure to assign the cursor to the local
variable, and then fetches the rows from the cursor.
DECLARE @MyCursor CURSOR;
EXEC dbo.uspCurrencyCursor @CurrencyCursor = @MyCursor OUTPUT;
WHILE (@@FETCH_STATUS = 0)
BEGIN;
FETCH NEXT FROM @MyCursor;
END;
CLOSE @MyCursor;
DEALLOCATE @MyCursor;
GO
Error Handling
Examples in this section demonstrate methods to handle errors that might occur when the stored procedure is
executed.
J. Using TRY…CATCH
The following example uses the TRY…CATCH construct to return error information caught during the execution
of a stored procedure.
CREATE PROCEDURE Production.uspDeleteWorkOrder ( @WorkOrderID int )
AS
SET NOCOUNT ON;
BEGIN TRY
BEGIN TRANSACTION
-- Delete rows from the child table, WorkOrderRouting, for the specified work order.
DELETE FROM Production.WorkOrderRouting
WHERE WorkOrderID = @WorkOrderID;
-- Delete the rows from the parent table, WorkOrder, for the specified work order.
DELETE FROM Production.WorkOrder
WHERE WorkOrderID = @WorkOrderID;
COMMIT
END TRY
BEGIN CATCH
-- Determine if an error occurred.
IF @@TRANCOUNT > 0
ROLLBACK
END CATCH;
GO
EXEC Production.uspDeleteWorkOrder 13;
BEGIN TRY
BEGIN TRANSACTION
-- Delete the rows from the parent table, WorkOrder, for the specified work order.
DELETE FROM Production.WorkOrder
WHERE WorkOrderID = @WorkOrderID;
-- Delete rows from the child table, WorkOrderRouting, for the specified work order.
DELETE FROM Production.WorkOrderRouting
WHERE WorkOrderID = @WorkOrderID;
COMMIT TRANSACTION
END TRY
BEGIN CATCH
-- Determine if an error occurred.
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
END CATCH;
The WITH ENCRYPTION option obfuscates the definition of the procedure when querying the system catalog or
using metadata functions, as shown by the following examples.
Run sp_helptext:
definition
--------------------------------
NULL
See Also
ALTER PROCEDURE (Transact-SQL)
Control-of-Flow Language (Transact-SQL)
Cursors
Data Types (Transact-SQL)
DECLARE @local_variable (Transact-SQL)
DROP PROCEDURE (Transact-SQL)
EXECUTE (Transact-SQL)
EXECUTE AS (Transact-SQL)
Stored Procedures (Database Engine)
sp_procoption (Transact-SQL)
sp_recompile (Transact-SQL)
sys.sql_modules (Transact-SQL)
sys.parameters (Transact-SQL)
sys.procedures (Transact-SQL)
sys.sql_expression_dependencies (Transact-SQL)
sys.assembly_modules (Transact-SQL)
sys.numbered_procedures (Transact-SQL)
sys.numbered_procedure_parameters (Transact-SQL)
OBJECT_DEFINITION (Transact-SQL)
Create a Stored Procedure
Use Table-Valued Parameters (Database Engine)
sys.dm_sql_referenced_entities (Transact-SQL)
sys.dm_sql_referencing_entities (Transact-SQL)
CREATE QUEUE (Transact-SQL)
5/4/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new queue in a database. Queues store messages. When a message arrives for a service, Service Broker
puts the message on the queue associated with the service.
Transact-SQL Syntax Conventions
Syntax
CREATE QUEUE <object>
[ WITH
[ STATUS = { ON | OFF } [ , ] ]
[ RETENTION = { ON | OFF } [ , ] ]
[ ACTIVATION (
[ STATUS = { ON | OFF } , ]
PROCEDURE_NAME = <procedure> ,
MAX_QUEUE_READERS = max_readers ,
EXECUTE AS { SELF | 'user_name' | OWNER }
) [ , ] ]
[ POISON_MESSAGE_HANDLING (
[ STATUS = { ON | OFF } ] ) ]
]
[ ON { filegroup | [ DEFAULT ] } ]
[ ; ]
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
queue_name
}
<procedure> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
stored_procedure_name
}
Arguments
database_name (object)
Is the name of the database within which to create the new queue. database_name must specify the name of an
existing database. When database_name is not provided, the queue is created in the current database.
schema_name (object)
Is the name of the schema to which the new queue belongs. The schema defaults to the default schema for the
user that executes the statement. If the CREATE QUEUE statement is executed by a member of the sysadmin fixed
server role, or a member of the db_dbowner or db_ddladmin fixed database roles in the database specified by
database_name, schema_name can specify a schema other than the one associated with the login of the current
connection. Otherwise, schema_name must be the default schema for the user who executes the statement.
queue_name
Is the name of the queue to create. This name must meet the guidelines for SQL Server identifiers.
STATUS (Queue)
Specifies whether the queue is available (ON) or unavailable (OFF). When the queue is unavailable, no messages
can be added to the queue or removed from the queue. You can create the queue in an unavailable state to keep
messages from arriving on the queue until the queue is made available with an ALTER QUEUE statement. If this
clause is omitted, the default is ON, and the queue is available.
RETENTION
Specifies the retention setting for the queue. If RETENTION = ON, all messages sent or received on conversations
that use this queue are retained in the queue until the conversations have ended. This lets you retain messages for
auditing purposes, or to perform compensating transactions if an error occurs. If this clause is not specified, the
retention setting defaults to OFF.
NOTE
Setting RETENTION = ON can decrease performance. This setting should only be used if it is required for the application.
ACTIVATION
Specifies information about which stored procedure you have to start to process messages in this queue.
STATUS (Activation)
Specifies whether Service Broker starts the stored procedure. When STATUS = ON, the queue starts the stored
procedure specified with PROCEDURE_NAME when the number of procedures currently running is less than
MAX_QUEUE_READERS and when messages arrive on the queue faster than the stored procedures receive
messages. When STATUS = OFF, the queue does not start the stored procedure. If this clause is not specified, the
default is ON.
PROCEDURE_NAME = <procedure>
Specifies the name of the stored procedure to start to process messages in this queue. This value must be a SQL
Server identifier.
database_name (procedure)
Is the name of the database that contains the stored procedure.
schema_name (procedure)
Is the name of the schema that contains the stored procedure.
procedure_name
Is the name of the stored procedure.
MAX_QUEUE_READERS = max_readers
Specifies the maximum number of instances of the activation stored procedure that the queue starts at the same
time. The value of max_readers must be a number between 0 and 32767.
EXECUTE AS
Specifies the SQL Server database user account under which the activation stored procedure runs. SQL Server
must be able to check the permissions for this user at the time that the queue starts the stored procedure. For a
domain user, the server must be connected to the domain when the procedure is started or activation fails. For a
SQL Server user, the server can always check permissions.
SELF
Specifies that the stored procedure executes as the current user. (The database principal executing this CREATE
QUEUE statement.)
'user_name'
Is the name of the user who the stored procedure executes as. The user_name parameter must be a valid SQL
Server user specified as a SQL Server identifier. The current user must have IMPERSONATE permission for the
user_name specified.
OWNER
Specifies that the stored procedure executes as the owner of the queue.
POISON_MESSAGE_HANDLING
Specifies whether poison message handling is enabled for the queue. The default is ON.
A queue that has poison message handling set to OFF will not be disabled after five consecutive transaction
rollbacks. This allows for a custom poison message handling system to be defined by the application.
ON filegroup | [DEFAULT]
Specifies the SQL Server filegroup on which to create this queue. You can use the filegroup parameter to identify
a filegroup, or use the DEFAULT identifier to use the default filegroup for the service broker database. In the
context of this clause, DEFAULT is not a keyword, and must be delimited as an identifier. When no filegroup is
specified, the queue uses the default filegroup for the database.
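As a sketch, the options above might combine as follows; the procedure and user names are hypothetical:

```sql
CREATE QUEUE ExpenseQueue
    WITH STATUS = OFF,               -- created unavailable until ALTER QUEUE enables it
    ACTIVATION (
        PROCEDURE_NAME = dbo.expense_procedure,  -- hypothetical activation procedure
        MAX_QUEUE_READERS = 5,
        EXECUTE AS 'ExpenseUser' )               -- hypothetical database user
    ON [DEFAULT];                    -- DEFAULT delimited as an identifier
```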
Remarks
A queue can be the target of a SELECT statement. However, the contents of a queue can only be modified using
statements that operate on Service Broker conversations, such as SEND, RECEIVE, and END CONVERSATION. A
queue cannot be the target of an INSERT, UPDATE, DELETE, or TRUNCATE statement.
A queue cannot be a temporary object. Therefore, queue names starting with # are not valid.
Creating a queue in an inactive state lets you get the infrastructure in place for a service before allowing messages
to be received on the queue.
Service Broker does not stop activation stored procedures when there are no messages on the queue. An
activation stored procedure should exit when no messages are available on the queue for a short time.
Permissions for the activation stored procedure are checked when Service Broker starts the stored procedure, not
when the queue is created. The CREATE QUEUE statement does not verify that the user specified in the EXECUTE
AS clause has permission to execute the stored procedure specified in the PROCEDURE NAME clause.
When a queue is unavailable, Service Broker holds messages for services that use the queue in the transmission
queue for the database. The sys.transmission_queue catalog view provides a view of the transmission queue.
A queue is a schema-owned object. Queues appear in the sys.objects catalog view.
The following table lists the columns in a queue.
1=Ready to receive
E=Empty
N=None
X=XML
Permissions
Permission for creating a queue defaults to members of the db_ddladmin or db_owner fixed database roles and
members of the sysadmin fixed server role.
REFERENCES permission for a queue defaults to the owner of the queue, members of the db_ddladmin or
db_owner fixed database roles, and members of the sysadmin fixed server role.
RECEIVE permission for a queue defaults to the owner of the queue, members of the db_owner fixed database
role, and members of the sysadmin fixed server role.
Examples
A. Creating a queue with no parameters
The following example creates a queue that is available to receive messages. No activation stored procedure is
specified for the queue.
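With all defaults, the statement reduces to a single line (the queue name is illustrative):

```sql
CREATE QUEUE ExpenseQueue;
```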
See Also
ALTER QUEUE (Transact-SQL)
CREATE SERVICE (Transact-SQL)
DROP QUEUE (Transact-SQL)
RECEIVE (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE REMOTE SERVICE BINDING (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a binding that defines the security credentials to use to initiate a conversation with a remote service.
Transact-SQL Syntax Conventions
Syntax
CREATE REMOTE SERVICE BINDING binding_name
[ AUTHORIZATION owner_name ]
TO SERVICE 'service_name'
WITH USER = user_name [ , ANONYMOUS = { ON | OFF } ]
[ ; ]
Arguments
binding_name
Is the name of the remote service binding to be created. Server, database, and schema names cannot be specified.
The binding_name must be a valid sysname.
AUTHORIZATION owner_name
Sets the owner of the binding to the specified database user or role. When the current user is dbo or sa,
owner_name can be the name of any valid user or role. Otherwise, owner_name must be the name of the current
user, the name of a user who the current user has IMPERSONATE permissions for, or the name of a role to which
the current user belongs.
TO SERVICE 'service_name'
Specifies the remote service to bind to the user identified in the WITH USER clause.
USER = user_name
Specifies the database principal that owns the certificate associated with the remote service identified by the TO
SERVICE clause. This certificate is used for encryption and authentication of messages exchanged with the remote
service.
ANONYMOUS
Specifies whether anonymous authentication is used when communicating with the remote service. If
ANONYMOUS = ON, anonymous authentication is used and operations in the remote database occur as a
member of the public fixed database role. If ANONYMOUS = OFF, operations in the remote database occur as a
specific user in that database. If this clause is not specified, the default is OFF.
Remarks
Service Broker uses a remote service binding to locate the certificate to use for a new conversation. The public key
in the certificate associated with user_name is used to authenticate messages sent to the remote service and to
encrypt a session key that is then used to encrypt the conversation. The certificate for user_name must correspond
to the certificate for a user in the database that hosts the remote service.
A remote service binding is only necessary for initiating services that communicate with target services outside of
the SQL Server instance. A database that hosts an initiating service must contain remote service bindings for any
target services outside of the SQL Server instance. A database that hosts a target service need not contain remote
service bindings for the initiating services that communicate with the target service. When the initiator and target
services are in the same instance of SQL Server, no remote service binding is necessary. However, if a remote
service binding is present where the service_name specified for TO SERVICE matches the name of the local
service, Service Broker will use the binding.
When ANONYMOUS = ON, the initiating service connects to the target service as a member of the public fixed
database role. By default, members of this role do not have permission to connect to a database. To successfully
send a message, the target database must grant the public role CONNECT permission for the database and
SEND permission for the target service.
When a user owns more than one certificate, Service Broker selects the certificate with the latest expiration date
from among the certificates that are currently valid and marked as AVAILABLE FOR BEGIN_DIALOG.
Permissions
Permissions for creating a remote service binding default to the user named in the USER clause, members of the
db_owner fixed database role, members of the db_ddladmin fixed database role, and members of the sysadmin
fixed server role.
The user that executes the CREATE REMOTE SERVICE BINDING statement must have IMPERSONATE permission
for the principal specified in the statement.
A remote service binding may not be a temporary object. Remote service binding names beginning with # are
allowed, but are permanent objects.
Examples
A. Creating a remote service binding
The following example creates a binding for the service //Adventure-Works.com/services/AccountsPayable . Service
Broker uses the certificate owned by the APUser database principal to authenticate to the remote service and to
exchange the session encryption key with the remote service.
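A binding matching that description might be written as (the binding name is illustrative):

```sql
CREATE REMOTE SERVICE BINDING APBinding
    TO SERVICE '//Adventure-Works.com/services/AccountsPayable'
    WITH USER = APUser;
```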
See Also
ALTER REMOTE SERVICE BINDING (Transact-SQL)
DROP REMOTE SERVICE BINDING (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE REMOTE TABLE AS SELECT (Parallel Data
Warehouse)
5/4/2018
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Selects data from a Parallel Data Warehouse database and copies that data to a new table in an SMP SQL Server
database on a remote server. Parallel Data Warehouse uses the appliance, with all the benefits of MPP query
processing, to select the data for the remote copy. Use this for scenarios that require SQL Server functionality.
To configure the remote server, see "Remote Table Copy" in the Parallel Data Warehouse product documentation.
Transact-SQL Syntax Conventions (Transact-SQL)
Syntax
CREATE REMOTE TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name AT
('<connection_string>')
[ WITH ( BATCH_SIZE = batch_size ) ]
AS <select_statement>
[;]
<connection_string> ::=
Data Source = { IP_address | hostname } [, port ]; User ID = user_name; Password = password;
<select_statement> ::=
[ WITH <common_table_expression> [ ,...n ] ]
SELECT <select_criteria>
Arguments
database_name
The database to create the remote table in. database_name is a SQL Server database. Default is the default
database for the user login on the destination SQL Server instance.
schema_name
The schema for the new table. Default is the default schema for the user login on the destination SQL Server
instance.
table_name
The name of the new table. For details on permitted table names, see "Object Naming Rules" in the Parallel Data
Warehouse product documentation.
The remote table is created as a heap. It does not have check constraints or triggers. The collation of the remote
table columns is the same as the collation of the source table columns. This applies to columns of type char, nchar,
varchar, and nvarchar.
connection_string
A character string that specifies the Data Source , User ID , and Password parameters for connecting to the remote
server and database.
The connection string is a semicolon-delimited list of key and value pairs. Keywords are not case-sensitive. Spaces
between key and value pairs are ignored. However, values may be case-sensitive, depending on the data source.
Data Source
The parameter that specifies the name or IP address, and TCP port number for the remote SMP SQL Server.
hostname or IP_address
Name of the remote server computer or the IPv4 address of the remote server. IPv6 addresses are not supported.
You can specify a SQL Server named instance in the format Computer_Name\Instance_Name or
IP_address\Instance_Name. The server must be remote and therefore cannot be specified as (local).
TCP port number
The TCP port number for the connection. You can specify a TCP port number from 0 to 65535 for an instance of
SQL Server that is not listening on the default port 1433. For example: ServerA,1450 or 10.192.14.27,1435
NOTE
We recommend connecting to a remote server by using the IP address. Depending on your network configuration,
connecting by using the computer name might require additional steps to use your non-appliance DNS server to resolve the
name to the correct server. This step is not necessary when connecting with an IP address. For more information, see "Use a
DNS Forwarder to Resolve Non-Appliance DNS Names (Analytics Platform System)" in the Parallel Data Warehouse product
documentation.
user_name
A valid SQL Server authentication login name. Maximum number of characters is 128.
password
The login password. Maximum number of characters is 128.
batch_size
The maximum number of rows per batch. Parallel Data Warehouse sends rows in batches to the destination server.
batch_size is an integer greater than or equal to 0; the default is 0, which means no batch-size limit is applied.
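As a hedged illustration of where the option goes in the statement (the table names, connection values, and batch size below are hypothetical, not from this article):

```sql
CREATE REMOTE TABLE OrderReporting.Orders.MyOrdersTable
AT ( 'Data Source = 10.192.14.27, 1433; User ID = David; Password = e4n8@3;' )
WITH ( BATCH_SIZE = 5000 )   -- send at most 5000 rows per batch to the SMP server
AS SELECT * FROM dbo.FactOrders;
```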
WITH common_table_expression
Specifies a temporary named result set, known as a common table expression (CTE ). For more information, see
WITH common_table_expression (Transact-SQL ).
SELECT <select_criteria>
The query predicate that specifies which data will populate the new remote table. For information on the SELECT
statement, see SELECT (Transact-SQL).
Permissions
Requires:
SELECT permission on each object in the SELECT clause.
CREATE TABLE permission on the destination SMP database.
ALTER, INSERT, and SELECT permissions on the destination SMP schema.
Error Handling
If copying data to the remote database fails, Parallel Data Warehouse will abort the operation, log an error, and
attempt to delete the remote table. Parallel Data Warehouse does not guarantee a successful cleanup of the new
table.
Limitations and Restrictions
Remote Destination Server:
TCP is the default and only supported protocol for connecting to a remote server.
The destination server must be a non-appliance server. CREATE REMOTE TABLE cannot be used to copy
data from one appliance to another.
The CREATE REMOTE TABLE statement only creates new tables. Therefore, the new table cannot already
exist. The remote database and schema must already exist.
The remote server must have space available to store the data that is transferred from the appliance to the
SQL Server remote database.
SELECT statement:
The ORDER BY and TOP clauses are not supported in the select criteria.
CREATE REMOTE TABLE cannot be run inside an active transaction or when the AUTOCOMMIT OFF
setting is active for the session.
SET ROWCOUNT (Transact-SQL ) has no effect on this statement. To achieve a similar behavior, use TOP
(Transact-SQL ).
Locking Behavior
After creating the remote table, the destination table is not locked until the copy starts. Therefore, it is possible for
another process to delete the remote table after it is created and before the copy starts. When this occurs, Parallel
Data Warehouse will generate an error and the copy will fail.
Metadata
Use sys.dm_pdw_dms_workers (Transact-SQL ) to view the progress of copying the selected data to the remote
SMP server. Rows with type PARALLEL_COPY_READER contain this information.
Security
CREATE REMOTE TABLE uses SQL Server Authentication to connect to the remote SQL Server instance; it does
not use Windows Authentication.
The Parallel Data Warehouse external-facing network must be firewalled, with the exception of SQL Server ports,
administrative ports, and management ports.
To help prevent accidental data loss or corruption, the user account that is used to copy from the appliance to the
destination database should have only the minimum required permissions on the destination database.
Connection settings allow you to connect to the SMP SQL Server instance with SSL protecting user name and
password data, but with actual data being sent unencrypted in clear text. When this occurs, a malicious user could
intercept the CREATE REMOTE TABLE statement text, which contains the SQL Server user name and password to
log onto the SMP SQL Server instance. To avoid this risk, use data encryption on the connection to the SMP SQL
Server instance.
Examples
A. Creating a remote table
This example creates a SQL Server SMP remote table called MyOrdersTable on database OrderReporting and
schema Orders . The OrderReporting database is on a server named SQLA that listens on the default port 1433.
The connection to the server uses the credentials of the user David , whose password is e4n8@3 .
USE ssawPDW;
CREATE REMOTE TABLE OrderReporting.Orders.MyOrdersTable
AT ( 'Data Source = SQLA, 1433; User ID = David; Password = e4n8@3;' )
AS SELECT T1.* FROM OrderReporting.Orders.MyOrdersTable T1
JOIN OrderReporting.Orders.Customer T2
ON T1.CustomerID=T2.CustomerID OPTION (HASH JOIN);
CREATE RESOURCE POOL (Transact-SQL)
5/4/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a Resource Governor resource pool in SQL Server. A resource pool represents a subset of the physical
resources (memory, CPUs, and I/O) of an instance of the Database Engine. Resource Governor enables a database
administrator to distribute server resources among resource pools, up to a maximum of 64 pools. Resource
Governor is not available in every edition of SQL Server. For a list of features that are supported by the editions
of SQL Server, see Features Supported by the Editions of SQL Server 2016.
Transact-SQL Syntax Conventions.
Syntax
CREATE RESOURCE POOL pool_name
[ WITH
(
[ MIN_CPU_PERCENT = value ]
[ [ , ] MAX_CPU_PERCENT = value ]
[ [ , ] CAP_CPU_PERCENT = value ]
[ [ , ] AFFINITY {SCHEDULER =
AUTO
| ( <scheduler_range_spec> )
| NUMANODE = ( <NUMA_node_range_spec> )
} ]
[ [ , ] MIN_MEMORY_PERCENT = value ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MIN_IOPS_PER_VOLUME = value ]
[ [ , ] MAX_IOPS_PER_VOLUME = value ]
)
]
[;]
<scheduler_range_spec> ::=
{ SCHED_ID | SCHED_ID TO SCHED_ID }[,…n]
<NUMA_node_range_spec> ::=
{ NUMA_node_ID | NUMA_node_ID TO NUMA_node_ID }[,…n]
Arguments
pool_name
Is the user-defined name for the resource pool. pool_name is alphanumeric, can be up to 128 characters, must be
unique within an instance of SQL Server, and must comply with the rules for identifiers.
MIN_CPU_PERCENT =value
Specifies the guaranteed average CPU bandwidth for all requests in the resource pool when there is CPU
contention. value is an integer with a default setting of 0. The allowed range for value is from 0 through 100.
MAX_CPU_PERCENT =value
Specifies the maximum average CPU bandwidth that all requests in the resource pool will receive when there is CPU
contention. value is an integer with a default setting of 100. The allowed range for value is from 1 through 100.
CAP_CPU_PERCENT =value
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive. Limits the
maximum CPU bandwidth level to be the same as the specified value. value is an integer with a default setting of
100. The allowed range for value is from 1 through 100.
AFFINITY {SCHEDULER = AUTO | ( <scheduler_range_spec> ) | NUMANODE = (<NUMA_node_range_spec>)}
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Attach the resource pool to specific schedulers. The default value is AUTO.
AFFINITY SCHEDULER = ( <scheduler_range_spec> ) maps the resource pool to the SQL Server schedulers
identified by the given IDs. These IDs map to the values in the scheduler_id column in sys.dm_os_schedulers
(Transact-SQL).
When you use AFFINITY NUMANODE = ( <NUMA_node_range_spec> ), the resource pool is affinitized to the
SQL Server schedulers that map to the physical CPUs that correspond to the given NUMA node or range of
nodes. You can use the following Transact-SQL query to discover the mapping between the physical NUMA
configuration and the SQL Server scheduler IDs.
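The query referenced above is missing from this extraction. A sketch that recovers it from the DMVs the surrounding text names (joining sys.dm_os_nodes to sys.dm_os_schedulers) might look like the following; the scheduler_id filter to exclude hidden internal schedulers is an assumption:

```sql
-- Map each NUMA memory node to its CPU ids and visible scheduler ids.
SELECT osn.memory_node_id AS [numa_node_id],
       sc.cpu_id,
       sc.scheduler_id
FROM sys.dm_os_nodes AS osn
INNER JOIN sys.dm_os_schedulers AS sc
    ON osn.node_id = sc.parent_node_id
   AND sc.scheduler_id < 1048576;   -- exclude hidden/internal schedulers
```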
MIN_MEMORY_PERCENT =value
Specifies the minimum amount of memory reserved for this resource pool that cannot be shared with other
resource pools. value is an integer with a default setting of 0. The allowed range for value is from 0 through 100.
MAX_MEMORY_PERCENT =value
Specifies the total server memory that can be used by requests in this resource pool. value is an integer with a
default setting of 100. The allowed range for value is from 1 through 100.
MIN_IOPS_PER_VOLUME =value
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Specifies the minimum I/O operations per second (IOPS ) per disk volume to reserve for the resource pool. The
allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to indicate no minimum threshold
for the pool. The default is 0.
MAX_IOPS_PER_VOLUME =value
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Specifies the maximum I/O operations per second (IOPS ) per disk volume to allow for the resource pool. The
allowed range for value is from 0 through 2^31-1 (2,147,483,647). Specify 0 to set an unlimited threshold for the
pool. The default is 0.
If the MAX_IOPS_PER_VOLUME for a pool is set to 0, the pool is not governed at all and can take all the IOPS in
the system, even if other pools have MIN_IOPS_PER_VOLUME set. In this case, if you want the pool to be governed
for I/O, we recommend that you set its MAX_IOPS_PER_VOLUME to a high number (for example, the maximum
value 2^31-1).
Remarks
MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME specify the minimum and maximum reads or writes
per second. These reads or writes can be of any size and do not indicate minimum or maximum throughput.
The values for MAX_CPU_PERCENT and MAX_MEMORY_PERCENT must be greater than or equal to the
values for MIN_CPU_PERCENT and MIN_MEMORY_PERCENT, respectively.
CAP_CPU_PERCENT differs from MAX_CPU_PERCENT in that workloads associated with the pool can use CPU
capacity above the value of MAX_CPU_PERCENT if it is available, but not above the value of
CAP_CPU_PERCENT.
The total CPU percentage for each affinitized component (scheduler(s) or NUMA node(s)) should not exceed
100%.
Permissions
Requires CONTROL SERVER permission.
Examples
The following example shows how to create a resource pool named bigPool . This pool uses the default Resource
Governor settings.
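The code for this example is missing from this extraction. A minimal sketch consistent with the description (a pool named bigPool, all Resource Governor defaults) would be:

```sql
CREATE RESOURCE POOL bigPool;
GO
-- Apply the pending configuration change.
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```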
The following example sets the CAP_CPU_PERCENT to a hard cap of 30% and sets AFFINITY SCHEDULER to a range of
0 to 63, 128 to 191.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
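The code for this example is also missing. A hedged sketch matching the stated CAP_CPU_PERCENT and AFFINITY SCHEDULER settings follows; the pool name and the other option values are illustrative assumptions:

```sql
CREATE RESOURCE POOL PoolAdmin
WITH (
     MIN_CPU_PERCENT = 10,
     MAX_CPU_PERCENT = 20,
     CAP_CPU_PERCENT = 30,                       -- hard cap of 30% as described
     AFFINITY SCHEDULER = (0 TO 63, 128 TO 191), -- scheduler ranges as described
     MIN_MEMORY_PERCENT = 5,
     MAX_MEMORY_PERCENT = 15
);
```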
The following example sets MIN_IOPS_PER_VOLUME to <some value> and MAX_IOPS_PER_VOLUME to <some value>.
These values govern the physical I/O read and write operations that are available for the resource pool.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
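The code for this example is missing as well, and the article elides the actual values ("<some value>"). A sketch with placeholder values (20 and 100 are illustrative only):

```sql
CREATE RESOURCE POOL PoolAdmin
WITH (
    MIN_IOPS_PER_VOLUME = 20,   -- placeholder minimum reserved IOPS per volume
    MAX_IOPS_PER_VOLUME = 100   -- placeholder maximum IOPS per volume
);
```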
See Also
ALTER RESOURCE POOL (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL )
CREATE WORKLOAD GROUP (Transact-SQL )
ALTER WORKLOAD GROUP (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
Resource Governor Resource Pool
Create a Resource Pool
CREATE ROLE (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new database role in the current database.
Transact-SQL Syntax Conventions
Syntax
CREATE ROLE role_name [ AUTHORIZATION owner_name ]
Arguments
role_name
Is the name of the role to be created.
AUTHORIZATION owner_name
Is the database user or role that is to own the new role. If no user is specified, the role will be owned by the user
that executes CREATE ROLE.
Remarks
Roles are database-level securables. After you create a role, configure the database-level permissions of the role
by using GRANT, DENY, and REVOKE. To add members to a database role, use ALTER ROLE (Transact-SQL ). For
more information, see Database-Level Roles.
Database roles are visible in the sys.database_role_members and sys.database_principals catalog views.
For information about designing a permissions system, see Getting Started with Database Engine Permissions.
Caution
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL ).
Permissions
Requires CREATE ROLE permission on the database or membership in the db_securityadmin fixed database
role. When you use the AUTHORIZATION option, the following permissions are also required:
To assign ownership of a role to another user, requires IMPERSONATE permission on that user.
To assign ownership of a role to another role, requires membership in the recipient role or ALTER
permission on that role.
To assign ownership of a role to an application role, requires ALTER permission on the application role.
Examples
The following examples all use the AdventureWorks database.
A. Creating a database role that is owned by a database user
The following example creates the database role buyers that is owned by user BenMiller .
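The code for this example is missing from this extraction; a minimal statement matching the description (role buyers owned by user BenMiller) would be:

```sql
USE AdventureWorks;
GO
CREATE ROLE buyers AUTHORIZATION BenMiller;
GO
```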
See Also
Principals (Database Engine)
ALTER ROLE (Transact-SQL )
DROP ROLE (Transact-SQL )
EVENTDATA (Transact-SQL )
sp_addrolemember (Transact-SQL )
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )
Getting Started with Database Engine Permissions
CREATE ROUTE (Transact-SQL)
5/4/2018 • 7 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only)
Azure SQL Data Warehouse Parallel Data Warehouse
Adds a new route to the routing table for the current database. For outgoing messages, Service Broker determines
routing by checking the routing table in the local database. For messages on conversations that originate in
another instance, including messages to be forwarded, Service Broker checks the routes in msdb.
Transact-SQL Syntax Conventions
Syntax
CREATE ROUTE route_name
[ AUTHORIZATION owner_name ]
WITH
[ SERVICE_NAME = 'service_name', ]
[ BROKER_INSTANCE = 'broker_instance_identifier' , ]
[ LIFETIME = route_lifetime , ]
ADDRESS = 'next_hop_address'
[ , MIRROR_ADDRESS = 'next_hop_mirror_address' ]
[ ; ]
Arguments
route_name
Is the name of the route to create. A new route is created in the current database and owned by the principal
specified in the AUTHORIZATION clause. Server, database, and schema names cannot be specified. The
route_name must be a valid sysname.
AUTHORIZATION owner_name
Sets the owner of the route to the specified database user or role. The owner_name can be the name of any valid
user or role when the current user is a member of either the db_owner fixed database role or the sysadmin fixed
server role. Otherwise, owner_name must be the name of the current user, the name of a user that the current user
has IMPERSONATE permission for, or the name of a role to which the current user belongs. When this clause is
omitted, the route belongs to the current user.
WITH
Introduces the clauses that define the route being created.
SERVICE_NAME = 'service_name'
Specifies the name of the remote service that this route points to. The service_name must exactly match the name
the remote service uses. Service Broker uses a byte-by-byte comparison to match the service_name. In other
words, the comparison is case sensitive and does not consider the current collation. If the SERVICE_NAME is
omitted, this route matches any service name, but has lower priority for matching than a route that specifies a
SERVICE_NAME. A route with a service name of 'SQL/ServiceBroker/BrokerConfiguration' is a route to a
Broker Configuration Notice service. A route to this service might not specify a broker instance.
BROKER_INSTANCE = 'broker_instance_identifier'
Specifies the database that hosts the target service. The broker_instance_identifier parameter must be the broker
instance identifier for the remote database, which can be obtained by running the following query in the selected
database:
SELECT service_broker_guid
FROM sys.databases
WHERE database_id = DB_ID()
When the BROKER_INSTANCE clause is omitted, this route matches any broker instance. A route that matches
any broker instance has higher priority for matching than routes with an explicit broker instance when the
conversation does not specify a broker instance. For conversations that specify a broker instance, a route with a
broker instance has higher priority than a route that matches any broker instance.
LIFETIME =route_lifetime
Specifies the time, in seconds, that SQL Server retains the route in the routing table. At the end of the lifetime, the
route expires, and SQL Server no longer considers the route when choosing a route for a new conversation. If this
clause is omitted, the route_lifetime is NULL and the route never expires.
ADDRESS ='next_hop_address'
For SQL Database Managed Instance, ADDRESS must be local.
Specifies the network address for this route. The next_hop_address specifies a TCP/IP address in the following
format:
TCP://{ dns_name | netbios_name | ip_address } :port_number
The specified port_number must match the port number for the Service Broker endpoint of an instance of SQL
Server at the specified computer. This can be obtained by running the following query in the selected database:
SELECT tcpe.port
FROM sys.tcp_endpoints AS tcpe
INNER JOIN sys.service_broker_endpoints AS ssbe
ON ssbe.endpoint_id = tcpe.endpoint_id
WHERE ssbe.name = N'MyServiceBrokerEndpoint';
When the service is hosted in a mirrored database, you must also specify the MIRROR_ADDRESS for the other
instance that hosts a mirrored database. Otherwise, this route does not fail over to the mirror.
When a route specifies 'LOCAL' for the next_hop_address, the message is delivered to a service within the current
instance of SQL Server.
When a route specifies 'TRANSPORT' for the next_hop_address, the network address is determined based on the
network address in the name of the service. A route that specifies 'TRANSPORT' might not specify a service
name or broker instance.
MIRROR_ADDRESS ='next_hop_mirror_address'
Specifies the network address for a mirrored database with one mirrored database hosted at the
next_hop_address. The next_hop_mirror_address specifies a TCP/IP address in the following format:
TCP://{ dns_name | netbios_name | ip_address } : port_number
The specified port_number must match the port number for the Service Broker endpoint of an instance of SQL
Server at the specified computer. This can be obtained by running the following query in the selected database:
SELECT tcpe.port
FROM sys.tcp_endpoints AS tcpe
INNER JOIN sys.service_broker_endpoints AS ssbe
ON ssbe.endpoint_id = tcpe.endpoint_id
WHERE ssbe.name = N'MyServiceBrokerEndpoint';
When the MIRROR_ADDRESS is specified, the route must specify the SERVICE_NAME clause and the
BROKER_INSTANCE clause. A route that specifies 'LOCAL' or 'TRANSPORT' for the next_hop_address might
not specify a mirror address.
Remarks
The routing table that stores the routes is a metadata table that can be read through the sys.routes catalog view.
This catalog view can only be updated through the CREATE ROUTE, ALTER ROUTE, and DROP ROUTE
statements.
By default, the routing table in each user database contains one route. This route is named AutoCreatedLocal.
The route specifies 'LOCAL' for the next_hop_address and matches any service name and broker instance
identifier.
When a route specifies 'TRANSPORT' for the next_hop_address, the network address is determined based on the
name of the service. SQL Server can successfully process service names that begin with a network address in a
format that is valid for a next_hop_address.
The routing table can contain any number of routes that specify the same service, network address, and broker
instance identifier. In this case, Service Broker chooses a route using a procedure designed to find the most exact
match between the information specified in the conversation and the information in the routing table.
Service Broker does not remove expired routes from the routing table. An expired route can be made active using
the ALTER ROUTE statement.
A route cannot be a temporary object. Route names that start with # are allowed, but are permanent objects.
Permissions
Permission for creating a route defaults to members of the db_ddladmin or db_owner fixed database roles and
the sysadmin fixed server role.
Examples
A. Creating a TCP/IP route by using a DNS name
The following example creates a route to the service //Adventure-Works.com/Expenses . The route specifies that
messages to this service travel over TCP to port 1234 on the host identified by the DNS name
www.Adventure-Works.com . The target server delivers the messages upon arrival to the broker instance identified by
the unique identifier D8D4D268-00A3-4C62-8F91-634B89C1E315 .
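The code for this example is missing from this extraction. A sketch built from the details in the description (service name, DNS address and port, broker instance GUID); the route name ExpenseRoute is an assumption:

```sql
CREATE ROUTE ExpenseRoute
    WITH
    SERVICE_NAME = '//Adventure-Works.com/Expenses',
    BROKER_INSTANCE = 'D8D4D268-00A3-4C62-8F91-634B89C1E315',
    ADDRESS = 'TCP://www.Adventure-Works.com:1234';
```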
See Also
ALTER ROUTE (Transact-SQL )
DROP ROUTE (Transact-SQL )
EVENTDATA (Transact-SQL )
CREATE RULE (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an object called a rule. When bound to a column or an alias data type, a rule specifies the acceptable
values that can be inserted into that column.
IMPORTANT
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature. We recommend that you use check constraints instead. Check
constraints are created by using the CHECK keyword of CREATE TABLE or ALTER TABLE. For more information, see Unique
Constraints and Check Constraints.
A column or alias data type can have only one rule bound to it. However, a column can have both a rule and one or
more check constraints associated with it. When this is true, all restrictions are evaluated.
Transact-SQL Syntax Conventions
Syntax
CREATE RULE [ schema_name . ] rule_name
AS condition_expression
[ ; ]
Arguments
schema_name
Is the name of the schema to which the rule belongs.
rule_name
Is the name of the new rule. Rule names must comply with the rules for identifiers. Specifying the rule owner
name is optional.
condition_expression
Is the condition or conditions that define the rule. A rule can be any expression valid in a WHERE clause and can
include elements such as arithmetic operators, relational operators, and predicates (for example, IN, LIKE,
BETWEEN ). A rule cannot reference columns or other database objects. Built-in functions that do not reference
database objects can be included. User-defined functions cannot be used.
condition_expression includes one variable. The at sign (@) precedes each local variable. The expression refers to
the value entered with the UPDATE or INSERT statement. Any name or symbol can be used to represent the value
when creating the rule, but the first character must be the at sign (@).
NOTE
Avoid creating rules on expressions that use alias data types. Although rules can be created on expressions that use alias
data types, after binding the rules to columns or alias data types, the expressions fail to compile when referenced.
Remarks
CREATE RULE cannot be combined with other Transact-SQL statements in a single batch. Rules do not apply to
data already existing in the database at the time the rules are created, and rules cannot be bound to system data
types.
A rule can be created only in the current database. After you create a rule, execute sp_bindrule to bind the rule to
a column or to alias data type. A rule must be compatible with the column data type. For example, "@value LIKE
A%" cannot be used as a rule for a numeric column. A rule cannot be bound to a text, ntext, image,
varchar(max), nvarchar(max), varbinary(max), xml, CLR user-defined type, or timestamp column. A rule
cannot be bound to a computed column.
Enclose character and date constants with single quotation marks (') and precede binary constants with 0x. If the
rule is not compatible with the column to which it is bound, the SQL Server Database Engine returns an error
message when a value is inserted, but not when the rule is bound.
A rule bound to an alias data type is activated only when you try to insert a value into, or to update, a database
column of the alias data type. Because rules do not test variables, do not assign a value to an alias data type
variable that would be rejected by a rule that is bound to a column of the same data type.
To get a report on a rule, use sp_help. To display the text of a rule, execute sp_helptext with the rule name as the
parameter. To rename a rule, use sp_rename.
A rule must be dropped by using DROP RULE before a new one with the same name is created, and the rule must
be unbound by using sp_unbindrule before it is dropped. To unbind a rule from a column, use sp_unbindrule.
You can bind a new rule to a column or data type without unbinding the previous one; the new rule overrides the
previous one. Rules bound to columns always take precedence over rules bound to alias data types. Binding a rule
to a column replaces a rule already bound to the alias data type of that column. But binding a rule to a data type
does not replace a rule bound to a column of that alias data type. In summary: a new rule bound to an alias data
type replaces an old rule bound to that data type but leaves column-bound rules unchanged; a new rule bound to a
column replaces any old rule on that column, whether the old rule was bound to the column itself or to the
column's alias data type.
If a column has both a default and a rule associated with it, the default must fall within the domain defined by the
rule. A default that conflicts with a rule is never inserted. The SQL Server Database Engine generates an error
message each time it tries to insert such a default.
Permissions
To execute CREATE RULE, at a minimum, a user must have CREATE RULE permission in the current database and
ALTER permission on the schema in which the rule is being created.
Examples
A. Creating a rule with a range
The following example creates a rule that restricts the range of integers inserted into the column or columns to
which this rule is bound.
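The code for this example is missing from this extraction. A sketch of a range rule (the rule name and the bounds are illustrative assumptions):

```sql
CREATE RULE range_rule
AS
@range >= $1000 AND @range < $20000;
```

After creating the rule, bind it to a column or alias data type with sp_bindrule, as described in the Remarks section.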
See Also
ALTER TABLE (Transact-SQL )
CREATE DEFAULT (Transact-SQL )
CREATE TABLE (Transact-SQL )
DROP DEFAULT (Transact-SQL )
DROP RULE (Transact-SQL )
Expressions (Transact-SQL )
sp_bindrule (Transact-SQL )
sp_help (Transact-SQL )
sp_helptext (Transact-SQL )
sp_rename (Transact-SQL )
sp_unbindrule (Transact-SQL )
WHERE (Transact-SQL )
CREATE SCHEMA (Transact-SQL)
5/3/2018 • 6 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a schema in the current database. The CREATE SCHEMA transaction can also create tables and views
within the new schema, and set GRANT, DENY, or REVOKE permissions on those objects.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
CREATE SCHEMA schema_name_clause [ <schema_element> [ ...n ] ]
<schema_name_clause> ::=
{
schema_name
| AUTHORIZATION owner_name
| schema_name AUTHORIZATION owner_name
}
<schema_element> ::=
{
table_definition | view_definition | grant_statement |
revoke_statement | deny_statement
}
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
CREATE SCHEMA schema_name [ AUTHORIZATION owner_name ] [;]
Arguments
schema_name
Is the name by which the schema is identified within the database.
AUTHORIZATION owner_name
Specifies the name of the database-level principal that will own the schema. This principal may own other
schemas, and may not use the current schema as its default schema.
table_definition
Specifies a CREATE TABLE statement that creates a table within the schema. The principal executing this
statement must have CREATE TABLE permission on the current database.
view_definition
Specifies a CREATE VIEW statement that creates a view within the schema. The principal executing this statement
must have CREATE VIEW permission on the current database.
grant_statement
Specifies a GRANT statement that grants permissions on any securable except the new schema.
revoke_statement
Specifies a REVOKE statement that revokes permissions on any securable except the new schema.
deny_statement
Specifies a DENY statement that denies permissions on any securable except the new schema.
Remarks
NOTE
Statements that contain CREATE SCHEMA AUTHORIZATION but do not specify a name are permitted for backward
compatibility only. The statement does not cause an error, but it does not create a schema.
CREATE SCHEMA can create a schema, the tables and views it contains, and GRANT, REVOKE, or DENY
permissions on any securable in a single statement. This statement must be executed as a separate batch. Objects
created by the CREATE SCHEMA statement are created inside the schema that is being created.
CREATE SCHEMA transactions are atomic. If any error occurs during the execution of a CREATE SCHEMA
statement, none of the specified securables are created and no permissions are granted.
Securables to be created by CREATE SCHEMA can be listed in any order, except for views that reference other
views. In that case, the referenced view must be created before the view that references it.
Therefore, a GRANT statement can grant permission on an object before the object itself is created, or a CREATE
VIEW statement can appear before the CREATE TABLE statements that create the tables referenced by the view.
Also, CREATE TABLE statements can declare foreign keys to tables that are defined later in the CREATE SCHEMA
statement.
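The ordering flexibility described above can be sketched in a single CREATE SCHEMA batch; the schema, table, and principal names here are illustrative assumptions:

```sql
-- One atomic batch: creates the schema, a table inside it, and permissions on it.
-- The GRANT and DENY may legally precede the CREATE TABLE they refer to.
CREATE SCHEMA Sprockets AUTHORIZATION Annik
    GRANT SELECT ON SCHEMA::Sprockets TO Mandar
    DENY  SELECT ON SCHEMA::Sprockets TO Prasanna
    CREATE TABLE NineProngs (source int, cost int, partnumber int);
GO
```

If any statement in the batch fails, the schema, the table, and the permission changes are all rolled back together.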
NOTE
DENY and REVOKE are supported inside CREATE SCHEMA statements. DENY and REVOKE clauses will be executed in the
order in which they appear in the CREATE SCHEMA statement.
The principal that executes CREATE SCHEMA can specify another database principal as the owner of the schema
being created. This requires additional permissions, as described in the "Permissions" section later in this topic.
The new schema is owned by one of the following database-level principals: database user, database role, or
application role. Objects created within a schema are owned by the owner of the schema, and have a NULL
principal_id in sys.objects. Ownership of schema-contained objects can be transferred to any database-level
principal, but the schema owner always retains CONTROL permission on objects within the schema.
Caution
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL ).
Implicit Schema and User Creation
In some cases a user can use a database without having a database user account (a database principal in the
database). This can happen in the following situations:
A login has CONTROL SERVER privileges.
A Windows user does not have an individual database user account (a database principal in the database),
but accesses a database as a member of a Windows group which has a database user account (a database
principal for the Windows group).
When a user without a database user account creates an object without specifying an existing schema, a
database principal and default schema will be automatically created in the database for that user. The
created database principal and schema will have the same name as the name that user used when
connecting to SQL Server (the SQL Server authentication login name or the Windows user name).
This behavior is necessary to allow users that are based on Windows groups to create and own objects.
However, it can result in the unintentional creation of schemas and users. To avoid implicitly creating users
and schemas, explicitly create database principals and assign a default schema whenever possible, or
explicitly state an existing schema when creating objects in a database, using two- or three-part object
names.
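A sketch of the explicit approach recommended above (the Windows principal and schema names are assumptions):

```sql
-- Hypothetical sketch: explicitly create a database user for a Windows login
-- and assign a default schema, so no user or schema is created implicitly
-- when this principal later creates objects.
CREATE USER [CONTOSO\MaryB] FOR LOGIN [CONTOSO\MaryB]
    WITH DEFAULT_SCHEMA = Sales;
```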
NOTE
The implicit creation of an Azure Active Directory user is not possible on SQL Database. Because creating an Azure AD user
from an external provider must check the user's status in the AAD, creating the user fails with error 2760: The specified
schema name "<user_name@domain>" either does not exist or you do not have permission to use it, followed by
error 2759: CREATE SCHEMA failed due to previous errors. To work around these errors, create the Azure AD user from the
external provider first, and then rerun the statement that creates the object.
Deprecation Notice
CREATE SCHEMA statements that do not specify a schema name are currently supported for backward
compatibility. Such statements do not actually create a schema inside the database, but they do create tables and
views, and grant permissions. Principals do not need CREATE SCHEMA permission to execute this earlier form of
CREATE SCHEMA, because no schema is being created. This functionality will be removed from a future release
of SQL Server.
Permissions
Requires CREATE SCHEMA permission on the database.
To create an object specified within the CREATE SCHEMA statement, the user must have the corresponding
CREATE permission.
To specify another user as the owner of the schema being created, the caller must have IMPERSONATE
permission on that user. If a database role is specified as the owner, the caller must have one of the following:
membership in the role or ALTER permission on the role.
NOTE
For the backward-compatible syntax, no permissions to CREATE SCHEMA are checked because no schema is being created.
Examples
A. Creating a schema and granting permissions
The following example creates schema Sprockets owned by Annik that contains table NineProngs . The statement
grants SELECT to Mandar and denies SELECT to Prasanna . Note that Sprockets and NineProngs are created in a
single statement.
USE AdventureWorks2012;
GO
CREATE SCHEMA Sprockets AUTHORIZATION Annik
CREATE TABLE NineProngs (source int, cost int, partnumber int)
GRANT SELECT ON SCHEMA::Sprockets TO Mandar
DENY SELECT ON SCHEMA::Sprockets TO Prasanna;
GO
See Also
ALTER SCHEMA (Transact-SQL )
DROP SCHEMA (Transact-SQL )
GRANT (Transact-SQL )
DENY (Transact-SQL )
REVOKE (Transact-SQL )
CREATE VIEW (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.schemas (Transact-SQL )
Create a Database Schema
CREATE SEARCH PROPERTY LIST (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new search property list. A search property list is used to specify one or more search properties that
you want to include in a full-text index.
Transact-SQL Syntax Conventions
Syntax
CREATE SEARCH PROPERTY LIST new_list_name
[ FROM [ database_name. ] source_list_name ]
[ AUTHORIZATION owner_name ]
;
Arguments
new_list_name
Is the name of the new search property list. new_list_name is an identifier with a maximum of 128 characters.
new_list_name must be unique among all property lists in the current database, and conform to the rules for
identifiers. new_list_name will be used when the full-text index is created.
database_name
Is the name of the database where the property list specified by source_list_name is located. If not specified,
database_name defaults to the current database.
database_name must specify the name of an existing database. The login for the current connection must be
associated with an existing user ID in the database specified by database_name. You must also have the required
permissions on the database.
source_list_name
Specifies that the new property list is created by copying an existing property list from database_name. If
source_list_name does not exist, CREATE SEARCH PROPERTY LIST fails with an error. The search properties in
source_list_name are inherited by new_list_name.
AUTHORIZATION owner_name
Specifies the name of a user or role to own the property list. owner_name must either be the name of a role of
which the current user is a member, or the current user must have IMPERSONATE permission on owner_name.
If not specified, ownership is given to the current user.
NOTE
The owner can be changed by using the ALTER AUTHORIZATION Transact-SQL statement.
Remarks
NOTE
For information about property lists in general, see Search Document Properties with Search Property Lists.
By default, a new search property list is empty and you must alter it manually to add one or more search
properties. Alternatively, you can copy an existing search property list. In this case, the new list inherits the search
properties of its source, but you can alter the new list to add or remove search properties. Any properties in the
search property list at the time of the next full population are included in the full-text index.
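The copy scenario described above can be sketched as follows (the list and database names are assumptions):

```sql
-- Hypothetical sketch: create a new property list by copying an existing one
-- from another database; the copy inherits the source's search properties
-- and can then be altered independently.
CREATE SEARCH PROPERTY LIST MyNewPropertyList
    FROM AdventureWorks2012.DocumentPropertyList
    AUTHORIZATION dbo;
```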
A CREATE SEARCH PROPERTY LIST statement fails under any of the following conditions:
If the database specified by database_name does not exist.
If the list specified by source_list_name does not exist.
If you do not have the correct permissions.
To add or remove properties from a list
ALTER SEARCH PROPERTY LIST (Transact-SQL )
To drop a property list
DROP SEARCH PROPERTY LIST (Transact-SQL )
Permissions
Requires CREATE FULLTEXT CATALOG permissions in the current database and REFERENCES permissions on
any database from which you copy a source property list.
NOTE
REFERENCES permission is required to associate the list with a full-text index. CONTROL permission is required to add and
remove properties or drop the list. The property list owner can grant REFERENCES or CONTROL permissions on the list.
Users with CONTROL permission can also grant REFERENCES permission to other users.
Examples
A. Creating an empty property list and associating it with an index
The following example creates a new search property list named DocumentPropertyList . The example then uses
an ALTER FULLTEXT INDEX statement to associate the new property list with the full-text index of the
Production.Document table in the AdventureWorks database, without starting a population.
NOTE
For an example that adds several predefined, well-known search properties to this search property list, see ALTER SEARCH
PROPERTY LIST (Transact-SQL). After adding search properties to the list, the database administrator would need to use
another ALTER FULLTEXT INDEX statement with the START FULL POPULATION clause.
USE AdventureWorks2012;
GO
CREATE SEARCH PROPERTY LIST DocumentPropertyList;
GO
ALTER FULLTEXT INDEX ON Production.Document
SET SEARCH PROPERTY LIST DocumentPropertyList
WITH NO POPULATION;
GO
See Also
ALTER SEARCH PROPERTY LIST (Transact-SQL )
DROP SEARCH PROPERTY LIST (Transact-SQL )
sys.registered_search_properties (Transact-SQL )
sys.registered_search_property_lists (Transact-SQL )
sys.dm_fts_index_keywords_by_property (Transact-SQL )
Search Document Properties with Search Property Lists
Find Property Set GUIDs and Property Integer IDs for Search Properties
CREATE SECURITY POLICY (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a security policy for row level security.
Transact-SQL Syntax Conventions
Syntax
CREATE SECURITY POLICY [schema_name. ] security_policy_name
{ ADD [ FILTER | BLOCK ] } PREDICATE tvf_schema_name.security_predicate_function_name
( { column_name | expression } [ , …n] ) ON table_schema_name. table_name
[ <block_dml_operation> ] , [ , …n]
[ WITH ( STATE = { ON | OFF } [,] [ SCHEMABINDING = { ON | OFF } ] ) ]
[ NOT FOR REPLICATION ]
[;]
<block_dml_operation>
[ { AFTER { INSERT | UPDATE } }
| { BEFORE { UPDATE | DELETE } } ]
Arguments
security_policy_name
The name of the security policy. Security policy names must comply with the rules for identifiers and must be
unique within their schema in the database.
schema_name
Is the name of the schema to which the security policy belongs. schema_name is required because of schema
binding.
[ FILTER | BLOCK ]
The type of security predicate for the function being bound to the target table. FILTER predicates silently filter the
rows that are available to read operations. BLOCK predicates explicitly block write operations that violate the
predicate function.
tvf_schema_name.security_predicate_function_name
Is the inline table-valued function that will be used as a predicate and that will be enforced upon queries against a
target table. At most one security predicate can be defined for a particular DML operation against a particular
table. The inline table-valued function must have been created using the SCHEMABINDING option.
{ column_name | expression }
A column name or expression used as a parameter for the security predicate function. Any column on the target
table can be used. An expression can only include constants, built-in scalar functions, operators, and columns from
the target table. A column name or expression needs to be specified for each parameter of the function.
table_schema_name.table_name
Is the target table to which the security predicate will be applied. Multiple disabled security policies can target a
single table for a particular DML operation, but only one can be enabled at any given time.
<block_dml_operation> The particular DML operation for which the block predicate will be applied. AFTER
specifies that the predicate will be evaluated on the values of the rows after the DML operation was performed
(INSERT or UPDATE ). BEFORE specifies that the predicate will be evaluated on the values of the rows before the
DML operation is performed (UPDATE or DELETE ). If no operation is specified, the predicate will apply to all
operations.
[ STATE = { ON | OFF } ]
Enables or disables the security policy from enforcing its security predicates against the target tables. If not
specified the security policy being created is enabled.
[ SCHEMABINDING = { ON | OFF } ]
Indicates whether all predicate functions in the policy must be created with the SCHEMABINDING option. By
default, all functions must be created with SCHEMABINDING.
NOT FOR REPLICATION
Indicates that the security policy should not be executed when a replication agent modifies the target object. For
more information, see Control the Behavior of Triggers and Constraints During Synchronization (Replication
Transact-SQL Programming).
Remarks
When using predicate functions with memory-optimized tables, you must include SCHEMABINDING and use
the WITH NATIVE_COMPILATION compilation hint.
Block predicates are evaluated after the corresponding DML operation is executed. Therefore, a READ
UNCOMMITTED query can see transient values that will be rolled back.
Permissions
Requires the ALTER ANY SECURITY POLICY permission and ALTER permission on the schema.
Additionally the following permissions are required for each predicate that is added:
SELECT and REFERENCES permissions on the function being used as a predicate.
REFERENCES permission on the target table being bound to the policy.
REFERENCES permission on every column from the target table used as arguments.
Examples
The following examples demonstrate the use of the CREATE SECURITY POLICY syntax. For an example of a
complete security policy scenario, see Row -Level Security.
A. Creating a security policy
The following syntax creates a security policy with a filter predicate for the Customer table, and leaves the security
policy disabled.
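A sketch of such a statement (the schema and predicate function names are assumptions):

```sql
-- Hypothetical sketch: bind a filter predicate function to dbo.Customer
-- and leave the policy disabled (STATE = OFF).
CREATE SECURITY POLICY [FederatedSecurityPolicy]
    ADD FILTER PREDICATE [rls].[fn_securitypredicate]([CustomerId])
    ON [dbo].[Customer]
    WITH (STATE = OFF);
```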
See Also
Row -Level Security
ALTER SECURITY POLICY (Transact-SQL )
DROP SECURITY POLICY (Transact-SQL )
sys.security_policies (Transact-SQL )
sys.security_predicates (Transact-SQL )
CREATE SELECTIVE XML INDEX (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new selective XML index on the specified table and XML column. Selective XML indexes improve the
performance of XML indexing and querying by indexing only the subset of nodes that you typically query. You can
also create secondary selective XML indexes. For information, see Create, Alter, and Drop Secondary Selective
XML Indexes.
Transact-SQL Syntax Conventions
Syntax
CREATE SELECTIVE XML INDEX index_name
ON <table_object> (xml_column_name)
[WITH XMLNAMESPACES (<xmlnamespace_list>)]
FOR (<promoted_node_path_list>)
[WITH (<index_options>)]
<table_object> ::=
{ [database_name. [schema_name ] . | schema_name. ] table_name }
<promoted_node_path_list> ::=
<named_promoted_node_path_item> [, <promoted_node_path_list>]
<named_promoted_node_path_item> ::=
<path_name> = <promoted_node_path_item>
<promoted_node_path_item>::=
<xquery_node_path_item> | <sql_values_node_path_item>
<xquery_node_path_item> ::=
<node_path> [AS XQUERY <xsd_type_or_node_hint>] [SINGLETON]
<xsd_type_or_node_hint> ::=
[<xsd_type>] [MAXLENGTH(x)] | node()
<sql_values_node_path_item> ::=
<node_path> AS SQL <sql_type> [SINGLETON]
<node_path> ::=
character_string_literal
<xsd_type> ::=
character_string_literal
<sql_type> ::=
identifier
<path_name> ::=
identifier
<xmlnamespace_list> ::=
<xmlnamespace_item> [, <xmlnamespace_list>]
<xmlnamespace_item> ::=
<xmlnamespace_uri> AS <xmlnamespace_prefix>
<xml_namespace_uri> ::=
character_string_literal
<xml_namespace_prefix> ::=
identifier
<index_options> ::=
(
| PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
)
Arguments
index_name
Is the name of the new index to create. Index names must be unique within a table, but do not have to be unique
within a database. Index names must follow the rules of identifiers.
<table_object> Is the table that contains the XML column to index. Use one of the following formats:
database_name.schema_name.table_name
database_name..table_name
schema_name.table_name
table_name
xml_column_name
Is the name of the XML column that contains the paths to index.
[WITH XMLNAMESPACES (<xmlnamespace_list>)] Is the list of namespaces used by the paths to index.
For information about the syntax of the WITH XMLNAMESPACES clause, see WITH XMLNAMESPACES
(Transact-SQL ).
FOR (<promoted_node_path_list>) Is the list of paths to index with optional optimization hints. For
information about the paths and the optimization hints that you can specify in the CREATE or ALTER
statement, see Specify Paths and Optimization Hints for Selective XML Indexes.
WITH <index_options> For information about the index options, see CREATE XML INDEX (Selective XML
Indexes).
Best Practices
Create a selective XML index instead of an ordinary XML index in most cases for better performance and more
efficient storage. However, a selective XML index is not recommended when either of the following conditions is
true:
You need to map a large number of node paths.
You need to support queries for unknown elements or elements in an unknown location.
Security
Permissions
Requires ALTER permission on the table or view. User must be a member of the sysadmin fixed server role or the
db_ddladmin and db_owner fixed database roles.
Examples
The following example shows the syntax for creating a selective XML index. It also shows several variations of the
syntax for describing the paths to be indexed, with optional optimization hints.
CREATE TABLE Tbl ( id INT PRIMARY KEY, xmlcol XML );
GO
CREATE SELECTIVE XML INDEX sxi_index
ON Tbl(xmlcol)
FOR(
pathab = '/a/b' as XQUERY 'node()',
pathabc = '/a/b/c' as XQUERY 'xs:double',
pathdtext = '/a/b/d/text()' as XQUERY 'xs:string' MAXLENGTH(200) SINGLETON,
pathabe = '/a/b/e' as SQL NVARCHAR(100)
);
See Also
Selective XML Indexes (SXI)
Create, Alter, and Drop Selective XML Indexes
Specify Paths and Optimization Hints for Selective XML Indexes
CREATE SEQUENCE (Transact-SQL)
5/3/2018 • 10 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a sequence object and specifies its properties. A sequence is a user-defined, schema-bound object that
generates a sequence of numeric values according to the specification with which the sequence was created. The
sequence of numeric values is generated in an ascending or descending order at a defined interval and can be
configured to restart (cycle) when exhausted. Sequences, unlike identity columns, are not associated with specific
tables. Applications refer to a sequence object to retrieve its next value. The relationship between sequences and
tables is controlled by the application. User applications can reference a sequence object and coordinate the
values across multiple rows and tables.
Unlike identity columns values that are generated when rows are inserted, an application can obtain the next
sequence number without inserting the row by calling the NEXT VALUE FOR function. Use
sp_sequence_get_range to get multiple sequence numbers at once.
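For instance, both retrieval patterns can be sketched as follows (the sequence name dbo.OrderNumbers is an assumption):

```sql
-- Hypothetical sketch: fetch the next number without inserting a row.
SELECT NEXT VALUE FOR dbo.OrderNumbers AS NextOrderNumber;

-- Reserve a block of 10 numbers in a single call.
DECLARE @first sql_variant;
EXEC sys.sp_sequence_get_range
    @sequence_name = N'dbo.OrderNumbers',
    @range_size = 10,
    @range_first_value = @first OUTPUT;
SELECT @first AS FirstValueInRange;
```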
For information and scenarios that use both CREATE SEQUENCE and the NEXT VALUE FOR function, see
Sequence Numbers.
Transact-SQL Syntax Conventions
Syntax
CREATE SEQUENCE [schema_name . ] sequence_name
[ AS [ built_in_integer_type | user-defined_integer_type ] ]
[ START WITH <constant> ]
[ INCREMENT BY <constant> ]
[ { MINVALUE [ <constant> ] } | { NO MINVALUE } ]
[ { MAXVALUE [ <constant> ] } | { NO MAXVALUE } ]
[ CYCLE | { NO CYCLE } ]
[ { CACHE [ <constant> ] } | { NO CACHE } ]
[ ; ]
Arguments
sequence_name
Specifies the unique name by which the sequence is known in the database. Type is sysname.
[ built_in_integer_type | user-defined_integer_type ]
A sequence can be defined as any integer type. The following types are allowed.
tinyint - Range 0 to 255
smallint - Range -32,768 to 32,767
int - Range -2,147,483,648 to 2,147,483,647
bigint - Range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
decimal and numeric with a scale of 0.
Any user-defined data type (alias type) that is based on one of the allowed types.
If no data type is provided, the bigint data type is used as the default.
START WITH <constant>
The first value returned by the sequence object. The START value must be a value less than or equal to the
maximum and greater than or equal to the minimum value of the sequence object. The default start value
for a new sequence object is the minimum value for an ascending sequence object and the maximum value
for a descending sequence object.
INCREMENT BY <constant>
Value used to increment (or decrement if negative) the value of the sequence object for each call to the
NEXT VALUE FOR function. If the increment is a negative value, the sequence object is descending;
otherwise, it is ascending. The increment cannot be 0. The default increment for a new sequence object is 1.
[ MINVALUE <constant> | NO MINVALUE ]
Specifies the bounds for the sequence object. The default minimum value for a new sequence object is the
minimum value of the data type of the sequence object. This is zero for the tinyint data type and a
negative number for all other data types.
[ MAXVALUE <constant> | NO MAXVALUE ]
Specifies the bounds for the sequence object. The default maximum value for a new sequence object is the
maximum value of the data type of the sequence object.
[ CYCLE | NO CYCLE ]
Property that specifies whether the sequence object should restart from the minimum value (or maximum
for descending sequence objects) or throw an exception when its minimum or maximum value is exceeded.
The default cycle option for new sequence objects is NO CYCLE.
Note that cycling restarts from the minimum or maximum value, not from the start value.
[ CACHE [<constant> ] | NO CACHE ]
Increases performance for applications that use sequence objects by minimizing the number of disk IOs
that are required to generate sequence numbers. Defaults to CACHE.
For example, if a cache size of 50 is chosen, SQL Server does not keep 50 individual values cached. It only
caches the current value and the number of values left in the cache. This means that the amount of
memory required to store the cache is always two instances of the data type of the sequence object.
NOTE
If the cache option is enabled without specifying a cache size, the Database Engine will select a size. However, users should
not rely upon the selection being consistent. Microsoft might change the method of calculating the cache size without
notice.
When created with the CACHE option, an unexpected shutdown (such as a power failure) may result in the loss of
sequence numbers remaining in the cache.
General Remarks
Sequence numbers are generated outside the scope of the current transaction. They are consumed whether the
transaction using the sequence number is committed or rolled back.
Cache management
To improve performance, SQL Server pre-allocates the number of sequence numbers specified by the CACHE
argument.
For example, suppose a new sequence is created with a starting value of 1 and a cache size of 15. When the first value is
needed, values 1 through 15 are made available from memory. The last cached value (15) is written to the system
tables on the disk. When all 15 numbers are used, the next request (for number 16) will cause the cache to be
allocated again. The new last cached value (30) will be written to the system tables.
If the Database Engine is stopped after you use 22 numbers, the next intended sequence number in memory (23)
is written to the system tables, replacing the previously stored number.
After SQL Server restarts and a sequence number is needed, the starting number is read from the system tables
(23). The cache amount of 15 numbers (23-37) is allocated to memory and the next non-cache number (38) is
written to the system tables.
If the Database Engine stops abnormally for an event such as a power failure, the sequence restarts with the
number read from system tables (38). Any sequence numbers allocated to memory (but never requested by a
user or application) are lost. This functionality may leave gaps, but guarantees that the same value will never be
issued two times for a single sequence object unless it is defined as CYCLE or is manually restarted.
The cache is maintained in memory by tracking the current value (the last value issued) and the number of values
left in the cache. Therefore, the amount of memory used by the cache is always two instances of the data type of
the sequence object.
Setting the cache argument to NO CACHE writes the current sequence value to the system tables every time that
a sequence is used. This might slow performance by increasing disk access, but reduces the chance of unintended
gaps. Gaps can still occur if numbers are requested using the NEXT VALUE FOR or sp_sequence_get_range
functions, but then the numbers are either not used or are used in uncommitted transactions.
When a sequence object uses the CACHE option, if you restart the sequence object, or alter the INCREMENT,
CYCLE, MINVALUE, MAXVALUE, or the cache size properties, it will cause the cache to be written to the
system tables before the change occurs. Then the cache is reloaded starting with the current value (i.e. no
numbers are skipped). Changing the cache size takes effect immediately.
CACHE option when cached values are available
The following process occurs every time that a sequence object is requested to generate the next value for the
CACHE option if there are unused values available in the in-memory cache for the sequence object.
1. The next value for the sequence object is calculated.
2. The new current value for the sequence object is updated in memory.
3. The calculated value is returned to the calling statement.
CACHE option when the cache is exhausted
The following process occurs every time a sequence object is requested to generate the next value for the
CACHE option if the cache has been exhausted:
1. The next value for the sequence object is calculated.
2. The last value for the new cache is calculated.
3. The system table row for the sequence object is locked, and the value calculated in step 2 (the last value) is
written to the system table. A cache-exhausted xevent is fired to notify the user of the new persisted value.
NO CACHE option
The following process occurs every time that a sequence object is requested to generate the next value for
the NO CACHE option:
1. The next value for the sequence object is calculated.
2. The new current value for the sequence object is written to the system table.
3. The calculated value is returned to the calling statement.
Metadata
For information about sequences, query sys.sequences.
Security
Permissions
Requires CREATE SEQUENCE, ALTER, or CONTROL permission on the SCHEMA.
Members of the db_owner and db_ddladmin fixed database roles can create, alter, and drop sequence
objects.
Members of the db_owner and db_datawriter fixed database roles can update sequence objects by causing
them to generate numbers.
The following example grants the user AdventureWorks\Larry permission to create sequences in the Test
schema.
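Such a grant would presumably be along these lines:

```sql
-- Grant AdventureWorks\Larry the ability to create sequences in the Test schema.
GRANT CREATE SEQUENCE ON SCHEMA::Test TO [AdventureWorks\Larry];
```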
Ownership of a sequence object can be transferred by using the ALTER AUTHORIZATION statement.
If a sequence uses a user-defined data type, the creator of the sequence must have REFERENCES permission on
the type.
Audit
To audit CREATE SEQUENCE, monitor the SCHEMA_OBJECT_CHANGE_GROUP.
Examples
For examples of creating sequences and using the NEXT VALUE FOR function to generate sequence numbers,
see Sequence Numbers.
Most of the following examples create sequence objects in a schema named Test.
To create the Test schema, execute the following statement.
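A sketch consistent with the metadata values shown below (the sequence name TestSequence is an assumption):

```sql
-- Create the Test schema used by the examples that follow.
CREATE SCHEMA Test;
GO

-- A sequence created with all defaults is a bigint that starts at the
-- minimum value of bigint and increments by 1.
CREATE SEQUENCE Test.TestSequence;
GO

-- Inspect the sequence metadata.
SELECT start_value, increment, minimum_value, maximum_value,
       is_cycling, is_cached, current_value
FROM sys.sequences
WHERE name = 'TestSequence';
```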
start_value      -9223372036854775808
increment        1
minimum_value    -9223372036854775808
maximum_value    9223372036854775807
is_cycling       0
is_cached        1
current_value    -9223372036854775808
Execute the following statement to see the first value, which is the START WITH value of 125.
Execute the statement three more times to return 150, 175, and 200.
Execute the statement again to see how the start value cycles back to the MINVALUE option of 100.
Execute the following code to confirm the cache size and see the current value.
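The statements these instructions describe are consistent with a small cycling sequence like the following sketch (the sequence name and CACHE size are assumptions):

```sql
-- Hypothetical reconstruction: a small decimal sequence that cycles.
CREATE SEQUENCE Test.DecSeq
    AS decimal(3,0)
    START WITH 125
    INCREMENT BY 25
    MINVALUE 100
    MAXVALUE 200
    CYCLE
    CACHE 3;
GO

-- First call returns 125; three more calls return 150, 175, and 200;
-- the next call cycles back to the MINVALUE of 100.
SELECT NEXT VALUE FOR Test.DecSeq;

-- Confirm the cache size and see the current value.
SELECT cache_size, current_value FROM sys.sequences WHERE name = 'DecSeq';
```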
See Also
ALTER SEQUENCE (Transact-SQL )
DROP SEQUENCE (Transact-SQL )
NEXT VALUE FOR (Transact-SQL )
Sequence Numbers
CREATE SERVER AUDIT (Transact-SQL)
5/3/2018 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database (Managed Instance only) Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a server audit object using SQL Server Audit. For more information, see SQL Server Audit (Database
Engine).
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
CREATE SERVER AUDIT audit_name
{
TO { [ FILE (<file_options> [ , ...n ] ) ] | APPLICATION_LOG | SECURITY_LOG }
[ WITH ( <audit_options> [ , ...n ] ) ]
[ WHERE <predicate_expression> ]
}
[ ; ]
<file_options>::=
{
FILEPATH = 'os_file_path'
[ , MAXSIZE = { max_size { MB | GB | TB } | UNLIMITED } ]
[ , { MAX_ROLLOVER_FILES = { integer | UNLIMITED } } | { MAX_FILES = integer } ]
[ , RESERVE_DISK_SPACE = { ON | OFF } ]
}
<audit_options>::=
{
[ QUEUE_DELAY = integer ]
[ , ON_FAILURE = { CONTINUE | SHUTDOWN | FAIL_OPERATION } ]
[ , AUDIT_GUID = uniqueidentifier ]
}
<predicate_expression>::=
{
[ NOT ] <predicate_factor>
[ { AND | OR } [ NOT ] { <predicate_factor> } ]
[,...n ]
}
<predicate_factor>::=
event_field_name { = | <> | != | > | >= | < | <= } { number | 'string' }
Arguments
TO { FILE | APPLICATION_LOG | SECURITY_LOG }
Determines the location of the audit target. The options are a binary file, the Windows Application log, or the
Windows Security log. SQL Server cannot write to the Windows Security log without configuring additional
settings in Windows. For more information, see Write SQL Server Audit Events to the Security Log.
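For instance, an audit writing to a binary file target might be declared like this sketch (the audit name, path, and option values are assumptions):

```sql
-- Hypothetical sketch: a server audit writing rollover files to disk.
CREATE SERVER AUDIT HIPAA_Audit
    TO FILE ( FILEPATH = 'C:\SQLAudit\',
              MAXSIZE = 100 MB,
              MAX_ROLLOVER_FILES = 10 )
    WITH ( QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE );
GO
-- Server audits are created in a disabled state; enable explicitly.
ALTER SERVER AUDIT HIPAA_Audit WITH (STATE = ON);
```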
FILEPATH ='os_file_path'
The path of the audit log. The file name is generated based on the audit name and audit GUID.
MAXSIZE = { max_size }
Specifies the maximum size to which the audit file can grow. The max_size value must be an integer followed by
MB, GB, TB, or UNLIMITED. The minimum size that you can specify for max_size is 2 MB and the maximum is
2,147,483,647 TB. When UNLIMITED is specified, the file grows until the disk is full. (0 also indicates
UNLIMITED.) Specifying a value lower than 2 MB raises the error MSG_MAXSIZE_TOO_SMALL. The default
value is UNLIMITED.
MAX_ROLLOVER_FILES ={ integer | UNLIMITED }
Specifies the maximum number of files to retain in the file system in addition to the current file. The
MAX_ROLLOVER_FILES value must be an integer or UNLIMITED. The default value is UNLIMITED. This
parameter is evaluated whenever the audit restarts (which can happen when the instance of the Database Engine
restarts or when the audit is turned off and then on again) or when a new file is needed because the MAXSIZE
has been reached. When MAX_ROLLOVER_FILES is evaluated, if the number of files exceeds the
MAX_ROLLOVER_FILES setting, the oldest file is deleted. As a result, when the setting of MAX_ROLLOVER_FILES
is 0 a new file is created each time the MAX_ROLLOVER_FILES setting is evaluated. Only one file is automatically
deleted when MAX_ROLLOVER_FILES setting is evaluated, so when the value of MAX_ROLLOVER_FILES is
decreased, the number of files does not shrink unless old files are manually deleted. The maximum number of
files that can be specified is 2,147,483,647.
MAX_FILES =integer
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the maximum number of audit files that can be created. Does not rollover to the first file when the limit
is reached. When the MAX_FILES limit is reached, any action that causes additional audit events to be generated,
fails with an error.
RESERVE_DISK_SPACE = { ON | OFF }
This option pre-allocates the file on the disk to the MAXSIZE value. It applies only if MAXSIZE is not equal to
UNLIMITED. The default value is OFF.
QUEUE_DELAY =integer
Determines the time, in milliseconds, that can elapse before audit actions are forced to be processed. A value of 0
indicates synchronous delivery. The minimum settable queue delay value is 1000 (1 second), which is the default.
The maximum is 2,147,483,647 (2,147,483.647 seconds or 24 days, 20 hours, 31 minutes, 23.647 seconds).
Specifying an invalid number raises the MSG_INVALID_QUEUE_DELAY error.
ON_FAILURE = { CONTINUE | SHUTDOWN | FAIL_OPERATION }
Indicates whether the instance writing to the target should fail, continue, or stop SQL Server if the target cannot
write to the audit log. The default value is CONTINUE.
CONTINUE
SQL Server operations continue. Audit records are not retained. The audit continues to attempt to log events and
resumes if the failure condition is resolved. Selecting the continue option can allow unaudited activity, which
could violate your security policies. Use this option when continuing operation of the Database Engine is more
important than maintaining a complete audit.
SHUTDOWN
Forces the instance of SQL Server to shut down, if SQL Server fails to write data to the audit target for any
reason. The login executing the CREATE SERVER AUDIT statement must have the SHUTDOWN permission within SQL
Server. The shutdown behavior persists even if the SHUTDOWN permission is later revoked from the executing
login. If the user does not have this permission, the statement fails and the audit is not created. Use this
option when an audit failure could compromise the security or integrity of the system. For more information, see
SHUTDOWN.
FAIL_OPERATION
Database actions fail if they cause audited events. Actions that do not cause audited events can continue, but no
audited events can occur. The audit continues to attempt to log events and resumes if the failure condition is
resolved. Use this option when maintaining a complete audit is more important than full access to the Database
Engine.
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
AUDIT_GUID =uniqueidentifier
To support scenarios such as database mirroring, an audit needs a specific GUID that matches the GUID found in
the mirrored database. The GUID cannot be modified after the audit has been created.
predicate_expression
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the predicate expression used to determine if an event should be processed or not. Predicate
expressions are limited to 3000 characters, which limits string arguments.
event_field_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Is the name of the event field that identifies the predicate source. Audit fields are described in sys.fn_get_audit_file
(Transact-SQL). All fields can be filtered except file_name, audit_file_offset, and event_time.
NOTE
While the action_id and class_type fields are of type varchar in sys.fn_get_audit_file, they can only be used with
numbers when they are a predicate source for filtering. To get the list of values to be used with class_type, execute the
following query:
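The query itself was lost in conversion; a minimal sketch, assuming the column names exposed by the sys.dm_audit_class_type_map view listed under See Also, is:

SELECT class_type, class_type_desc
FROM sys.dm_audit_class_type_map;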
number
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Is any numeric type including decimal. Limitations are the lack of available physical memory or a number that is
too large to be represented as a 64-bit integer.
' string '
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Either an ANSI or Unicode string as required by the predicate compare. No implicit string type conversion is
performed for the predicate compare functions. Passing the wrong type results in an error.
Remarks
When a server audit is created, it is in a disabled state.
The CREATE SERVER AUDIT statement is in a transaction's scope. If the transaction is rolled back, the statement
is also rolled back.
Permissions
To create, alter, or drop a server audit, principals require the ALTER ANY SERVER AUDIT or the CONTROL
SERVER permission.
When you are saving audit information to a file, to help prevent tampering, restrict access to the file location.
Examples
A. Creating a server audit with a file target
The following example creates a server audit called HIPPA_Audit with a binary file as the target and no options.
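The code listing was lost in conversion; a statement along these lines matches the description (the file path is a placeholder and must point to a directory the SQL Server service account can write to):

USE master;
GO
CREATE SERVER AUDIT HIPPA_Audit
    TO FILE ( FILEPATH = N'\\SQLPROD_1\Audit\' );
GO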
B. Creating a server audit with a Windows Application log target with options
The following example creates a server audit called HIPPA_Audit with the target set for the Windows Application
log. The queue is written every second and shuts down the SQL Server engine on failure.
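A sketch matching that description, using the QUEUE_DELAY and ON_FAILURE options documented above:

USE master;
GO
CREATE SERVER AUDIT HIPPA_Audit
    TO APPLICATION_LOG
    WITH ( QUEUE_DELAY = 1000, ON_FAILURE = SHUTDOWN );
GO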
See Also
ALTER SERVER AUDIT (Transact-SQL)
DROP SERVER AUDIT (Transact-SQL)
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL)
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL)
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL)
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sys.fn_get_audit_file (Transact-SQL)
sys.server_audits (Transact-SQL)
sys.server_file_audits (Transact-SQL)
sys.server_audit_specifications (Transact-SQL)
sys.server_audit_specification_details (Transact-SQL)
sys.database_audit_specifications (Transact-SQL)
sys.database_audit_specification_details (Transact-SQL)
sys.dm_server_audit_status (Transact-SQL)
sys.dm_audit_actions (Transact-SQL)
sys.dm_audit_class_type_map (Transact-SQL)
Create a Server Audit and Server Audit Specification
CREATE SERVER AUDIT SPECIFICATION (Transact-
SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a server audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions
Syntax
CREATE SERVER AUDIT SPECIFICATION audit_specification_name
FOR SERVER AUDIT audit_name
{
{ ADD ( { audit_action_group_name } )
} [, ...n]
[ WITH ( STATE = { ON | OFF } ) ]
}
[ ; ]
Arguments
audit_specification_name
Name of the server audit specification.
audit_name
Name of the audit to which this specification is applied.
audit_action_group_name
Name of a group of server-level auditable actions. For a list of Audit Action Groups, see SQL Server Audit Action
Groups and Actions.
WITH ( STATE = { ON | OFF } )
Enables or disables the audit from collecting records for this audit specification.
Remarks
An audit must exist before creating a server audit specification for it. When a server audit specification is created,
it is in a disabled state.
Permissions
Users with the ALTER ANY SERVER AUDIT permission can create server audit specifications and bind them to
any audit.
After a server audit specification is created, it can be viewed by principals with the CONTROL SERVER or
ALTER ANY SERVER AUDIT permission, the sysadmin account, or principals having explicit access to the audit.
Examples
The following example creates a server audit specification called HIPPA_Audit_Specification that audits failed
logins for a SQL Server audit called HIPPA_Audit .
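The code listing was lost in conversion; a sketch matching that description, using the FAILED_LOGIN_GROUP server-level audit action group:

CREATE SERVER AUDIT SPECIFICATION HIPPA_Audit_Specification
FOR SERVER AUDIT HIPPA_Audit
    ADD ( FAILED_LOGIN_GROUP );
GO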
For a full example about how to create an audit, see SQL Server Audit (Database Engine).
See Also
CREATE SERVER AUDIT (Transact-SQL)
ALTER SERVER AUDIT (Transact-SQL)
DROP SERVER AUDIT (Transact-SQL)
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL)
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL)
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sys.fn_get_audit_file (Transact-SQL)
sys.server_audits (Transact-SQL)
sys.server_file_audits (Transact-SQL)
sys.server_audit_specifications (Transact-SQL)
sys.server_audit_specification_details (Transact-SQL)
sys.database_audit_specifications (Transact-SQL)
sys.database_audit_specification_details (Transact-SQL)
sys.dm_server_audit_status (Transact-SQL)
sys.dm_audit_actions (Transact-SQL)
Create a Server Audit and Server Audit Specification
CREATE SERVER ROLE (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new user-defined server role.
Transact-SQL Syntax Conventions
Syntax
CREATE SERVER ROLE role_name [ AUTHORIZATION server_principal ]
Arguments
role_name
Is the name of the server role to be created.
AUTHORIZATION server_principal
Is the login that will own the new server role. If no login is specified, the server role will be owned by the login
that executes CREATE SERVER ROLE.
Remarks
Server roles are server-level securables. After you create a server role, configure the server-level permissions of
the role by using GRANT, DENY, and REVOKE. To add logins to or remove logins from a server role, use ALTER
SERVER ROLE (Transact-SQL). To drop a server role, use DROP SERVER ROLE (Transact-SQL). For more
information, see sys.server_principals (Transact-SQL).
You can view the server roles by querying the sys.server_role_members and sys.server_principals catalog views.
Server roles cannot be granted permission on database-level securables. To create database roles, see CREATE
ROLE (Transact-SQL).
For information about designing a permissions system, see Getting Started with Database Engine Permissions.
Permissions
Requires CREATE SERVER ROLE permission or membership in the sysadmin fixed server role.
Also requires IMPERSONATE on the server_principal for logins, ALTER permission for server roles used as the
server_principal, or membership in a Windows group that is used as the server_principal.
This fires the Audit Server Principal Management event with the object type set to server role and the event type
set to add.
When you use the AUTHORIZATION option to assign server role ownership, the following permissions are also
required:
To assign ownership of a server role to another login, requires IMPERSONATE permission on that login.
To assign ownership of a server role to another server role, requires membership in the recipient server
role or ALTER permission on that server role.
Examples
A. Creating a server role that is owned by a login
The following example creates the server role buyers that is owned by login BenMiller .
USE master;
CREATE SERVER ROLE buyers AUTHORIZATION BenMiller;
GO
B. Creating a server role that is owned by a fixed server role
The following example creates the server role auditors that is owned by the securityadmin fixed server role.
USE master;
CREATE SERVER ROLE auditors AUTHORIZATION securityadmin;
GO
See Also
DROP SERVER ROLE (Transact-SQL)
Principals (Database Engine)
EVENTDATA (Transact-SQL)
sp_addrolemember (Transact-SQL)
sys.database_role_members (Transact-SQL)
sys.database_principals (Transact-SQL)
Getting Started with Database Engine Permissions
CREATE SERVICE (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new service. A Service Broker service is a name for a specific task or set of tasks. Service Broker uses
the name of the service to route messages, deliver messages to the correct queue within a database, and enforce
the contract for a conversation.
Transact-SQL Syntax Conventions
Syntax
CREATE SERVICE service_name
[ AUTHORIZATION owner_name ]
ON QUEUE [ schema_name. ]queue_name
[ ( contract_name | [DEFAULT][ ,...n ] ) ]
[ ; ]
Arguments
service_name
Is the name of the service to create. A new service is created in the current database and owned by the principal
specified in the AUTHORIZATION clause. Server, database, and schema names cannot be specified. The
service_name must be a valid sysname.
NOTE
Do not create a service that uses the keyword ANY for the service_name. When you specify ANY for a service name in
CREATE BROKER PRIORITY, the priority is considered for all services. It is not limited to a service whose name is ANY.
AUTHORIZATION owner_name
Sets the owner of the service to the specified database user or role. When the current user is dbo or sa,
owner_name may be the name of any valid user or role. Otherwise, owner_name must be the name of the current
user, the name of a user that the current user has IMPERSONATE permission for, or the name of a role to which
the current user belongs.
ON QUEUE [ schema_name. ] queue_name
Specifies the queue that receives messages for the service. The queue must exist in the same database as the
service. If no schema_name is provided, the schema is the default schema for the user that executes the statement.
contract_name
Specifies a contract for which this service may be a target. Service programs initiate conversations to this service
using the contracts specified. If no contracts are specified, the service may only initiate conversations.
[DEFAULT]
Specifies that the service may be a target for conversations that follow the DEFAULT contract. In the context of
this clause, DEFAULT is not a keyword, and must be delimited as an identifier. The DEFAULT contract allows both
sides of the conversation to send messages of message type DEFAULT. Message type DEFAULT uses validation
NONE.
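For illustration, a service that is a target only for DEFAULT-contract conversations might be declared as follows; note that DEFAULT is delimited as an identifier, and the service and queue names here are hypothetical:

CREATE SERVICE [//Adventure-Works.com/Notifications]
    ON QUEUE dbo.NotificationQueue
    ( [DEFAULT] );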
Remarks
A service exposes the functionality provided by the contracts with which it is associated, so that they can be used
by other services. The CREATE SERVICE statement specifies the contracts that this service is the target for. A
service can only be a target for conversations that use the contracts specified by the service. A service that
specifies no contracts exposes no functionality to other services.
Conversations initiated from this service may use any contract. You create a service without specifying contracts
when the service only initiates conversations.
When Service Broker accepts a new conversation from a remote service, the name of the target service
determines the queue where the broker places messages in the conversation.
Permissions
Permission for creating a service defaults to members of the db_ddladmin or db_owner fixed database roles and
the sysadmin fixed server role. The user executing the CREATE SERVICE statement must have REFERENCES
permission on the queue and all contracts specified.
REFERENCES permission for a service defaults to the owner of the service, members of the db_ddladmin or
db_owner fixed database roles, and members of the sysadmin fixed server role. SEND permissions for a service
default to the owner of the service, members of the db_owner fixed database role, and members of the sysadmin
fixed server role.
A service may not be a temporary object. Service names beginning with # are allowed, but are permanent objects.
Examples
A. Creating a service with one contract
The following example creates the service //Adventure-Works.com/Expenses on the ExpenseQueue queue in the dbo
schema. Dialogs that target this service must follow the contract
//Adventure-Works.com/Expenses/ExpenseSubmission .
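The code listing was lost in conversion; based on that description, the statement would look like this sketch:

CREATE SERVICE [//Adventure-Works.com/Expenses]
    ON QUEUE dbo.ExpenseQueue
    ( [//Adventure-Works.com/Expenses/ExpenseSubmission] );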
See Also
ALTER SERVICE (Transact-SQL)
DROP SERVICE (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE SPATIAL INDEX (Transact-SQL)
5/3/2018 • 18 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a spatial index on a specified table and column in SQL Server. An index can be created before there is
data in the table. Indexes can be created on tables or views in another database by specifying a qualified database
name. Spatial indexes require the table to have a clustered primary key. For information about spatial indexes, see
Spatial Indexes Overview.
Transact-SQL Syntax Conventions
Syntax
-- SQL Server Syntax
<object> ::=
[ database_name. [ schema_name ] . | schema_name. ] table_name
<geometry_tessellation> ::=
{
<geometry_automatic_grid_tessellation>
| <geometry_manual_grid_tessellation>
}
<geometry_automatic_grid_tessellation> ::=
{
[ USING GEOMETRY_AUTO_GRID ]
WITH (
<bounding_box>
[ [,] <tessellation_cells_per_object> [ ,…n] ]
[ [,] <spatial_index_option> [ ,…n] ]
)
}
<geometry_manual_grid_tessellation> ::=
{
[ USING GEOMETRY_GRID ]
WITH (
<bounding_box>
[ [,]<tessellation_grid> [ ,…n] ]
[ [,]<tessellation_cells_per_object> [ ,…n] ]
[ [,]<spatial_index_option> [ ,…n] ]
)
}
<geography_tessellation> ::=
{
<geography_automatic_grid_tessellation> | <geography_manual_grid_tessellation>
}
<geography_automatic_grid_tessellation> ::=
{
[ USING GEOGRAPHY_AUTO_GRID ]
[ WITH (
[ [,] <tessellation_cells_per_object> [ ,…n] ]
[ [,] <spatial_index_option> ]
) ]
}
<geography_manual_grid_tessellation> ::=
{
[ USING GEOGRAPHY_GRID ]
[ WITH (
[ <tessellation_grid> [ ,…n] ]
[ [,] <tessellation_cells_per_object> [ ,…n] ]
[ [,] <spatial_index_option> [ ,…n] ]
) ]
}
<bounding_box> ::=
{
BOUNDING_BOX = ( {
xmin, ymin, xmax, ymax
| <named_bb_coordinate>, <named_bb_coordinate>, <named_bb_coordinate>, <named_bb_coordinate>
} )
}
<tessellation_grid> ::=
{
GRIDS = ( { <grid_level> [ ,...n ] | <grid_size>, <grid_size>, <grid_size>, <grid_size> }
)
}
<tessellation_cells_per_object> ::=
{
CELLS_PER_OBJECT = n
}
<grid_level> ::=
{
LEVEL_1 = <grid_size>
| LEVEL_2 = <grid_size>
| LEVEL_3 = <grid_size>
| LEVEL_4 = <grid_size>
}
<spatial_index_option> ::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| STATISTICS_NORECOMPUTE = { ON | OFF }
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| DATA_COMPRESSION = { NONE | ROW | PAGE }
}
-- Windows Azure SQL Database Syntax
[ ; ]
<object> ::=
{
[database_name. [schema_name ] . | schema_name. ]
table_name
}
<geometry_grid_tessellation> ::=
{ GEOMETRY_GRID }
<bounding_box> ::=
BOUNDING_BOX = ( {
xmin, ymin, xmax, ymax
| <named_bb_coordinate>, <named_bb_coordinate>, <named_bb_coordinate>, <named_bb_coordinate>
} )
<tesselation_parameters> ::=
{
GRIDS = ( { <grid_density> [ ,... n ] | <density>, <density>, <density>, <density> } )
| CELLS_PER_OBJECT = n
}
<grid_density> ::=
{
LEVEL_1 = <density>
| LEVEL_2 = <density>
| LEVEL_3 = <density>
| LEVEL_4 = <density>
}
<geography_grid_tessellation> ::=
{ GEOGRAPHY_GRID }
<spatial_index_option> ::=
{
IGNORE_DUP_KEY = OFF
| STATISTICS_NORECOMPUTE = { ON | OFF }
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
}
Arguments
index_name
Is the name of the index. Index names must be unique within a table but do not have to be unique within a
database. Index names must follow the rules of identifiers.
ON <object> ( spatial_column_name )
Specifies the object (database, schema, or table) on which the index is to be created and the name of spatial
column.
spatial_column_name specifies the spatial column on which the index is based. Only one spatial column can be
specified in a single spatial index definition; however, multiple spatial indexes can be created on a geometry or
geography column.
USING
Indicates the tessellation scheme for the spatial index. This parameter uses the type-specific value, shown in the
following table:
geometry GEOMETRY_GRID
geometry GEOMETRY_AUTO_GRID
geography GEOGRAPHY_GRID
geography GEOGRAPHY_AUTO_GRID
A spatial index can be created only on a column of type geometry or geography. Otherwise, an error is raised.
Also, if an invalid parameter for a given type is passed, an error is raised.
For information about how SQL Server implements tessellation, see Spatial Indexes Overview.
ON filegroup_name
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Creates the specified index on the specified filegroup. If no location is specified and the table is not partitioned,
the index uses the same filegroup as the underlying table. The filegroup must already exist.
ON "default"
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Creates the specified index on the default filegroup.
The term default, in this context, is not a keyword. It is an identifier for the default filegroup and must be
delimited, as in ON "default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be
ON for the current session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER
(Transact-SQL).
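For example, placing a spatial index on the default filegroup might look like this sketch (the table, column, and index names are hypothetical):

SET QUOTED_IDENTIFIER ON;
CREATE SPATIAL INDEX SIndx_SpatialTable_geometry_col2
    ON SpatialTable ( geometry_col )
    WITH ( BOUNDING_BOX = ( 0, 0, 500, 200 ) )
    ON "default";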
<object>::=
Is the fully qualified or non-fully qualified object to be indexed.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to be indexed.
Windows Azure SQL Database supports the three-part name format database_name.[schema_name].object_name
when the database_name is the current database, or when the database_name is tempdb and the object_name
starts with #.
USING Options
GEOMETRY_GRID
Specifies the geometry grid tessellation scheme that you are using. GEOMETRY_GRID can be specified only on
a column of the geometry data type. GEOMETRY_GRID allows for manual adjusting of the tessellation scheme.
GEOMETRY_AUTO_GRID
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Can be specified only on a column of the geometry data type. This is the default for this data type and does not
need to be specified.
GEOGRAPHY_GRID
Specifies the geography grid tessellation scheme. GEOGRAPHY_GRID can be specified only on a column of the
geography data type.
GEOGRAPHY_AUTO_GRID
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Can be specified only on a column of the geography data type. This is the default for this data type and does not
need to be specified.
WITH Options
BOUNDING_BOX
Specifies a numeric four-tuple that defines the four coordinates of the bounding box: the x-min and y-min
coordinates of the lower-left corner, and the x-max and y-max coordinates of the upper-right corner.
xmin
Specifies the x-coordinate of the lower-left corner of the bounding box.
ymin
Specifies the y-coordinate of the lower-left corner of the bounding box.
xmax
Specifies the x-coordinate of the upper-right corner of the bounding box.
ymax
Specifies the y-coordinate of the upper-right corner of the bounding box.
XMIN = xmin
Specifies the property name and value for the x-coordinate of the lower-left corner of the bounding box.
YMIN =ymin
Specifies the property name and value for the y-coordinate of the lower-left corner of the bounding box.
XMAX =xmax
Specifies the property name and value for the x-coordinate of the upper-right corner of the bounding box.
YMAX =ymax
Specifies the property name and value for the y-coordinate of the upper-right corner of the bounding box.
NOTE
Bounding-box coordinates apply only within a USING GEOMETRY_GRID clause.
xmax must be greater than xmin and ymax must be greater than ymin. You can specify any valid float value representation,
assuming that: xmax > xmin and ymax > ymin. Otherwise the appropriate errors are raised.
There are no default values.
The bounding-box property names are case-insensitive regardless of the database collation.
To specify property names, you must specify each of them once and only once. You can specify them in any order.
For example, the following clauses are equivalent:
BOUNDING_BOX =( XMIN =xmin, YMIN =ymin, XMAX =xmax, YMAX =ymax )
BOUNDING_BOX =( XMIN =xmin, XMAX =xmax, YMIN =ymin, YMAX =ymax)
GRIDS
Defines the density of the grid at each level of a tessellation scheme. When GEOMETRY_AUTO_GRID and
GEOGRAPHY_AUTO_GRID are selected, this option is disabled.
For information about tessellation, see Spatial Indexes Overview.
The GRIDS parameters are as follows:
LEVEL_1
Specifies the first-level (top) grid.
LEVEL_2
Specifies the second-level grid.
LEVEL_3
Specifies the third-level grid.
LEVEL_4
Specifies the fourth-level grid.
LOW
Specifies the lowest possible density for the grid at a given level. LOW equates to 16 cells (a 4x4 grid).
MEDIUM
Specifies the medium density for the grid at a given level. MEDIUM equates to 64 cells (an 8x8 grid).
HIGH
Specifies the highest possible density for the grid at a given level. HIGH equates to 256 cells (a 16x16 grid).
NOTE
Using level names allows you to specify the levels in any order and to omit levels. If you use the name for any level, you
must use the name of any other level that you specify. If you omit a level, its density defaults to MEDIUM.
WARNING
If an invalid density is specified, an error is raised.
CELLS_PER_OBJECT =n
Specifies the number of tessellation cells per object that can be used for a single spatial object in the index by the
tessellation process. n can be any integer between 1 and 8192, inclusive. If an invalid number is passed or the
number is larger than the maximum number of cells for the specified tessellation, an error is raised.
CELLS_PER_OBJECT has the following default values:
GEOMETRY_GRID 16
GEOMETRY_AUTO_GRID 8
GEOGRAPHY_GRID 16
GEOGRAPHY_AUTO_GRID 12
At the top level, if an object covers more cells than specified by n, the indexing uses as many cells as necessary to
provide a complete top-level tessellation. In such cases, an object might receive more than the specified number
of cells. In this case, the maximum number is the number of cells generated by the top-level grid, which depends
on the density.
The CELLS_PER_OBJECT value is used by the cells-per-object tessellation rule. For information about the
tessellation rules, see Spatial Indexes Overview.
PAD_INDEX = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies index padding. The default is OFF.
ON
Indicates that the percentage of free space that is specified by fillfactor is applied to the intermediate-level pages
of the index.
OFF or fillfactor is not specified
Indicates that the intermediate-level pages are filled to near capacity, leaving sufficient space for at least one row
of the maximum size the index can have, considering the set of keys on the intermediate pages.
The PAD_INDEX option is useful only when FILLFACTOR is specified, because PAD_INDEX uses the percentage
specified by FILLFACTOR. If the percentage specified for FILLFACTOR is not large enough to allow for one row,
the Database Engine internally overrides the percentage to allow for the minimum. The number of rows on an
intermediate index page is never less than two, regardless of how low the value of fillfactor is.
FILLFACTOR =fillfactor
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or rebuild. fillfactor must be an integer value from 1 to 100. The default is 0. If fillfactor is
100 or 0, the Database Engine creates indexes with leaf pages filled to capacity.
NOTE
Fill factor values 0 and 100 are the same in all respects.
The FILLFACTOR setting applies only when the index is created or rebuilt. The Database Engine does not
dynamically keep the specified percentage of empty space in the pages. To view the fill factor setting, use the
sys.indexes catalog view.
IMPORTANT
Creating a clustered index with a FILLFACTOR less than 100 affects the amount of storage space the data occupies because
the Database Engine redistributes the data when it creates the clustered index.
IMPORTANT
Disabling automatic recomputation of distribution statistics may prevent the query optimizer from picking optimal
execution plans for queries involving the table.
DROP_EXISTING = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies that the named, preexisting spatial index is dropped and rebuilt. The default is OFF.
ON
The existing index is dropped and rebuilt. The index name specified must be the same as a currently existing
index; however, the index definition can be modified. For example, you can specify different columns, sort order,
partition scheme, or index options.
OFF
An error is displayed if the specified index name already exists.
The index type cannot be changed by using DROP_EXISTING.
ONLINE =OFF
Specifies that underlying tables and associated indexes are not available for queries and data modification during
the index operation. In this version of SQL Server, online index builds are not supported for spatial indexes. If this
option is set to ON for a spatial index, an error is raised. Either omit the ONLINE option or set ONLINE to OFF.
An offline index operation that creates, rebuilds, or drops a spatial index acquires a schema modification (Sch-M)
lock on the table. This prevents all user access to the underlying table for the duration of the operation.
NOTE
Online index operations are not available in every edition of SQL Server. For a list of features that are supported by the
editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.
ALLOW_ROW_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when accessing the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
MAXDOP =max_degree_of_parallelism
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Overrides the max degree of parallelism configuration option for the duration of the index operation. Use
MAXDOP to limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
IMPORTANT
Although the MAXDOP option is syntactically supported, CREATE SPATIAL INDEX currently always uses only a single
processor.
NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.
Remarks
Every option can be specified only once per CREATE SPATIAL INDEX statement. Specifying a duplicate of any
option raises an error.
You can create up to 249 spatial indexes on each spatial column in a table. Creating more than one spatial index
on a specific spatial column can be useful, for example, to index different tessellation parameters in a single column.
IMPORTANT
There are a number of other restrictions on creating a spatial index. For more information, see Spatial Indexes Overview.
Permissions
The user must have ALTER permission on the table or view, or be a member of the sysadmin fixed server role or
the db_ddladmin and db_owner fixed database roles.
Examples
A. Creating a spatial index on a geometry column
The following example creates a table named SpatialTable that contains a geometry type column,
geometry_col . The example then creates a spatial index, SIndx_SpatialTable_geometry_col1 , on the geometry_col .
The example uses the default tessellation scheme and specifies the bounding box.
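The code listing was lost in conversion; a sketch matching that description (spatial indexes require a clustered primary key, hence the IDENTITY key column):

CREATE TABLE SpatialTable
    ( id int IDENTITY (1,1) PRIMARY KEY,
      geometry_col geometry );
GO

CREATE SPATIAL INDEX SIndx_SpatialTable_geometry_col1
    ON SpatialTable ( geometry_col )
    WITH ( BOUNDING_BOX = ( 0, 0, 500, 200 ) );
GO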
NOTE
For geography grid indexes, a bounding box cannot be specified.
See Also
ALTER INDEX (Transact-SQL)
CREATE INDEX (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
CREATE STATISTICS (Transact-SQL)
CREATE TABLE (Transact-SQL)
Data Types (Transact-SQL)
DBCC SHOW_STATISTICS (Transact-SQL)
DROP INDEX (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.index_columns (Transact-SQL)
sys.indexes (Transact-SQL)
sys.spatial_index_tessellations (Transact-SQL)
sys.spatial_indexes (Transact-SQL)
Spatial Indexes Overview
CREATE STATISTICS (Transact-SQL)
5/3/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates query optimization statistics on one or more columns of a table, an indexed view, or an external table. For
most queries, the query optimizer already generates the necessary statistics for a high-quality query plan; in a few
cases, you need to create additional statistics with CREATE STATISTICS or modify the query design to improve
query performance.
To learn more, see Statistics.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
<filter_predicate> ::=
<conjunct> [AND <conjunct>]
<conjunct> ::=
<disjunct> | <comparison>
<disjunct> ::=
column_name IN (constant ,…)
<comparison> ::=
column_name <comparison_op> constant
<comparison_op> ::=
IS | IS NOT | = | <> | != | > | >= | !> | < | <= | !<
<update_stats_stream_option> ::=
[ STATS_STREAM = stats_stream ]
[ ROWCOUNT = numeric_constant ]
[ PAGECOUNT = numeric_constant ]
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
<filter_predicate> ::=
<conjunct> [AND <conjunct>]
<conjunct> ::=
<disjunct> | <comparison>
<disjunct> ::=
column_name IN (constant ,…)
<comparison> ::=
column_name <comparison_op> constant
<comparison_op> ::=
IS | IS NOT | = | <> | != | > | >= | !> | < | <= | !<
Arguments
statistics_name
Is the name of the statistics to create.
table_or_indexed_view_name
Is the name of the table, indexed view, or external table on which to create the statistics. To create statistics on
another database, specify a qualified table name.
column [ ,…n]
One or more columns to be included in the statistics. The columns should be in priority order from left to right.
Only the first column is used for creating the histogram. All columns are used for cross-column correlation
statistics called densities.
You can specify any column that can be specified as an index key column with the following exceptions:
Xml, full-text, and FILESTREAM columns cannot be specified.
Computed columns can be specified only if the ARITHABORT and QUOTED_IDENTIFIER database
settings are ON.
CLR user-defined type columns can be specified if the type supports binary ordering. Computed columns
defined as method invocations of a user-defined type column can be specified if the methods are marked
deterministic.
WHERE <filter_predicate> Specifies an expression for selecting a subset of rows to include when creating
the statistics object. Statistics that are created with a filter predicate are called filtered statistics. The filter
predicate uses simple comparison logic and cannot reference a computed column, a UDT column, a spatial
data type column, or a hierarchyID data type column. Comparisons using NULL literals are not allowed
with the comparison operators. Use the IS NULL and IS NOT NULL operators instead.
Here are some examples of filter predicates for the Production.BillOfMaterials table:
WHERE StartDate > '20000101' AND EndDate <= '20000630'
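An IN-list disjunct of the allowed form is also valid; for example, on a column of the same table:

```sql
WHERE ComponentID IN (533, 324, 753)
```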
For more information about filter predicates, see Create Filtered Indexes.
FULLSCAN
Compute statistics by scanning all rows. FULLSCAN and SAMPLE 100 PERCENT have the same results.
FULLSCAN cannot be used with the SAMPLE option.
When omitted, SQL Server uses sampling to create the statistics, and determines the sample size that is
required to create a high-quality query plan.
SAMPLE number { PERCENT | ROWS }
Specifies the approximate percentage or number of rows in the table or indexed view for the query
optimizer to use when it creates statistics. For PERCENT, number can be from 0 through 100 and for
ROWS, number can be from 0 to the total number of rows. The actual percentage or number of rows the
query optimizer samples might not match the percentage or number specified. For example, the query
optimizer scans all rows on a data page.
SAMPLE is useful for special cases in which the query plan, based on default sampling, is not optimal. In
most situations, it is not necessary to specify SAMPLE because the query optimizer already uses sampling
and determines the statistically significant sample size by default, as required to create high-quality query
plans.
SAMPLE cannot be used with the FULLSCAN option. When neither SAMPLE nor FULLSCAN is specified,
the query optimizer uses sampled data and computes the sample size by default.
We recommend against specifying 0 PERCENT or 0 ROWS. When 0 PERCENT or 0 ROWS is specified, the
statistics object is created but does not contain statistics data.
PERSIST_SAMPLE_PERCENT = { ON | OFF }
When ON, the statistics will retain the creation sampling percentage for subsequent updates that do not
explicitly specify a sampling percentage. When OFF, statistics sampling percentage will get reset to default
sampling in subsequent updates that do not explicitly specify a sampling percentage. The default is OFF.
Applies to: SQL Server 2016 (13.x) (starting with SQL Server 2016 (13.x) SP1 CU4) through SQL Server
2017 (starting with SQL Server 2017 (14.x) CU1).
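For example, to keep a 10 percent sample rate across later automatic updates (statistics, table, and column names are assumed for illustration):

```sql
CREATE STATISTICS StatQty
    ON dbo.OrderLines (Quantity)
    WITH SAMPLE 10 PERCENT, PERSIST_SAMPLE_PERCENT = ON;
```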
STATS_STREAM =stats_stream
Identified for informational purposes only. Not supported. Future compatibility is not guaranteed.
NORECOMPUTE
Disable the automatic statistics update option, AUTO_STATISTICS_UPDATE, for statistics_name. If this
option is specified, the query optimizer will complete any in-progress statistics updates for statistics_name
and disable future updates.
To re-enable statistics updates, remove the statistics with DROP STATISTICS and then run CREATE
STATISTICS without the NORECOMPUTE option.
WARNING
Using this option can produce suboptimal query plans. We recommend using this option sparingly, and then only by a
qualified system administrator.
For more information about the AUTO_STATISTICS_UPDATE option, see ALTER DATABASE SET Options
(Transact-SQL ). For more information about disabling and re-enabling statistics updates, see Statistics.
INCREMENTAL = { ON | OFF }
When ON, the statistics created are per partition statistics. When OFF, stats are combined for all partitions. The
default is OFF.
If per partition statistics are not supported, an error is generated. Incremental stats are not supported for the
following statistics types:
Statistics created with indexes that are not partition-aligned with the base table.
Statistics created on Always On readable secondary databases.
Statistics created on read-only databases.
Statistics created on filtered indexes.
Statistics created on views.
Statistics created on internal tables.
Statistics created with spatial indexes or XML indexes.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server (Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x) CU3).
Overrides the max degree of parallelism configuration option for the duration of the statistic operation. For
more information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to
limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism can be:
1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel statistic operation to the specified number or
fewer based on the current system workload.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
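For instance, building statistics with a full scan while capping the operation at two processors might look like this (object names are assumed):

```sql
CREATE STATISTICS StatAmount
    ON dbo.SalesOrders (Amount)
    WITH FULLSCAN, MAXDOP = 2;
```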
<update_stats_stream_option> Identified for informational purposes only. Not supported. Future compatibility is
not guaranteed.
Permissions
Requires one of these permissions:
ALTER TABLE
User is the table owner
Membership in the db_ddladmin fixed database role
General Remarks
SQL Server can use tempdb to sort the sampled rows before building statistics.
Statistics for external tables
When creating external table statistics, SQL Server imports the external table into a temporary SQL Server table,
and then creates the statistics. For sampled statistics, only the sampled rows are imported. If you have a large
external table, it will be much faster to use the default sampling instead of the full scan option.
Statistics with a filtered condition
Filtered statistics can improve query performance for queries that select from well-defined subsets of data.
Filtered statistics use a filter predicate in the WHERE clause to select the subset of data that is included in the
statistics.
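A sketch in the spirit of this section, using AdventureWorks-style names (the exact listing is not part of this extract): filtered statistics limited to rows where EmailPromotion equals 2.

```sql
CREATE STATISTICS ContactPromotion1
    ON Person.Person (BusinessEntityID, LastName, EmailPromotion)
    WHERE EmailPromotion = 2
    WITH FULLSCAN;
```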
When to Use CREATE STATISTICS
For more information about when to use CREATE STATISTICS, see Statistics.
Referencing Dependencies for Filtered Statistics
The sys.sql_expression_dependencies catalog view tracks each column in the filtered statistics predicate as a
referencing dependency. Consider the operations that you perform on table columns before creating filtered
statistics because you cannot drop, rename, or alter the definition of a table column that is defined in a filtered
statistics predicate.
Examples
Examples use the AdventureWorks database.
A. Using CREATE STATISTICS with SAMPLE number PERCENT
The following example creates the ContactMail1 statistics, using a random sample of 5 percent of the
BusinessEntityID and EmailPromotion columns of the Contact table in the AdventureWorks2012 database.
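The statement described above likely resembled the following (the code listing was not preserved in this extract; the table name follows the text, though newer AdventureWorks samples use Person.Person instead):

```sql
CREATE STATISTICS ContactMail1
    ON Person.Contact (BusinessEntityID, EmailPromotion)
    WITH SAMPLE 5 PERCENT;
```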
See Also
Statistics
UPDATE STATISTICS (Transact-SQL )
sp_updatestats (Transact-SQL )
DBCC SHOW_STATISTICS (Transact-SQL )
DROP STATISTICS (Transact-SQL )
sys.stats (Transact-SQL )
sys.stats_columns (Transact-SQL )
CREATE SYMMETRIC KEY (Transact-SQL)
5/3/2018 • 6 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Generates a symmetric key and specifies its properties in SQL Server.
This feature is incompatible with database export using Data Tier Application Framework (DACFx). You must
drop all symmetric keys before exporting.
Transact-SQL Syntax Conventions
Syntax
CREATE SYMMETRIC KEY key_name
[ AUTHORIZATION owner_name ]
[ FROM PROVIDER provider_name ]
WITH
[
<key_options> [ , ... n ]
| ENCRYPTION BY <encrypting_mechanism> [ , ... n ]
]
<key_options> ::=
KEY_SOURCE = 'pass_phrase'
| ALGORITHM = <algorithm>
| IDENTITY_VALUE = 'identity_phrase'
| PROVIDER_KEY_NAME = 'key_name_in_provider'
| CREATION_DISPOSITION = {CREATE_NEW | OPEN_EXISTING }
<algorithm> ::=
DES | TRIPLE_DES | TRIPLE_DES_3KEY | RC2 | RC4 | RC4_128
| DESX | AES_128 | AES_192 | AES_256
<encrypting_mechanism> ::=
CERTIFICATE certificate_name
| PASSWORD = 'password'
| SYMMETRIC KEY symmetric_key_name
| ASYMMETRIC KEY asym_key_name
Arguments
Key_name
Specifies the unique name by which the symmetric key is known in the database. The names of temporary keys
must begin with one number sign (#). For example, #temporaryKey900007. You cannot create a symmetric
key that has a name that starts with more than one #. You cannot create a temporary symmetric key using an
EKM provider.
AUTHORIZATION owner_name
Specifies the name of the database user or application role that will own this key.
FROM PROVIDER provider_name
Specifies an Extensible Key Management (EKM ) provider and name. The key is not exported from the EKM
device. The provider must be defined first using the CREATE PROVIDER statement. For more information about
creating external key providers, see Extensible Key Management (EKM ).
NOTE
This option is not available in a contained database.
KEY_SOURCE ='pass_phrase'
Specifies a pass phrase from which to derive the key.
IDENTITY_VALUE ='identity_phrase'
Specifies an identity phrase from which to generate a GUID for tagging data that is encrypted with a temporary
key.
PROVIDER_KEY_NAME='key_name_in_provider'
Specifies the name referenced in the Extensible Key Management provider.
NOTE
This option is not available in a contained database.
CREATION_DISPOSITION = CREATE_NEW
Creates a new key on the Extensible Key Management device. If a key already exists on the device, the statement
fails with an error.
CREATION_DISPOSITION = OPEN_EXISTING
Maps a SQL Server symmetric key to an existing Extensible Key Management key. If CREATION_DISPOSITION
= OPEN_EXISTING is not provided, this defaults to CREATE_NEW.
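Mapping a SQL Server symmetric key to an existing EKM key might look like this sketch (the provider and key names are hypothetical):

```sql
CREATE SYMMETRIC KEY MyEkmBackedKey
    FROM PROVIDER MyEkmProvider           -- hypothetical EKM provider, registered beforehand
    WITH
        PROVIDER_KEY_NAME = 'ExistingHsmKey',
        CREATION_DISPOSITION = OPEN_EXISTING;
```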
certificate_name
Specifies the name of the certificate that will be used to encrypt the symmetric key. The certificate must already
exist in the database.
' password '
Specifies a password from which to derive a TRIPLE_DES key with which to secure the symmetric key. password
must meet the Windows password policy requirements of the computer that is running the instance of SQL
Server. Always use strong passwords.
symmetric_key_name
Specifies a symmetric key, used to encrypt the key that is being created. The specified key must already exist in
the database, and the key must be open.
asym_key_name
Specifies an asymmetric key, used to encrypt the key that is being created. This asymmetric key must already
exist in the database.
<algorithm>
Specifies the encryption algorithm.
WARNING
Beginning with SQL Server 2016 (13.x), all algorithms other than AES_128, AES_192, and AES_256 are deprecated. To use
older algorithms (not recommended), you must set the database to database compatibility level 120 or lower.
Remarks
When a symmetric key is created, the symmetric key must be encrypted by using at least one of the following:
certificate, password, symmetric key, asymmetric key, or PROVIDER. The key can have more than one encryption
of each type. In other words, a single symmetric key can be encrypted by using multiple certificates, passwords,
symmetric keys, and asymmetric keys at the same time.
CAUTION
When a symmetric key is encrypted with a password instead of a certificate (or another key), the TRIPLE DES
encryption algorithm is used to encrypt the password. Because of this, keys that are created with a strong
encryption algorithm, such as AES, are themselves secured by a weaker algorithm.
The optional password can be used to encrypt the symmetric key before distributing the key to multiple users.
Temporary keys are owned by the user that creates them. Temporary keys are only valid for the current session.
IDENTITY_VALUE generates a GUID with which to tag data that is encrypted with the new symmetric key. This
tagging can be used to match keys to encrypted data. The GUID generated by a specific phrase is always the
same. After a phrase has been used to generate a GUID, the phrase cannot be reused as long as there is at least
one session that is actively using the phrase. IDENTITY_VALUE is an optional clause; however, we recommend
using it when you are storing data encrypted with a temporary key.
There is no default encryption algorithm.
IMPORTANT
We do not recommend using the RC4 and RC4_128 stream ciphers to protect sensitive data. SQL Server does not further
encode the encryption performed with such keys.
WARNING
The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or
RC4_128 when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of
the AES algorithms instead. In SQL Server 2017 material encrypted using RC4 or RC4_128 can be decrypted in any
compatibility level.
Permissions
Requires ALTER ANY SYMMETRIC KEY permission on the database. If AUTHORIZATION is specified, requires
IMPERSONATE permission on the database user or ALTER permission on the application role. If encryption is
by certificate or asymmetric key, requires VIEW DEFINITION permission on the certificate or asymmetric key.
Only Windows logins, SQL Server logins, and application roles can own symmetric keys. Groups and roles
cannot own symmetric keys.
Examples
A. Creating a symmetric key
The following example creates a symmetric key called JanainaKey09 by using the AES 256 algorithm, and then
encrypts the new key with certificate Shipping04 .
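The statement described above presumably resembled the following (the code listing was not preserved in this extract):

```sql
CREATE SYMMETRIC KEY JanainaKey09
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE Shipping04;
GO
```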
See Also
Choose an Encryption Algorithm
ALTER SYMMETRIC KEY (Transact-SQL )
DROP SYMMETRIC KEY (Transact-SQL )
Encryption Hierarchy
sys.symmetric_keys (Transact-SQL )
Extensible Key Management (EKM )
Extensible Key Management Using Azure Key Vault (SQL Server)
CREATE SYNONYM (Transact-SQL)
5/3/2018 • 3 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new synonym.
Transact-SQL Syntax Conventions
Syntax
-- SQL Server Syntax
CREATE SYNONYM [ schema_name_1. ] synonym_name FOR <object>
<object> :: =
{
[ server_name.[ database_name ] . [ schema_name_2 ]. object_name
| database_name . [ schema_name_2 ].| schema_name_2. ] object_name
}
Arguments
schema_name_1
Specifies the schema in which the synonym is created. If schema is not specified, SQL Server uses the default
schema of the current user.
synonym_name
Is the name of the new synonym.
server_name
Applies to: SQL Server 2008 through SQL Server 2017.
Is the name of the server on which base object is located.
database_name
Is the name of the database in which the base object is located. If database_name is not specified, the name of the
current database is used.
schema_name_2
Is the name of the schema of the base object. If schema_name is not specified the default schema of the current
user is used.
object_name
Is the name of the base object that the synonym references.
Windows Azure SQL Database supports the three-part name format database_name.[schema_name].object_name
when the database_name is the current database or the database_name is tempdb and the object_name starts with
#.
Remarks
The base object need not exist at synonym create time. SQL Server checks for the existence of the base object at
run time.
Synonyms can be created for the following types of objects:
Permissions
To create a synonym in a given schema, a user must have CREATE SYNONYM permission and either own the
schema or have ALTER SCHEMA permission.
The CREATE SYNONYM permission is a grantable permission.
NOTE
You do not need permission on the base object to successfully compile the CREATE SYNONYM statement, because all
permission checking on the base object is deferred until run time.
Examples
A. Creating a synonym for a local object
The following example first creates a synonym for the base object, Product in the AdventureWorks2012 database,
and then queries the synonym.
-- Create a synonym for the Product table in AdventureWorks2012.
CREATE SYNONYM MyProduct
FOR AdventureWorks2012.Production.Product;
GO
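The query against the synonym, omitted from this extract, was presumably of this form (the filter is assumed to match the four rows of output that follow):

```sql
-- Query the synonym as if it were the base table.
SELECT ProductID, Name
FROM MyProduct
WHERE ProductID < 5;
GO
```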
-----------------------
ProductID Name
----------- --------------------------
1 Adjustable Race
2 Bearing Ball
3 BB Ball Bearing
4 Headset Ball Bearings
(4 row(s) affected)
See Also
DROP SYNONYM (Transact-SQL )
GRANT (Transact-SQL )
EVENTDATA (Transact-SQL )
CREATE TABLE (Transact-SQL)
5/3/2018 • 67 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new table in SQL Server and Azure SQL Database.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL
Database Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
NOTE
For SQL Data Warehouse syntax, see CREATE TABLE (Azure SQL Data Warehouse).
Simple Syntax
--Simple CREATE TABLE Syntax (common if not using options)
CREATE TABLE
[ database_name . [ schema_name ] . | schema_name . ] table_name
( { <column_definition> } [ ,...n ] )
[ ; ]
Full Syntax
--Disk-Based CREATE TABLE Syntax
CREATE TABLE
[ database_name . [ schema_name ] . | schema_name . ] table_name
[ AS FileTable ]
( { <column_definition>
| <computed_column_definition>
| <column_set_definition>
| [ <table_constraint> ]
| [ <table_index> ] }
[ ,...n ]
[ PERIOD FOR SYSTEM_TIME ( system_start_time_column_name
, system_end_time_column_name ) ]
)
[ ON { partition_scheme_name ( partition_column_name )
| filegroup
| "default" } ]
[ TEXTIMAGE_ON { filegroup | "default" } ]
[ FILESTREAM_ON { partition_scheme_name
| filegroup
| "default" } ]
[ WITH ( <table_option> [ ,...n ] ) ]
[ ; ]
<column_definition> ::=
column_name <data_type>
[ FILESTREAM ]
[ COLLATE collation_name ]
[ SPARSE ]
[ MASKED WITH ( FUNCTION = ' mask_function ') ]
[ CONSTRAINT constraint_name [ DEFAULT constant_expression ] ]
[ IDENTITY [ ( seed,increment ) ]
[ NOT FOR REPLICATION ]
[ GENERATED ALWAYS AS ROW { START | END } [ HIDDEN ] ]
[ NULL | NOT NULL ]
[ ROWGUIDCOL ]
[ ENCRYPTED WITH
( COLUMN_ENCRYPTION_KEY = key_name ,
ENCRYPTION_TYPE = { DETERMINISTIC | RANDOMIZED } ,
ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
) ]
[ <column_constraint> [ ...n ] ]
[ <column_index> ]
<column_constraint> ::=
[ CONSTRAINT constraint_name ]
{ { PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH FILLFACTOR = fillfactor
| WITH ( < index_option > [ , ...n ] )
]
[ ON { partition_scheme_name ( partition_column_name )
| filegroup | "default" } ]
| [ FOREIGN KEY ]
REFERENCES [ schema_name . ] referenced_table_name [ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ NOT FOR REPLICATION ]
<column_index> ::=
INDEX index_name [ CLUSTERED | NONCLUSTERED ]
[ WITH ( <index_option> [ ,... n ] ) ]
[ ON { partition_scheme_name (column_name )
| filegroup_name
| default
}
]
[ FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | "NULL" } ]
<computed_column_definition> ::=
column_name AS computed_column_expression
[ PERSISTED [ NOT NULL ] ]
[
[ CONSTRAINT constraint_name ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH FILLFACTOR = fillfactor
| WITH ( <index_option> [ , ...n ] )
]
[ ON { partition_scheme_name ( partition_column_name )
| filegroup | "default" } ]
| [ FOREIGN KEY ]
REFERENCES referenced_table_name [ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE } ]
[ ON UPDATE { NO ACTION } ]
[ NOT FOR REPLICATION ]
<column_set_definition> ::=
column_set_name XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
<table_option> ::=
{
[DATA_COMPRESSION = { NONE | ROW | PAGE }
[ ON PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ]]
[ FILETABLE_DIRECTORY = <directory_name> ]
[ FILETABLE_COLLATE_FILENAME = { <collation_name> | database_default } ]
[ FILETABLE_PRIMARY_KEY_CONSTRAINT_NAME = <constraint_name> ]
[ FILETABLE_STREAMID_UNIQUE_CONSTRAINT_NAME = <constraint_name> ]
[ FILETABLE_FULLPATH_UNIQUE_CONSTRAINT_NAME = <constraint_name> ]
[ SYSTEM_VERSIONING = ON [ ( HISTORY_TABLE = schema_name . history_table_name
[, DATA_CONSISTENCY_CHECK = { ON | OFF } ] ) ] ]
[ REMOTE_DATA_ARCHIVE =
{
ON [ ( <table_stretch_options> [,...n] ) ]
| OFF ( MIGRATION_STATE = PAUSED )
}
]
}
<table_stretch_options> ::=
{
[ FILTER_PREDICATE = { null | table_predicate_function } , ]
MIGRATION_STATE = { OUTBOUND | INBOUND | PAUSED }
}
<index_option> ::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF}
| ALLOW_PAGE_LOCKS ={ ON | OFF}
| COMPRESSION_DELAY= {0 | delay [Minutes]}
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ]
}
<range> ::=
<partition_number_expression> TO <partition_number_expression>
<column_definition> ::=
column_name <data_type>
[ COLLATE collation_name ]
[ GENERATED ALWAYS AS ROW { START | END } [ HIDDEN ] ]
[ NULL | NOT NULL ]
[
[ CONSTRAINT constraint_name ] DEFAULT memory_optimized_constant_expression ]
| [ IDENTITY [ ( 1, 1 ) ]
]
[ <column_constraint> ]
[ <column_index> ]
<column_constraint> ::=
[ CONSTRAINT constraint_name ]
{
{ PRIMARY KEY | UNIQUE }
{ NONCLUSTERED
| NONCLUSTERED HASH WITH (BUCKET_COUNT = bucket_count)
}
| [ FOREIGN KEY ]
REFERENCES [ schema_name . ] referenced_table_name [ ( ref_column ) ]
| CHECK ( logical_expression )
}
<column_index> ::=
INDEX index_name
{ [ NONCLUSTERED ] | [ NONCLUSTERED ] HASH WITH (BUCKET_COUNT = bucket_count) }
<table_index> ::=
INDEX index_name
{ [ NONCLUSTERED ] HASH (column [ ,... n ] ) WITH (BUCKET_COUNT = bucket_count)
| [ NONCLUSTERED ] (column [ ASC | DESC ] [ ,... n ] )
[ ON filegroup_name | default ]
| CLUSTERED COLUMNSTORE [WITH ( COMPRESSION_DELAY = {0 | delay [Minutes]})]
[ ON filegroup_name | default ]
<table_option> ::=
{
MEMORY_OPTIMIZED = ON
| DURABILITY = {SCHEMA_ONLY | SCHEMA_AND_DATA}
| SYSTEM_VERSIONING = ON [ ( HISTORY_TABLE = schema_name . history_table_name
[, DATA_CONSISTENCY_CHECK = { ON | OFF } ] ) ]
Arguments
database_name
Is the name of the database in which the table is created. database_name must specify the name of an
existing database. If not specified, database_name defaults to the current database. The login for the current
connection must be associated with an existing user ID in the database specified by database_name, and
that user ID must have CREATE TABLE permissions.
schema_name
Is the name of the schema to which the new table belongs.
table_name
Is the name of the new table. Table names must follow the rules for identifiers. table_name can be a
maximum of 128 characters, except for local temporary table names (names prefixed with a single number
sign (#)) that cannot exceed 116 characters.
AS FileTable
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Creates the new table as a FileTable. You do not specify columns because a FileTable has a fixed schema. For
more information about FileTables, see FileTables (SQL Server).
column_name
Is the name of a column in the table. Column names must follow the rules for identifiers and must be unique
in the table.
computed_column_expression
Is an expression that defines the value of a computed column. A computed column is a virtual column that is
not physically stored in the table, unless the column is marked PERSISTED. The column is computed from
an expression that uses other columns in the same table. For example, a computed column can have the
definition: cost AS price * qty. The expression can be a noncomputed column name, constant, function,
variable, and any combination of these connected by one or more operators. The expression cannot be a
subquery or contain alias data types.
Computed columns can be used in select lists, WHERE clauses, ORDER BY clauses, or any other locations in
which regular expressions can be used, with the following exceptions:
Computed columns must be marked PERSISTED to participate in a FOREIGN KEY or CHECK
constraint.
A computed column can be used as a key column in an index or as part of any PRIMARY KEY or
UNIQUE constraint, if the computed column value is defined by a deterministic expression and the
data type of the result is allowed in index columns.
For example, if the table has integer columns a and b, the computed column a+b may be indexed, but
computed column a+DATEPART(dd, GETDATE ()) cannot be indexed because the value may
change in subsequent invocations.
A computed column cannot be the target of an INSERT or UPDATE statement.
NOTE
Each row in a table can have different values for columns that are involved in a computed column; therefore, the
computed column may not have the same value for each row.
Based on the expressions that are used, the nullability of computed columns is determined automatically by
the Database Engine. The result of most expressions is considered nullable even if only nonnullable columns
are present, because possible underflows or overflows also produce NULL results. Use the
COLUMNPROPERTY function with the AllowsNull property to investigate the nullability of any computed
column in a table. An expression that is nullable can be turned into a nonnullable one by specifying ISNULL
with the check_expression constant, where the constant is a nonnull value substituted for any NULL result.
REFERENCES permission on the type is required for computed columns based on common language
runtime (CLR ) user-defined type expressions.
PERSISTED
Specifies that the SQL Server Database Engine will physically store the computed values in the table, and
update the values when any other columns on which the computed column depends are updated. Marking a
computed column as PERSISTED lets you create an index on a computed column that is deterministic, but
not precise. For more information, see Indexes on Computed Columns. Any computed columns that are
used as partitioning columns of a partitioned table must be explicitly marked PERSISTED.
computed_column_expression must be deterministic when PERSISTED is specified.
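A minimal sketch of a persisted computed column (table and column names are illustrative):

```sql
CREATE TABLE dbo.OrderDetail (
    OrderID int NOT NULL,
    Price money NOT NULL,
    Qty int NOT NULL,
    -- Deterministic expression: physically stored, indexable, usable in constraints.
    Cost AS (Price * Qty) PERSISTED
);
```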
ON { partition_scheme | filegroup | "default" }
Specifies the partition scheme or filegroup on which the table is stored. If partition_scheme is specified, the
table is to be a partitioned table whose partitions are stored on a set of one or more filegroups specified in
partition_scheme. If filegroup is specified, the table is stored in the named filegroup. The filegroup must exist
within the database. If "default" is specified, or if ON is not specified at all, the table is stored on the default
filegroup. The storage mechanism of a table as specified in CREATE TABLE cannot be subsequently altered.
ON {partition_scheme | filegroup | "default"} can also be specified in a PRIMARY KEY or UNIQUE
constraint. These constraints create indexes. If filegroup is specified, the index is stored in the named
filegroup. If "default" is specified, or if ON is not specified at all, the index is stored in the same filegroup as
the table. If the PRIMARY KEY or UNIQUE constraint creates a clustered index, the data pages for the table
are stored in the same filegroup as the index. If CLUSTERED is specified or the constraint otherwise creates
a clustered index, and a partition_scheme is specified that differs from the partition_scheme or filegroup of
the table definition, or vice-versa, only the constraint definition will be honored, and the other will be
ignored.
NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in ON
"default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current
session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).
NOTE
After you create a partitioned table, consider setting the LOCK_ESCALATION option for the table to AUTO. This can
improve concurrency by enabling locks to escalate to partition (HoBT) level instead of the table. For more information,
see ALTER TABLE (Transact-SQL).
NOTE
Varchar(max), nvarchar(max), varbinary(max), xml and large UDT values are stored directly in the data row, up to a
limit of 8000 bytes and as long as the value can fit the record. If the value does not fit in the record, a pointer is
stored in-row and the rest is stored out of row in the LOB storage space. 0 is the default value. TEXTIMAGE_ON only
changes the location of the "LOB storage space", it does not affect when data is stored in-row. Use large value types
out of row option of sp_tableoption to store the entire LOB value out of the row.
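Forcing all LOB values out of row for a given table can be done with sp_tableoption; for example (the table name is assumed):

```sql
EXEC sp_tableoption
    @TableNamePattern = N'dbo.DocumentStore',
    @OptionName = 'large value types out of row',
    @OptionValue = 1;
```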
NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in
TEXTIMAGE_ON "default" or TEXTIMAGE_ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option must
be ON for the current session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER
(Transact-SQL).
INDEX index_name [ CLUSTERED | NONCLUSTERED ] ( column_name [ ASC | DESC ] [ ,... n ] )
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies to create an index on the table. This can be a clustered index, or a nonclustered index. The index will
contain the columns listed, and will sort the data in either ascending or descending order.
INDEX index_name CLUSTERED COLUMNSTORE
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies to store the entire table in columnar format with a clustered columnstore index. This always
includes all columns in the table. The data is not sorted in alphabetical or numeric order since the rows are
organized to gain columnstore compression benefits.
INDEX index_name [ NONCLUSTERED ] COLUMNSTORE (column_name [ ,... n ] )
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Specifies to create a nonclustered columnstore index on the table. The underlying table can be a rowstore
heap or clustered index, or it can be a clustered columnstore index. In all cases, creating a nonclustered
columnstore index on a table stores a second copy of the data for the columns in the index.
The nonclustered columnstore index is stored and managed as a clustered columnstore index. It is called a
nonclustered columnstore index because the columns can be limited and it exists as a secondary index on
a table.
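An inline nonclustered columnstore index might be declared like this sketch (names are illustrative):

```sql
CREATE TABLE dbo.SalesFact (
    SaleID int NOT NULL,
    CustomerID int NOT NULL,
    Amount money NOT NULL,
    SaleDate date NOT NULL,
    -- Secondary columnar copy of only the listed columns.
    INDEX NCCI_SalesFact NONCLUSTERED COLUMNSTORE (CustomerID, Amount, SaleDate)
);
```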
ON partition_scheme_name(column_name)
Specifies the partition scheme that defines the filegroups onto which the partitions of a partitioned index
will be mapped. The partition scheme must already exist within the database; it is created by executing either CREATE
PARTITION SCHEME or ALTER PARTITION SCHEME. column_name specifies the column against which a
partitioned index will be partitioned. This column must match the data type, length, and precision of the
argument of the partition function that partition_scheme_name is using. column_name is not restricted to
the columns in the index definition. Any column in the base table can be specified, except when partitioning
a UNIQUE index, column_name must be chosen from among those used as the unique key. This restriction
allows the Database Engine to verify uniqueness of key values within a single partition only.
NOTE
When you partition a non-unique, clustered index, the Database Engine by default adds the partitioning column to
the list of clustered index keys, if it is not already specified. When partitioning a non-unique, nonclustered index, the
Database Engine adds the partitioning column as a non-key (included) column of the index, if it is not already
specified.
If partition_scheme_name or filegroup is not specified and the table is partitioned, the index is placed in the
same partition scheme, using the same partitioning column, as the underlying table.
NOTE
You cannot specify a partitioning scheme on an XML index. If the base table is partitioned, the XML index uses the
same partition scheme as the table.
For more information about partitioning indexes, see Partitioned Tables and Indexes.
ON filegroup_name
Creates the specified index on the specified filegroup. If no location is specified and the table or view is not
partitioned, the index uses the same filegroup as the underlying table or view. The filegroup must already
exist.
ON "default"
Creates the specified index on the default filegroup.
The term default, in this context, is not a keyword. It is an identifier for the default filegroup and must be
delimited, as in ON "default" or ON [default]. If "default" is specified, the QUOTED_IDENTIFIER option
must be ON for the current session. This is the default setting. For more information, see SET
QUOTED_IDENTIFIER (Transact-SQL).
[ FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | "NULL" } ]
Applies to: SQL Server.
Specifies the placement of FILESTREAM data for the table when a clustered index is created. The
FILESTREAM_ON clause allows FILESTREAM data to be moved to a different FILESTREAM filegroup or
partition scheme.
filestream_filegroup_name is the name of a FILESTREAM filegroup. The filegroup must have one file
defined for the filegroup by using a CREATE DATABASE or ALTER DATABASE statement; otherwise, an
error is raised.
If the table is partitioned, the FILESTREAM_ON clause must be included and must specify a partition
scheme of FILESTREAM filegroups that uses the same partition function and partition columns as the
partition scheme for the table. Otherwise, an error is raised.
If the table is not partitioned, the FILESTREAM column cannot be partitioned. FILESTREAM data for the
table must be stored in a single filegroup that is specified in the FILESTREAM_ON clause.
FILESTREAM_ON NULL can be specified in a CREATE INDEX statement if a clustered index is being
created and the table does not contain a FILESTREAM column.
For more information, see FILESTREAM (SQL Server).
ROWGUIDCOL
Indicates that the new column is a row GUID column. Only one uniqueidentifier column per table can be
designated as the ROWGUIDCOL column. Applying the ROWGUIDCOL property enables the column to
be referenced using $ROWGUID. The ROWGUIDCOL property can be assigned only to a
uniqueidentifier column. User-defined data type columns cannot be designated with ROWGUIDCOL.
The ROWGUIDCOL property does not enforce uniqueness of the values stored in the column.
ROWGUIDCOL also does not automatically generate values for new rows inserted into the table. To
generate unique values for each column, either use the NEWID or NEWSEQUENTIALID function on
INSERT statements or use these functions as the default for the column.
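A minimal sketch of a ROWGUIDCOL column that receives its values from a default (table and constraint names are illustrative):

```sql
CREATE TABLE dbo.Document
(
    DocID uniqueidentifier ROWGUIDCOL NOT NULL
        CONSTRAINT DF_Document_DocID DEFAULT NEWSEQUENTIALID()
        CONSTRAINT UQ_Document_DocID UNIQUE,
    Title nvarchar(100) NOT NULL
);
-- Rows can then be addressed as: SELECT $ROWGUID FROM dbo.Document;
```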
ENCRYPTED WITH
Specifies encrypting columns by using the Always Encrypted feature.
COLUMN_ENCRYPTION_KEY = key_name
Specifies the column encryption key. For more information, see CREATE COLUMN ENCRYPTION KEY
(Transact-SQL ).
ENCRYPTION_TYPE = { DETERMINISTIC | RANDOMIZED }
Deterministic encryption uses a method which always generates the same encrypted value for any given
plain text value. Using deterministic encryption allows searching using equality comparison, grouping, and
joining tables using equality joins based on encrypted values, but can also allow unauthorized users to guess
information about encrypted values by examining patterns in the encrypted column. Joining two tables on
columns encrypted deterministically is only possible if both columns are encrypted using the same column
encryption key. Deterministic encryption must use a column collation with a binary2 sort order for character
columns.
Randomized encryption uses a method that encrypts data in a less predictable manner. Randomized
encryption is more secure, but prevents equality searches, grouping, and joining on encrypted columns.
Columns using randomized encryption cannot be indexed.
Use deterministic encryption for columns that will be search parameters or grouping parameters, for
example a government ID number. Use randomized encryption, for data such as a credit card number, which
is not grouped with other records, or used to join tables, and which is not searched for because you use
other columns (such as a transaction number) to find the row which contains the encrypted column of
interest.
Columns must be of a qualifying data type.
ALGORITHM
Must be 'AEAD_AES_256_CBC_HMAC_SHA_256'.
For more information including feature constraints, see Always Encrypted (Database Engine).
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
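A sketch of a deterministically encrypted column; the column encryption key name (MyCEK) is an assumption and must already exist, and a BIN2 collation is required for deterministic encryption of character data:

```sql
CREATE TABLE dbo.Patient
(
    PatientID int IDENTITY(1,1) PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = MyCEK,   -- assumed existing CEK
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);
```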
SPARSE
Indicates that the column is a sparse column. The storage of sparse columns is optimized for null values.
Sparse columns cannot be designated as NOT NULL. For additional restrictions and more information
about sparse columns, see Use Sparse Columns.
MASKED WITH ( FUNCTION = ' mask_function ')
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Specifies a dynamic data mask. mask_function is the name of the masking function with the appropriate
parameters. Four functions are available:
default()
email()
partial()
random()
For function parameters, see Dynamic Data Masking.
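For illustration, a table applying each kind of mask (table and column names are assumptions; see Dynamic Data Masking for the exact function parameters):

```sql
CREATE TABLE dbo.Membership
(
    MemberID int IDENTITY PRIMARY KEY,
    FirstName varchar(100) MASKED WITH (FUNCTION = 'partial(1, "XXXXXXX", 0)') NULL,
    Phone varchar(12) MASKED WITH (FUNCTION = 'default()') NULL,
    Email varchar(100) MASKED WITH (FUNCTION = 'email()') NULL,
    DiscountCode smallint MASKED WITH (FUNCTION = 'random(1, 100)') NULL
);
```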
FILESTREAM
Applies to: SQL Server.
Valid only for varbinary(max) columns. Specifies FILESTREAM storage for the varbinary(max) BLOB
data.
The table must also have a column of the uniqueidentifier data type that has the ROWGUIDCOL attribute.
This column must not allow null values and must have either a UNIQUE or PRIMARY KEY single-column
constraint. The GUID value for the column must be supplied either by an application when inserting data, or
by a DEFAULT constraint that uses the NEWID () function.
The ROWGUIDCOL column cannot be dropped and the related constraints cannot be changed while there
is a FILESTREAM column defined for the table. The ROWGUIDCOL column can be dropped only after the
last FILESTREAM column is dropped.
When the FILESTREAM storage attribute is specified for a column, all values for that column are stored in a
FILESTREAM data container on the file system.
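A sketch combining the required ROWGUIDCOL column with a FILESTREAM column (assumes the database already has a FILESTREAM filegroup; names are illustrative):

```sql
CREATE TABLE dbo.Photo
(
    PhotoID uniqueidentifier ROWGUIDCOL NOT NULL
        UNIQUE DEFAULT NEWID(),
    PhotoData varbinary(max) FILESTREAM NULL
);
```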
COLLATE collation_name
Specifies the collation for the column. Collation name can be either a Windows collation name or an SQL
collation name. collation_name is applicable only for columns of the char, varchar, text, nchar, nvarchar,
and ntext data types. If not specified, the column is assigned either the collation of the user-defined data
type, if the column is of a user-defined data type, or the default collation of the database.
For more information about the Windows and SQL collation names, see Windows Collation Name and SQL
Collation Name.
For more information about the COLLATE clause, see COLLATE (Transact-SQL).
CONSTRAINT
Is an optional keyword that indicates the start of the definition of a PRIMARY KEY, NOT NULL, UNIQUE,
FOREIGN KEY, or CHECK constraint.
constraint_name
Is the name of a constraint. Constraint names must be unique within the schema to which the table belongs.
NULL | NOT NULL
Determine whether null values are allowed in the column. NULL is not strictly a constraint but can be
specified just like NOT NULL. NOT NULL can be specified for computed columns only if PERSISTED is also
specified.
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column or columns through a unique index. Only
one PRIMARY KEY constraint can be created per table.
UNIQUE
Is a constraint that provides entity integrity for a specified column or columns through a unique index. A
table can have multiple UNIQUE constraints.
CLUSTERED | NONCLUSTERED
Indicate that a clustered or a nonclustered index is created for the PRIMARY KEY or UNIQUE constraint.
PRIMARY KEY constraints default to CLUSTERED, and UNIQUE constraints default to NONCLUSTERED.
In a CREATE TABLE statement, CLUSTERED can be specified for only one constraint. If CLUSTERED is
specified for a UNIQUE constraint and a PRIMARY KEY constraint is also specified, the PRIMARY KEY
defaults to NONCLUSTERED.
The following shows how to use NONCLUSTERED in a disk-based table:
CREATE TABLE t1 ( c1 int, INDEX ix_1 NONCLUSTERED (c1))
CREATE TABLE t2( c1 int INDEX ix_1 NONCLUSTERED (c1))
CREATE TABLE t3( c1 int, c2 int INDEX ix_1 NONCLUSTERED)
CREATE TABLE t4( c1 int, c2 int, INDEX ix_1 NONCLUSTERED (c1,c2))
IMPORTANT
We recommend that you specify NOT NULL on the partitioning column of partitioned tables, and also nonpartitioned
tables that are sources or targets of ALTER TABLE...SWITCH operations. Doing this makes sure that any CHECK
constraints on partitioning columns do not have to check for null values.
IMPORTANT
Specifying WITH FILLFACTOR = fillfactor as the only index option that applies to PRIMARY KEY or UNIQUE
constraints is maintained for backward compatibility, but will not be documented in this manner in future releases.
For example, the following clause specifies different data compression for different partitions of a table:
WITH
(
    DATA_COMPRESSION = NONE ON PARTITIONS (1),
    DATA_COMPRESSION = ROW ON PARTITIONS (2, 4, 6 TO 8),
    DATA_COMPRESSION = PAGE ON PARTITIONS (3, 5)
)
<index_option> ::=
Specifies one or more index options. For a complete description of these options, see CREATE INDEX
(Transact-SQL ).
PAD_INDEX = { ON | OFF }
When ON, the percentage of free space specified by FILLFACTOR is applied to the intermediate level pages
of the index. When OFF or a FILLFACTOR value is not specified, the intermediate level pages are filled to
near capacity leaving enough space for at least one row of the maximum size the index can have, considering
the set of keys on the intermediate pages. The default is OFF.
FILLFACTOR =fillfactor
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index
page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0.
Fill factor values 0 and 100 are the same in all respects.
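For example, a constraint can carry these index options directly (a sketch; table and constraint names are illustrative):

```sql
CREATE TABLE dbo.Lookup
(
    LookupID int NOT NULL,
    Label nvarchar(50) NOT NULL,
    CONSTRAINT PK_Lookup PRIMARY KEY CLUSTERED (LookupID)
        WITH (FILLFACTOR = 80, PAD_INDEX = ON)  -- leaf and intermediate pages left 20% free
);
```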
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique
index. The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt.
The option has no effect when executing CREATE INDEX, ALTER INDEX, or UPDATE. The default is OFF.
ON
A warning message will occur when duplicate key values are inserted into a unique index. Only the rows
violating the uniqueness constraint will fail.
OFF
An error message will occur when duplicate key values are inserted into a unique index. The entire INSERT
operation will be rolled back.
IGNORE_DUP_KEY cannot be set to ON for indexes created on a view, non-unique indexes, XML indexes,
spatial indexes, and filtered indexes.
To view IGNORE_DUP_KEY, use sys.indexes.
In backward compatible syntax, WITH IGNORE_DUP_KEY is equivalent to WITH IGNORE_DUP_KEY =
ON.
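A sketch of the option on a UNIQUE constraint (names are illustrative):

```sql
CREATE TABLE dbo.Tag
(
    TagName nvarchar(50) NOT NULL,
    CONSTRAINT UQ_Tag UNIQUE (TagName) WITH (IGNORE_DUP_KEY = ON)
);
-- An INSERT containing duplicates now raises a warning and skips only the
-- duplicate rows, instead of rolling back the entire INSERT operation.
```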
STATISTICS_NORECOMPUTE = { ON | OFF }
When ON, out-of-date index statistics are not automatically recomputed. When OFF, automatic statistics
updating is enabled. The default is OFF.
ALLOW_ROW_LOCKS = { ON | OFF }
When ON, row locks are allowed when you access the index. The Database Engine determines when row
locks are used. When OFF, row locks are not used. The default is ON.
ALLOW_PAGE_LOCKS = { ON | OFF }
When ON, page locks are allowed when you access the index. The Database Engine determines when page
locks are used. When OFF, page locks are not used. The default is ON.
FILETABLE_DIRECTORY = directory_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the Windows-compatible FileTable directory name. This name should be unique among all the
FileTable directory names in the database. Uniqueness comparison is case-insensitive, regardless of collation
settings. If this value is not specified, the name of the FileTable is used.
FILETABLE_COLLATE_FILENAME = { collation_name | database_default }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the name of the collation to be applied to the Name column in the FileTable. The collation must be
case-insensitive to comply with Windows file naming semantics. If this value is not specified, the database
default collation is used. If the database default collation is case-sensitive, an error is raised and the CREATE
TABLE operation fails.
collation_name
The name of a case-insensitive collation.
database_default
Specifies that the default collation for the database should be used. This collation must be case-insensitive.
FILETABLE_PRIMARY_KEY_CONSTRAINT_NAME = constraint_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the name to be used for the primary key constraint that is automatically created on the FileTable. If
this value is not specified, the system generates a name for the constraint.
FILETABLE_STREAMID_UNIQUE_CONSTRAINT_NAME = constraint_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the name to be used for the unique constraint that is automatically created on the stream_id
column in the FileTable. If this value is not specified, the system generates a name for the constraint.
FILETABLE_FULLPATH_UNIQUE_CONSTRAINT_NAME = constraint_name
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the name to be used for the unique constraint that is automatically created on the
parent_path_locator and name columns in the FileTable. If this value is not specified, the system generates
a name for the constraint.
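The FileTable options above are typically supplied together, as in this sketch (requires a FILESTREAM-enabled database; names are illustrative):

```sql
CREATE TABLE dbo.DocumentStore AS FILETABLE
WITH
(
    FILETABLE_DIRECTORY = 'DocumentStore',
    FILETABLE_COLLATE_FILENAME = database_default
);
```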
SYSTEM_VERSIONING = ON [ ( HISTORY_TABLE = schema_name . history_table_name [,
DATA_CONSISTENCY_CHECK = { ON | OFF } ] ) ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017 and Azure SQL Database.
Enables system versioning of the table if the datatype, nullability constraint, and primary key constraint
requirements are met. If the HISTORY_TABLE argument is not used, the system generates a new history
table matching the schema of the current table in the same filegroup as the current table, creating a link
between the two tables and enables the system to record the history of each record in the current table in
the history table. The name of this history table will be MSSQL_TemporalHistoryFor<primary_table_object_id> .
By default, the history table is PAGE compressed. If the HISTORY_TABLE argument is used to create a link
to and use an existing history table, the link is created between the current table and the specified table. If
the current table is partitioned, the history table is created on the default filegroup, because the partitioning
configuration is not replicated automatically from the current table to the history table. If the name of a
history table is specified during history table creation, you must specify the schema and table name. When
creating a link to an existing history table, you can choose to perform a data consistency check. This data
consistency check ensures that existing records do not overlap. Performing the data consistency check is the
default. Use this argument in conjunction with the PERIOD FOR SYSTEM_TIME and GENERATED
ALWAYS AS ROW { START | END } arguments to enable system versioning on a table. For more
information, see Temporal Tables.
REMOTE_DATA_ARCHIVE = { ON [ ( table_stretch_options [,...n] ) ] | OFF ( MIGRATION_STATE = PAUSED ) }
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Creates the new table with Stretch Database enabled or disabled. For more info, see Stretch Database.
Enabling Stretch Database for a table
When you enable Stretch for a table by specifying ON , you can optionally specify
MIGRATION_STATE = OUTBOUND to begin migrating data immediately, or MIGRATION_STATE = PAUSED to postpone
data migration. The default value is MIGRATION_STATE = OUTBOUND . For more info about enabling Stretch for a
table, see Enable Stretch Database for a table.
Prerequisites. Before you enable Stretch for a table, you have to enable Stretch on the server and on the
database. For more info, see Enable Stretch Database for a database.
Permissions. Enabling Stretch for a database or a table requires db_owner permissions. Enabling Stretch for
a table also requires ALTER permissions on the table.
[ FILTER_PREDICATE = { null | predicate } ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Optionally specifies a filter predicate to select rows to migrate from a table that contains both historical and
current data. The predicate must call a deterministic inline table-valued function. For more info, see Enable
Stretch Database for a table and Select rows to migrate by using a filter function.
IMPORTANT
If you provide a filter predicate that performs poorly, data migration also performs poorly. Stretch Database applies
the filter predicate to the table by using the CROSS APPLY operator.
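A sketch of the pattern (assumes Stretch is already enabled for the server and database; the function and table names are illustrative). The filter must be a deterministic inline table-valued function created WITH SCHEMABINDING:

```sql
CREATE FUNCTION dbo.fn_stretchpredicate(@col datetime2)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @col < CONVERT(datetime2, '1/1/2016', 101);
GO

CREATE TABLE dbo.Orders
(
    OrderID int NOT NULL,
    OrderDate datetime2 NOT NULL
)
WITH ( REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_stretchpredicate(OrderDate),
        MIGRATION_STATE = OUTBOUND ) );
```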
WARNING
When a table is created with DURABILITY = SCHEMA_ONLY, and READ_COMMITTED_SNAPSHOT is subsequently
changed using ALTER DATABASE, data in the table will be lost.
BUCKET_COUNT
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates the number of buckets that should be created in the hash index. The maximum value for
BUCKET_COUNT in hash indexes is 1,073,741,824. For more information about bucket counts, see Indexes
for Memory-Optimized Tables.
BUCKET_COUNT is a required argument.
INDEX
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Column and table indexes can be specified as part of the CREATE TABLE statement. For details about
adding and removing indexes on memory-optimized tables, see Altering Memory-Optimized Tables.
HASH
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates that a HASH index is created.
Hash indexes are supported only on memory-optimized tables.
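A sketch of a memory-optimized table with an inline hash primary key (assumes the database has a memory-optimized filegroup; names and the bucket count are illustrative — a common guideline is one to two times the expected number of distinct key values):

```sql
CREATE TABLE dbo.SessionState
(
    SessionID int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserID int NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```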
Remarks
For information about the number of allowed tables, columns, constraints and indexes, see Maximum
Capacity Specifications for SQL Server.
Space is generally allocated to tables and indexes in increments of one extent at a time. When the SET
MIXED_PAGE_ALLOCATION option of ALTER DATABASE is set to TRUE, or always prior to SQL Server
2016 (13.x), when a table or index is created, it is allocated pages from mixed extents until it has enough
pages to fill a uniform extent. After it has enough pages to fill a uniform extent, another extent is allocated
every time the currently allocated extents become full. For a report about the amount of space allocated and
used by a table, execute sp_spaceused.
The Database Engine does not enforce an order in which DEFAULT, IDENTITY, ROWGUIDCOL, or column
constraints are specified in a column definition.
When a table is created, the QUOTED IDENTIFIER option is always stored as ON in the metadata for the
table, even if the option is set to OFF when the table is created.
Temporary Tables
You can create local and global temporary tables. Local temporary tables are visible only in the current
session, and global temporary tables are visible to all sessions. Temporary tables cannot be partitioned.
Prefix local temporary table names with single number sign (#table_name), and prefix global temporary
table names with a double number sign (##table_name).
SQL statements reference the temporary table by using the value specified for table_name in the CREATE
TABLE statement, for example:
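A minimal illustration (table and column names are arbitrary):

```sql
CREATE TABLE #MyTempTable (cola int PRIMARY KEY);
INSERT INTO #MyTempTable VALUES (1);
SELECT * FROM #MyTempTable;   -- referenced by the name given in CREATE TABLE
DROP TABLE #MyTempTable;
```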
If more than one temporary table is created inside a single stored procedure or batch, they must have
different names.
If a local temporary table is created in a stored procedure or application that can be executed at the same
time by several users, the Database Engine must be able to distinguish the tables created by the different
users. The Database Engine does this by internally appending a numeric suffix to each local temporary table
name. The full name of a temporary table as stored in the sysobjects table in tempdb is made up of the
table name specified in the CREATE TABLE statement and the system-generated numeric suffix. To allow for
the suffix, table_name specified for a local temporary name cannot exceed 116 characters.
Temporary tables are automatically dropped when they go out of scope, unless explicitly dropped by using
DROP TABLE:
A local temporary table created in a stored procedure is dropped automatically when the stored
procedure is finished. The table can be referenced by any nested stored procedures executed by the
stored procedure that created the table. The table cannot be referenced by the process that called the
stored procedure that created the table.
All other local temporary tables are dropped automatically at the end of the current session.
Global temporary tables are automatically dropped when the session that created the table ends and
all other tasks have stopped referencing them. The association between a task and a table is
maintained only for the life of a single Transact-SQL statement. This means that a global temporary
table is dropped at the completion of the last Transact-SQL statement that was actively referencing
the table when the creating session ended.
A local temporary table created within a stored procedure or trigger can have the same name as a
temporary table that was created before the stored procedure or trigger is called. However, if a query
references a temporary table and two temporary tables with the same name exist at that time, it is not
defined which table the query is resolved against. Nested stored procedures can also create
temporary tables with the same name as a temporary table that was created by the stored procedure
that called it. However, for modifications to resolve to the table that was created in the nested
procedure, the table must have the same structure, with the same column names, as the table created
in the calling procedure. This is shown in the following example.
CREATE PROCEDURE dbo.Test2
AS
    CREATE TABLE #t(x INT PRIMARY KEY);
    INSERT INTO #t VALUES (2);
    SELECT Test2Col = x FROM #t;
GO

CREATE PROCEDURE dbo.Test1
AS
    CREATE TABLE #t(x INT PRIMARY KEY);
    INSERT INTO #t VALUES (1);
    SELECT Test1Col = x FROM #t;
    EXEC Test2;
GO

EXEC Test1;
GO

(1 row(s) affected)
Test1Col
-----------
1

(1 row(s) affected)
Test2Col
-----------
2
When you create local or global temporary tables, the CREATE TABLE syntax supports constraint definitions
except for FOREIGN KEY constraints. If a FOREIGN KEY constraint is specified in a temporary table, the
statement returns a warning message that states the constraint was skipped. The table is still created without
the FOREIGN KEY constraints. Temporary tables cannot be referenced in FOREIGN KEY constraints.
If a temporary table is created with a named constraint and the temporary table is created within the scope
of a user-defined transaction, only one user at a time can execute the statement that creates the temp table.
For example, if a stored procedure creates a temporary table with a named primary key constraint, the
stored procedure cannot be executed simultaneously by multiple users.
---Obtain object ID for global temp table ##test in tempdb (1)
SELECT OBJECT_ID('tempdb.dbo.##test') AS 'Object ID';
---Result
1253579504

---Obtain global temp table name for a given object ID 1253579504 in tempdb (2)
SELECT name FROM tempdb.sys.objects WHERE object_id = 1253579504;
---Result
##test

Session B connects to Azure SQL Database testdb1 and can access table ##test created by session A.
Session C connects to another database in Azure SQL Database, testdb2, and wants to access ##test
created in testdb1. This select fails due to the database scope for global temp tables.
Addressing a system object in Azure SQL Database tempdb from the current user database testdb1:
Partitioned Tables
Before creating a partitioned table by using CREATE TABLE, you must first create a partition function to
specify how the table becomes partitioned. A partition function is created by using CREATE PARTITION
FUNCTION. Second, you must create a partition scheme to specify the filegroups that will hold the
partitions indicated by the partition function. A partition scheme is created by using CREATE PARTITION
SCHEME. Placement of PRIMARY KEY or UNIQUE constraints to separate filegroups cannot be specified
for partitioned tables. For more information, see Partitioned Tables and Indexes.
NOTE
For memory-optimized tables, the NULLable key column is allowed.
If a primary key is defined on a CLR user-defined type column, the implementation of the type must
support binary ordering. For more information, see CLR User-Defined Types.
UNIQUE Constraints
If CLUSTERED or NONCLUSTERED is not specified for a UNIQUE constraint, NONCLUSTERED is
used by default.
Each UNIQUE constraint generates an index. The number of UNIQUE constraints cannot cause the
number of indexes on the table to exceed 999 nonclustered indexes and 1 clustered index.
If a unique constraint is defined on a CLR user-defined type column, the implementation of the type
must support binary or operator-based ordering. For more information, see CLR User-Defined Types.
DEFAULT Definitions
A column can have only one DEFAULT definition.
A DEFAULT definition can contain constant values, functions, SQL standard niladic functions, or
NULL. The following table shows the niladic functions and the values they return for the default
during an INSERT statement.
constant_expression in a DEFAULT definition cannot refer to another column in the table, or to other
tables, views, or stored procedures.
DEFAULT definitions cannot be created on columns with a timestamp data type or columns with an
IDENTITY property.
DEFAULT definitions cannot be created for columns with alias data types if the alias data type is
bound to a default object.
CHECK Constraints
A column can have any number of CHECK constraints, and the condition can include multiple logical
expressions combined with AND and OR. Multiple CHECK constraints for a column are validated in
the order they are created.
The search condition must evaluate to a Boolean expression and cannot reference another table.
A column-level CHECK constraint can reference only the constrained column, and a table-level
CHECK constraint can reference only columns in the same table.
CHECK CONSTRAINTS and rules serve the same function of validating the data during INSERT and
UPDATE statements.
When a rule and one or more CHECK constraints exist for a column or columns, all restrictions are
evaluated.
CHECK constraints cannot be defined on text, ntext, or image columns.
Alias data type: The Database Engine uses the nullability that is specified when the data type was created.
To determine the default nullability of the data type, use sp_help.
System-supplied data type: If the system-supplied data type has only one option, it takes precedence.
timestamp data types must be NOT NULL. When any session settings are set ON by using SET:
If ANSI_NULL_DFLT_ON = ON, NULL is assigned.
If ANSI_NULL_DFLT_OFF = ON, NOT NULL is assigned.
When neither of the ANSI_NULL_DFLT options is set for the session and the database is set to the default
(ANSI_NULL_DEFAULT is OFF), the default of NOT NULL is assigned.
If the column is a computed column, its nullability is always automatically determined by the Database
Engine. To find out the nullability of this type of column, use the COLUMNPROPERTY function with the
AllowsNull property.
NOTE
The SQL Server ODBC driver and Microsoft OLE DB Provider for SQL Server both default to having
ANSI_NULL_DFLT_ON set to ON. ODBC and OLE DB users can configure this in ODBC data sources, or with
connection attributes or properties set by the application.
Data Compression
System tables cannot be enabled for compression. When you are creating a table, data compression is set to
NONE, unless specified otherwise. If you specify a list of partitions or a partition that is out of range, an
error will be generated. For more information about data compression, see Data Compression.
To evaluate how changing the compression state will affect a table, an index, or a partition, use the
sp_estimate_data_compression_savings stored procedure.
Permissions
Requires CREATE TABLE permission in the database and ALTER permission on the schema in which the
table is being created.
If any columns in the CREATE TABLE statement are defined to be of a user-defined type, REFERENCES
permission on the user-defined type is required.
If any columns in the CREATE TABLE statement are defined to be of a CLR user-defined type, either
ownership of the type or REFERENCES permission on it is required.
If any columns in the CREATE TABLE statement have an XML schema collection associated with them, either
ownership of the XML schema collection or REFERENCES permission on it is required.
Any user can create temporary tables in tempdb.
Examples
A. Create a PRIMARY KEY constraint on a column
The following example shows the column definition for a PRIMARY KEY constraint with a clustered index
on the EmployeeID column of the Employee table. Because a constraint name is not specified, the system
supplies the constraint name.
You can also explicitly use the FOREIGN KEY clause and restate the column attribute. Note that the column
name does not have to be the same in both tables.
Multicolumn key constraints are created as table constraints. In the AdventureWorks2012 database, the
SpecialOfferProduct table includes a multicolumn PRIMARY KEY. The following example shows how to
reference this key from another table; an explicit constraint name is optional.
In addition to constants, DEFAULT definitions can include functions. Use the following example to get the
current date for an entry.
DEFAULT (getdate())
A niladic-function scan can also improve data integrity. To keep track of the user that inserted a row, use the
niladic-function for USER. Do not enclose the niladic-functions with parentheses.
DEFAULT USER
This example shows a named constraint with a pattern restriction on the character data entered into a
column of a table.
This example specifies that the values must be within a specific list or follow a specified pattern.
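The PartitionTable referenced in the next example can be created with a sketch like the following, which matches the boundary values and filegroups shown below (the function and scheme names are illustrative, and the filegroups are assumed to exist):

```sql
CREATE PARTITION FUNCTION myRangePF1 (int)
    AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS1
    AS PARTITION myRangePF1
    TO (test1fg, test2fg, test3fg, test4fg);
GO
CREATE TABLE PartitionTable (col1 int, col2 char(10))
    ON myRangePS1 (col1);
```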
Based on the values of column col1 of PartitionTable , the partitions are assigned in the following ways.
Partition   Filegroup   Values
1           TEST1FG     col1 <= 1
2           TEST2FG     col1 > 1 AND col1 <= 100
3           TEST3FG     col1 > 100 AND col1 <= 1000
4           TEST4FG     col1 > 1000
This example creates a table that has two sparse columns and a column set named CSet .
CREATE TABLE T1
(c1 int PRIMARY KEY,
c2 varchar(50) SPARSE NULL,
c3 int SPARSE NULL,
CSet XML COLUMN_SET FOR ALL_SPARSE_COLUMNS);
This example creates a new temporal table linked to an existing history table.
--Existing table
CREATE TABLE Department_History
(
DepartmentNumber char(10) NOT NULL,
DepartmentName varchar(50) NOT NULL,
ManagerID int NULL,
ParentDepartmentNumber char(10) NULL,
SysStartTime datetime2 NOT NULL,
SysEndTime datetime2 NOT NULL
);
--Temporal table
CREATE TABLE Department
(
DepartmentNumber char(10) NOT NULL PRIMARY KEY CLUSTERED,
DepartmentName varchar(50) NOT NULL,
ManagerID INT NULL,
ParentDepartmentNumber char(10) NULL,
SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime)
)
WITH
(SYSTEM_VERSIONING = ON
(HISTORY_TABLE = dbo.Department_History, DATA_CONSISTENCY_CHECK = ON )
);
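Once system versioning is enabled, point-in-time queries can use the FOR SYSTEM_TIME clause; a brief sketch (the timestamp is illustrative):

```sql
-- Return Department rows as they existed at a given point in time.
-- Rows whose validity period covers this instant come from either the
-- current table or dbo.Department_History, transparently.
SELECT DepartmentNumber, DepartmentName, ManagerID
FROM Department
FOR SYSTEM_TIME AS OF '2018-01-01T00:00:00'
WHERE ManagerID IS NOT NULL;
```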
This example creates a table that has a filtered index.
CREATE TABLE t1
(
c1 int,
index IX1 (c1) WHERE c1 > 0
)
GO
See Also
ALTER TABLE (Transact-SQL)
COLUMNPROPERTY (Transact-SQL)
CREATE INDEX (Transact-SQL)
CREATE VIEW (Transact-SQL)
Data Types (Transact-SQL)
DROP INDEX (Transact-SQL)
sys.dm_sql_referenced_entities (Transact-SQL)
sys.dm_sql_referencing_entities (Transact-SQL)
DROP TABLE (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
CREATE TYPE (Transact-SQL)
EVENTDATA (Transact-SQL)
sp_help (Transact-SQL)
sp_helpconstraint (Transact-SQL)
sp_rename (Transact-SQL)
sp_spaceused (Transact-SQL)
CREATE TABLE (Azure SQL Data Warehouse)
5/4/2018 • 16 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Creates a new table in SQL Data Warehouse or Parallel Data Warehouse.
To understand tables and how to use them, see Tables in SQL Data Warehouse.
NOTE: Discussions about SQL Data Warehouse in this article apply to both SQL Data Warehouse and Parallel
Data Warehouse unless otherwise noted.
Transact-SQL Syntax Conventions
Syntax
-- Create a new table.
CREATE TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
(
{ column_name <data_type> [ <column_options> ] } [ ,...n ]
)
[ WITH ( <table_option> [ ,...n ] ) ]
[;]
<column_options> ::=
[ COLLATE Windows_collation_name ]
[ NULL | NOT NULL ] -- default is NULL
[ [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
<table_option> ::=
{
CLUSTERED COLUMNSTORE INDEX --default for SQL Data Warehouse
| HEAP --default for Parallel Data Warehouse
| CLUSTERED INDEX ( { index_column_name [ ASC | DESC ] } [ ,...n ] ) -- default is ASC
}
{
DISTRIBUTION = HASH ( distribution_column_name )
| DISTRIBUTION = ROUND_ROBIN -- default for SQL Data Warehouse
| DISTRIBUTION = REPLICATE -- default for Parallel Data Warehouse
}
| PARTITION ( partition_column_name RANGE [ LEFT | RIGHT ] -- default is LEFT
FOR VALUES ( [ boundary_value [,...n] ] ) )
Arguments
database_name
The name of the database that will contain the new table. The default is the current database.
schema_name
The schema for the table. Specifying schema is optional. If blank, the default schema will be used.
table_name
The name of the new table. To create a local temporary table, precede the table name with #. For explanations and
guidance on temporary tables, see Temporary tables in Azure SQL Data Warehouse.
column_name
The name of a table column.
Column options
COLLATE Windows_collation_name
Specifies the collation for the expression. The collation must be one of the Windows collations supported by SQL
Server. For a list of Windows collations supported by SQL Server, see Windows Collation Name (Transact-SQL ).
NULL | NOT NULL
Specifies whether NULL values are allowed in the column. The default is NULL .
[ CONSTRAINT constraint_name ] DEFAULT constant_expression
Specifies the default column value.
constraint_name
The optional name for the constraint. The constraint name is unique within the database. The name can be reused in other databases.
constant_expression
The default value for the column. The expression must be a literal value or a constant. For example, these constant expressions are allowed: 'CA', 4. These are not allowed: 2+3, CURRENT_TIMESTAMP.
partition_column_name
Specifies the column that SQL Data Warehouse will use to partition the rows. This column can be any data type. SQL Data Warehouse sorts the partition column values in ascending order. The low-to-high ordering goes from LEFT to RIGHT for the purpose of the RANGE specification.
RANGE LEFT
Specifies that the boundary value belongs to the partition on the left (lower values). The default is LEFT.
RANGE RIGHT
Specifies that the boundary value belongs to the partition on the right (higher values).
FOR VALUES ( boundary_value [,...n] )
Specifies the boundary values for the partition. boundary_value is a constant expression. It cannot be NULL. It must either match or be implicitly convertible to the data type of partition_column_name. It cannot be truncated during implicit conversion, in cases where the size and scale of the value do not match the data type of partition_column_name.
If you specify one boundary value, the resulting table has two partitions: one for the values lower than the boundary value and one for the values higher than the boundary value. Note that if you move a partition into a non-partitioned table, the non-partitioned table will receive the data, but will not have the partition boundaries in its metadata.
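Putting the distribution and partition options together, a hedged sketch of a hash-distributed, partitioned table (table, column names, and boundary values are assumptions, not from the original):

```sql
CREATE TABLE dbo.FactSales
(
    SaleKey      int   NOT NULL,
    ProductKey   int   NOT NULL,
    OrderDateKey int   NOT NULL,
    Amount       money NOT NULL
)
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH (ProductKey),
    -- One boundary value yields two partitions; RANGE RIGHT puts the
    -- boundary value itself into the right-hand (higher) partition.
    PARTITION ( OrderDateKey RANGE RIGHT FOR VALUES (20180101) )
);
```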
datetime2 [(n)]
Stores a date and time of day with fractional-seconds precision n. The character length for each value of n is:

n   Length (characters)   Fractional seconds digits
0   19                    0
1   21                    1
2   22                    2
3   23                    3
4   24                    4
5   25                    5
6   26                    6
7   27                    7
datetime
Stores date and time of day with 19 to 23 characters according to the Gregorian calendar. The date can contain
year, month, and day. The time contains hour, minutes, and seconds. As an option, you can display three digits for
fractional seconds. The storage size is 8 bytes.
smalldatetime
Stores a date and a time. Storage size is 4 bytes.
date
Stores a date using a maximum of 10 characters for year, month, and day according to the Gregorian calendar. The
storage size is 3 bytes. Date is stored as an integer.
time [(n)]
The default value for n is 7 .
float [(n)]
Approximate number data type for use with floating point numeric data. Floating point data is approximate, which
means that not all values in the data type range can be represented exactly. n specifies the number of bits used to
store the mantissa of the float in scientific notation. Therefore, n dictates the precision and storage size. If n is
specified, it must be a value between 1 and 53 . The default value of n is 53 .
SQL Data Warehouse treats n as one of two possible values. If 1 <= n <= 24 , n is treated as 24 . If 25 <= n <=
53 , n is treated as 53 .
The SQL Data Warehouse float data type complies with the ISO standard for all values of n from 1 through
53 . The synonym for double precision is float(53) .
real [(n)]
The definition of real is the same as float. The ISO synonym for real is float(24) .
decimal [ ( precision [ , scale ] ) ] | numeric [ ( precision [ , scale ] ) ]
Stores fixed precision and scale numbers.
precision
The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal
point. The precision must be a value from 1 through the maximum precision of 38 . The default precision is 18 .
scale
The maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value
from 0 through precision. You can only specify scale if precision is specified. The default scale is 0 ; therefore, 0
<= scale <= precision. Maximum storage sizes vary, based on the precision.
Precision   Storage (bytes)
1-9         5
10-19       9
20-28       13
29-38       17
money | smallmoney
Data types that represent currency values.
Data type    Storage (bytes)
money        8
smallmoney   4
bigint       8
int          4
smallint     2
tinyint      1
bit
An integer data type that can take the value of 1, 0, or NULL. SQL Data Warehouse optimizes storage of bit
columns. If there are 8 or fewer bit columns in a table, the columns are stored as 1 byte. If there are from 9-16 bit
columns, the columns are stored as 2 bytes, and so on.
nvarchar [ ( n | max ) ] -- max applies only to SQL Data Warehouse.
Variable-length Unicode character data. n can be a value from 1 through 4000. max indicates that the maximum
storage size is 2^31-1 bytes (2 GB ). Storage size in bytes is two times the number of characters entered + 2 bytes.
The data entered can be 0 characters in length.
nchar [(n)]
Fixed-length Unicode character data with a length of n characters. n must be a value from 1 through 4000 . The
storage size is two times n bytes.
varchar [ ( n | max ) ] -- max applies only to SQL Data Warehouse.
Variable-length, non-Unicode character data with a length of n bytes. n must be a value from 1 to 8000. max
indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of data
entered + 2 bytes.
char [(n)]
Fixed-length, non-Unicode character data with a length of n bytes. n must be a value from 1 to 8000 . The
storage size is n bytes. The default for n is 1 .
varbinary [ ( n | max ) ] -- max applies only to SQL Data Warehouse.
Variable-length binary data. n can be a value from 1 to 8000 . max indicates that the maximum storage size is
2^31-1 bytes (2 GB ). The storage size is the actual length of data entered + 2 bytes. The default value for n is 7.
binary [(n)]
Fixed-length binary data with a length of n bytes. n can be a value from 1 to 8000 . The storage size is n bytes.
The default value for n is 7 .
uniqueidentifier
Is a 16-byte GUID.
Permissions
Creating a table requires permission in the db_ddladmin fixed database role, or:
CREATE TABLE permission on the database
ALTER SCHEMA permission on the schema that will contain the table.
Creating a partitioned table requires permission in the db_ddladmin fixed database role, or
ALTER ANY DATASPACE permission
The login that creates a local temporary table receives CONTROL , INSERT , SELECT , and UPDATE permissions
on the table.
General Remarks
For minimum and maximum limits, see SQL Data Warehouse capacity limits.
Determining the number of table partitions
Each user-defined table is divided into multiple smaller tables which are stored in separate locations called
distributions. SQL Data Warehouse uses 60 distributions. In Parallel Data Warehouse, the number of distributions
depends on the number of Compute nodes.
Each distribution contains all table partitions. For example, if there are 60 distributions and four table partitions,
there will be 240 partitions. If the table is a clustered columnstore index, there will be one columnstore index per
partition, which means you will have 240 columnstore indexes.
We recommend using fewer table partitions to ensure each columnstore index has enough rows to take
advantage of the benefits of columnstore indexes. For further guidance, see Partitioning tables in SQL Data
Warehouse and Indexing tables in SQL Data Warehouse
Rowstore table (heap or clustered index)
A rowstore table is a table stored in row-by-row order. It is a heap or clustered index. SQL Data Warehouse
creates all rowstore tables with page compression; this is not user-configurable.
Columnstore table (columnstore index)
A columnstore table is a table stored in column-by-column order. The columnstore index is the technology that
manages data stored in a columnstore table. The clustered columnstore index does not affect how data are
distributed; it affects how the data are stored within each distribution.
To change a rowstore table to a columnstore table, drop all existing indexes on the table and create a clustered
columnstore index. For an example, see CREATE COLUMNSTORE INDEX (Transact-SQL ).
For more information, see these articles:
Columnstore indexes versioned feature summary
Indexing tables in SQL Data Warehouse
Columnstore Indexes Guide
If boundary_value is a literal value that must be implicitly converted to the data type in partition_column_name, a
discrepancy will occur. The literal value is displayed through the SQL Data Warehouse system views, but the
converted value is used for Transact-SQL operations.
Temporary tables
Global temporary tables that begin with ## are not supported.
Local temporary tables have the following limitations and restrictions:
They are visible only to the current session. SQL Data Warehouse drops them automatically at the end of the
session. To drop them explicitly, use the DROP TABLE statement.
They cannot be renamed.
They cannot have partitions or views.
Their permissions cannot be changed. GRANT , DENY , and REVOKE statements cannot be used with local
temporary tables.
Database console commands are blocked for temporary tables.
If more than one local temporary table is used within a batch, each must have a unique name. If multiple
sessions are running the same batch and creating the same local temporary table, SQL Data Warehouse
internally appends a numeric suffix to the local temporary table name to maintain a unique name for each local
temporary table.
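A hedged sketch of creating and explicitly dropping a local temporary table in SQL Data Warehouse (the table definition is illustrative):

```sql
-- Local temporary table: visible only to the current session.
CREATE TABLE #session_stats
(
    stat_name  nvarchar(100) NOT NULL,
    stat_value bigint        NOT NULL
)
WITH ( HEAP, DISTRIBUTION = ROUND_ROBIN );

-- Drop explicitly rather than waiting for the session to end.
DROP TABLE #session_stats;
```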
Locking behavior
Takes an exclusive lock on the table. Takes a shared lock on the DATABASE, SCHEMA, and
SCHEMARESOLUTION objects.
See also
CREATE TABLE AS SELECT (Azure SQL Data Warehouse)
DROP TABLE (Transact-SQL)
ALTER TABLE (Transact-SQL)
CREATE TABLE (SQL Graph)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new SQL graph table as either a NODE or an EDGE table.
NOTE
For standard Transact-SQL statements, see CREATE TABLE (Transact-SQL).
Syntax
CREATE TABLE
[ database_name . [ schema_name ] . | schema_name . ] table_name
( { <column_definition> } [ ,...n ] )
AS [ NODE | EDGE ]
[ ; ]
Arguments
This document lists only arguments pertaining to SQL graph. For a full list and description of supported
arguments, see CREATE TABLE (Transact-SQL )
database_name
Is the name of the database in which the table is created. database_name must specify the name of an existing
database. If not specified, database_name defaults to the current database. The login for the current connection
must be associated with an existing user ID in the database specified by database_name, and that user ID must
have CREATE TABLE permissions.
schema_name
Is the name of the schema to which the new table belongs.
table_name
Is the name of the node or edge table. Table names must follow the rules for identifiers. table_name can be a
maximum of 128 characters, except for local temporary table names (names prefixed with a single number sign (#))
that cannot exceed 116 characters.
NODE
Creates a node table.
EDGE
Creates an edge table.
Remarks
Creating a temporary table as node or edge table is not supported.
Creating a node or edge table as a temporal table is not supported.
Stretch database is not supported for node or edge table.
Node or edge tables cannot be external tables (no polybase support for graph tables).
Examples
A. Create a NODE table
The following examples show how to create a NODE table and an EDGE table.
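The NODE table statement is missing from this extract; a minimal sketch (the table and column names are assumptions) would be:

```sql
-- Create a Person node table with two user-defined columns.
CREATE TABLE Person
(
    ID   integer      PRIMARY KEY,
    name varchar(100)
)
AS NODE;
```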
-- Create a likes edge table, this table does not have any user defined attributes
CREATE TABLE likes AS EDGE;
See Also
ALTER TABLE (Transact-SQL)
INSERT (SQL Graph)
Graph processing with SQL Server 2017
CREATE TABLE AS SELECT (Azure SQL Data
Warehouse)
5/4/2018 • 19 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
CREATE TABLE AS SELECT (CTAS) is one of the most important T-SQL features available. It is a fully parallelized
operation that creates a new table based on the output of a SELECT statement. CTAS is the simplest and fastest
way to create a copy of a table.
For example, use CTAS to:
Re-create a table with a different hash distribution column.
Re-create a table as replicated.
Create a columnstore index on just some of the columns in the table.
Query or import external data.
NOTE
Since CTAS adds to the capabilities of creating a table, this topic tries not to repeat the CREATE TABLE topic. Instead, it
describes the differences between the CTAS and CREATE TABLE statements. For the CREATE TABLE details, see CREATE TABLE
(Azure SQL Data Warehouse) statement.
Syntax
CREATE TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
[ ( column_name [ ,...n ] ) ]
WITH (
<distribution_option> -- required
[ , <table_option> [ ,...n ] ]
)
AS <select_statement>
[;]
<distribution_option> ::=
{
DISTRIBUTION = HASH ( distribution_column_name )
| DISTRIBUTION = ROUND_ROBIN
| DISTRIBUTION = REPLICATE
}
<table_option> ::=
{
CLUSTERED COLUMNSTORE INDEX --default for SQL Data Warehouse
| HEAP --default for Parallel Data Warehouse
| CLUSTERED INDEX ( { index_column_name [ ASC | DESC ] } [ ,...n ] ) --default is ASC
}
| PARTITION ( partition_column_name RANGE [ LEFT | RIGHT ] --default is LEFT
FOR VALUES ( [ boundary_value [,...n] ] ) )
<select_statement> ::=
[ WITH <common_table_expression> [ ,...n ] ]
SELECT select_criteria
Arguments
For details, see the Arguments section in CREATE TABLE.
Column options
column_name [ ,... n ]
Column names do not allow the column options mentioned in CREATE TABLE. Instead, you can provide an
optional list of one or more column names for the new table. The columns in the new table will use the names you
specify. When you specify column names, the number of columns in the column list must match the number of
columns in the select results. If you don't specify any column names, the new target table will use the column
names in the select statement results.
You cannot specify any other column options such as data types, collation, or nullability. Each of these attributes is
derived from the results of the SELECT statement. However, you can use the SELECT statement to change the
attributes. For an example, see Use CTAS to change column attributes.
Table distribution options
DISTRIBUTION = HASH ( distribution_column_name ) | ROUND_ROBIN | REPLICATE
The CTAS statement requires a distribution option and does not have default values. This is different from
CREATE TABLE which has defaults.
For details and to understand how to choose the best distribution column, see the Table distribution options
section in CREATE TABLE.
Table partition options
The CTAS statement creates a non-partitioned table by default, even if the source table is partitioned. To create a
partitioned table with the CTAS statement, you must specify the partition option.
For details, see the Table partition options section in CREATE TABLE.
Select options
The select statement is the fundamental difference between CTAS and CREATE TABLE.
WITH common_table_expression
Specifies a temporary named result set, known as a common table expression (CTE ). For more information, see
WITH common_table_expression (Transact-SQL ).
SELECT select_criteria
Populates the new table with the results from a SELECT statement. select_criteria is the body of the SELECT
statement that determines which data to copy to the new table. For information about SELECT statements, see
SELECT (Transact-SQL ).
Permissions
CTAS requires SELECT permission on any objects referenced in the select_criteria.
For permissions to create a table, see Permissions in CREATE TABLE.
General Remarks
For details, see General Remarks in CREATE TABLE.
Locking Behavior
For details, see Locking Behavior in CREATE TABLE.
Performance
For a hash-distributed table, you can use CTAS to choose a different distribution column to achieve better
performance for joins and aggregations. If choosing a different distribution column is not your goal, you will have
the best CTAS performance if you specify the same distribution column since this will avoid re-distributing the
rows.
If you are using CTAS to create a table and performance is not a factor, you can specify ROUND_ROBIN to avoid
having to decide on a distribution column.
To avoid data movement in subsequent queries, you can specify REPLICATE at the cost of increased storage for
loading a full copy of the table on each Compute node.
Now you want to create a new copy of this table with a clustered columnstore index so that you can take
advantage of the performance of clustered columnstore tables. You also want to distribute this table on
ProductKey since you are anticipating joins on this column and want to avoid data movement during joins on
ProductKey. Lastly you also want to add partitioning on OrderDateKey so that you can quickly delete old data by
dropping old partitions. Here is the CTAS statement which would copy your old table into a new table.
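The CTAS statement itself is not shown in this extract; a hedged reconstruction matching the description above (the new table name and boundary values are assumptions) is:

```sql
CREATE TABLE FactInternetSales_new
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH (ProductKey),
    -- Partition on OrderDateKey so old data can be removed by
    -- dropping (or switching out) old partitions.
    PARTITION ( OrderDateKey RANGE RIGHT
                FOR VALUES (20000101, 20010101, 20020101) )
)
AS SELECT * FROM FactInternetSales;
```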
Finally you can rename your tables to swap in your new table and then drop your old table.
RENAME OBJECT FactInternetSales TO FactInternetSales_old;
RENAME OBJECT FactInternetSales_new TO FactInternetSales;
-- Original table
CREATE TABLE [dbo].[DimCustomer2] (
[CustomerKey] int NOT NULL,
[GeographyKey] int NULL,
[CustomerAlternateKey] nvarchar(15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
)
WITH (CLUSTERED COLUMNSTORE INDEX, DISTRIBUTION = HASH([CustomerKey]));
-- Resulting table
CREATE TABLE [dbo].[test] (
[CustomerKeyNoChange] int NOT NULL,
[CustomerKeyChangeNullable] int NULL,
[CustomerKeyChangeDataTypeNullable] decimal(10, 2) NULL,
[CustomerKeyChangeDataTypeNotNullable] decimal(10, 2) NOT NULL,
[GeographyKeyNoChange] int NULL,
[GeographyKeyChangeNotNullable] int NOT NULL,
[CustomerAlternateKeyNoChange] nvarchar(15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[CustomerAlternateKeyNullable] nvarchar(15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[CustomerAlternateKeyChangeCollation] nvarchar(15) COLLATE Latin1_General_CS_AS_KS_WS NOT NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN);
As a final step, you can use RENAME (Transact-SQL) to switch the table names, making DimCustomer2 the new
table.
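A hedged sketch of that final rename step:

```sql
-- Retire the original table, then promote the rebuilt one.
RENAME OBJECT DimCustomer2 TO DimCustomer2_old;
RENAME OBJECT [dbo].[test] TO DimCustomer2;
```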
-- DimSalesTerritory is hash-distributed.
-- Copy it to a round-robin table.
CREATE TABLE [dbo].[myTable]
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = ROUND_ROBIN
)
AS SELECT * FROM [dbo].[DimSalesTerritory];
--Use your own processes to create the text-delimited files on Azure blob storage.
--Create the external table called ClickStream.
CREATE EXTERNAL TABLE ClickStreamExt (
url varchar(50),
event_date date,
user_IP varchar(50)
)
WITH (
LOCATION='/logs/clickstream/2015/',
DATA_SOURCE = MyAzureStorage,
FILE_FORMAT = TextFileFormat)
;
--Use CREATE TABLE AS SELECT to import the Azure blob storage data into a new
--SQL Data Warehouse table called ClickStreamData
CREATE TABLE ClickStreamData
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = HASH (user_IP)
)
AS SELECT * FROM ClickStreamExt
;
-- Use CREATE TABLE AS SELECT to import the Hadoop data into a new
-- table called ClickStreamPDW
CREATE TABLE ClickStreamPDW
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = HASH (user_IP)
)
AS SELECT * FROM ClickStreamExt
;
NOTE
Try to think "CTAS first". If you think you can solve a problem using CTAS, that is generally the best way to approach it,
even if you are writing more data as a result.
SELECT *
INTO #tmp_fct
FROM [dbo].[FactInternetSales]
This syntax is not supported in SQL Data Warehouse and Parallel Data Warehouse. This example shows how to
rewrite the previous SELECT..INTO statement as a CTAS statement. You can choose any of the DISTRIBUTION
options described in the CTAS syntax. This example uses the ROUND_ROBIN distribution method.
CREATE TABLE #tmp_fct
WITH
(
DISTRIBUTION = ROUND_ROBIN
)
AS
SELECT *
FROM [dbo].[FactInternetSales]
;
J. Use CTAS and implicit joins to replace ANSI joins in the FROM clause of an UPDATE statement
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
You may find you have a complex update that joins more than two tables together using ANSI joining syntax to
perform the UPDATE or DELETE.
Imagine you had to update this table:
UPDATE acs
SET [TotalSalesAmount] = [fis].[TotalSalesAmount]
FROM [dbo].[AnnualCategorySales] AS acs
JOIN (
SELECT [EnglishProductCategoryName]
, [CalendarYear]
, SUM([SalesAmount]) AS [TotalSalesAmount]
FROM [dbo].[FactInternetSales] AS s
JOIN [dbo].[DimDate] AS d ON s.[OrderDateKey] = d.[DateKey]
JOIN [dbo].[DimProduct] AS p ON s.[ProductKey] = p.[ProductKey]
JOIN [dbo].[DimProductSubCategory] AS u ON p.[ProductSubcategoryKey] = u.[ProductSubcategoryKey]
JOIN [dbo].[DimProductCategory] AS c ON u.[ProductCategoryKey] = c.[ProductCategoryKey]
WHERE [CalendarYear] = 2004
GROUP BY
[EnglishProductCategoryName]
, [CalendarYear]
) AS fis
ON [acs].[EnglishProductCategoryName] = [fis].[EnglishProductCategoryName]
AND [acs].[CalendarYear] = [fis].[CalendarYear]
;
Since SQL Data Warehouse does not support ANSI joins in the FROM clause of an UPDATE statement, you cannot
port this SQL Server code over without changing it slightly.
You can use a combination of a CTAS and an implicit join to replace this code:
-- Create an interim table
CREATE TABLE CTAS_acs
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT ISNULL(CAST([EnglishProductCategoryName] AS NVARCHAR(50)),0) AS [EnglishProductCategoryName]
, ISNULL(CAST([CalendarYear] AS SMALLINT),0) AS [CalendarYear]
, ISNULL(CAST(SUM([SalesAmount]) AS MONEY),0) AS [TotalSalesAmount]
FROM [dbo].[FactInternetSales] AS s
JOIN [dbo].[DimDate] AS d ON s.[OrderDateKey] = d.[DateKey]
JOIN [dbo].[DimProduct] AS p ON s.[ProductKey] = p.[ProductKey]
JOIN [dbo].[DimProductSubCategory] AS u ON p.[ProductSubcategoryKey] = u.[ProductSubcategoryKey]
JOIN [dbo].[DimProductCategory] AS c ON u.[ProductCategoryKey] = c.[ProductCategoryKey]
WHERE [CalendarYear] = 2004
GROUP BY
[EnglishProductCategoryName]
, [CalendarYear]
;
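The interim table can then drive the update through an implicit join; a hedged sketch of the remaining steps (not shown in this extract):

```sql
-- Update the target directly from the pre-aggregated interim table.
UPDATE [dbo].[AnnualCategorySales]
SET [TotalSalesAmount] = [fis].[TotalSalesAmount]
FROM CTAS_acs AS [fis]
WHERE [AnnualCategorySales].[EnglishProductCategoryName] = [fis].[EnglishProductCategoryName]
  AND [AnnualCategorySales].[CalendarYear] = [fis].[CalendarYear];

-- Clean up the interim table when done.
DROP TABLE CTAS_acs;
```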
K. Use CTAS to specify which data to keep instead of using ANSI joins in the FROM clause of a DELETE
statement
Applies to: Azure SQL Data Warehouse and Parallel Data Warehouse
Sometimes the best approach for deleting data is to use CTAS. Rather than deleting the data, simply select the
data you want to keep. This is especially true for DELETE statements that use ANSI join syntax, since SQL Data
Warehouse does not support ANSI joins in the FROM clause of a DELETE statement.
An example of a converted DELETE statement is available below:
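The converted statement is not included in this extract; a hedged sketch of the keep-the-rows pattern (the table name and predicate are assumptions) is:

```sql
-- Instead of DELETE ... WHERE SalesTerritoryCountry = 'Canada',
-- select the rows you want to keep into a new table.
CREATE TABLE dbo.DimSalesTerritory_keep
WITH ( DISTRIBUTION = ROUND_ROBIN )
AS
SELECT *
FROM dbo.DimSalesTerritory
WHERE [SalesTerritoryCountry] <> 'Canada';

-- Swap the new table in place of the old one.
RENAME OBJECT dbo.DimSalesTerritory TO DimSalesTerritory_old;
RENAME OBJECT dbo.DimSalesTerritory_keep TO DimSalesTerritory;
```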
Instinctively you might think you should migrate this code to a CTAS and you would be correct. However, there is
a hidden issue here.
The following code does NOT yield the same result:
SELECT result,result*@d
from result
;
SELECT result,result*@d
from ctas_r
;
The value stored for result is different. As the persisted value in the result column is used in other expressions the
error becomes even more significant.
This is particularly important for data migrations. Even though the second query is arguably more accurate there
is a problem. The data would be different compared to the source system and that leads to questions of integrity
in the migration. This is one of those rare cases where the "wrong" answer is actually the right one!
The reason we see this disparity between the two results is down to implicit type casting. In the first example the
table defines the column definition. When the row is inserted an implicit type conversion occurs. In the second
example there is no implicit type conversion because the expression defines the data type of the column. Notice also
that the column in the second example has been defined as a NULLable column, whereas in the first example it has
not. When the table was created in the first example, column nullability was explicitly defined. In the second
example it was left to the expression, which by default results in a NULLable definition.
To resolve these issues you must explicitly set the type conversion and nullability in the SELECT portion of the
CTAS statement. You cannot set these properties in the create table part.
This tip is not just useful for ensuring the integrity of your calculations. It is also important for table partition
switching. Imagine you have this table defined as your fact:
However, the value field is a calculated expression; it is not part of the source data.
To create your partitioned dataset you might want to do this:
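The statement referred to here is missing from the extract; a hedged reconstruction, computing [amount] directly in the SELECT without an explicit cast or ISNULL (which is exactly what causes the mismatch discussed next), would be:

```sql
CREATE TABLE [dbo].[Sales_in]
WITH
( DISTRIBUTION = HASH([product])
, PARTITION ( [date] RANGE RIGHT FOR VALUES (20000101, 20010101) )
)
AS
SELECT
      [date]
    , [product]
    , [store]
    , [quantity]
    , [price]
    , [quantity]*[price] AS [amount]  -- nullable; type set by the expression
FROM [stg].[source];
```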
The query would run perfectly fine. The problem comes when you try to perform the partition switch. The table
definitions do not match. To make the table definitions match the CTAS needs to be modified.
CREATE TABLE [dbo].[Sales_in]
WITH
( DISTRIBUTION = HASH([product])
, PARTITION ( [date] RANGE RIGHT FOR VALUES
(20000101,20010101
)
)
)
AS
SELECT
[date]
, [product]
, [store]
, [quantity]
, [price]
, ISNULL(CAST([quantity]*[price] AS MONEY),0) AS [amount]
FROM [stg].[source]
OPTION (LABEL = 'CTAS : Partition IN table : Create');
You can see, therefore, that maintaining type consistency and nullability properties in a CTAS is good
engineering practice. It helps maintain integrity in your calculations and also ensures that partition
switching is possible.
See Also
CREATE EXTERNAL DATA SOURCE (Transact-SQL)
CREATE EXTERNAL FILE FORMAT (Transact-SQL)
CREATE EXTERNAL TABLE (Transact-SQL)
CREATE EXTERNAL TABLE AS SELECT (Transact-SQL)
CREATE TABLE (Azure SQL Data Warehouse)
DROP TABLE (Transact-SQL)
DROP EXTERNAL TABLE (Transact-SQL)
ALTER TABLE (Transact-SQL)
ALTER EXTERNAL TABLE (Transact-SQL)
CREATE TABLE (Transact-SQL) IDENTITY (Property)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an identity column in a table. This property is used with the CREATE TABLE and ALTER TABLE Transact-
SQL statements.
NOTE
The IDENTITY property is different from the SQL-DMO Identity property that exposes the row identity property of a
column.
Syntax
IDENTITY [ (seed , increment) ]
Arguments
seed
Is the value that is used for the very first row loaded into the table.
increment
Is the incremental value that is added to the identity value of the previous row that was loaded.
You must specify both the seed and increment or neither. If neither is specified, the default is (1,1).
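As a brief illustration of seed and increment (table and column names are assumptions):

```sql
-- Seed 100, increment 5: successive rows receive 100, 105, 110, ...
CREATE TABLE dbo.Orders
(
    OrderID int IDENTITY (100, 5) NOT NULL,
    Item    varchar(50) NOT NULL
);
```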
Remarks
Identity columns can be used for generating key values. The identity property on a column guarantees the
following:
Each new value is generated based on the current seed and increment.
Each new value for a particular transaction is different from other concurrent transactions on the table.
The identity property on a column does not guarantee the following:
Uniqueness of the value – Uniqueness must be enforced by using a PRIMARY KEY or UNIQUE
constraint or UNIQUE index.
Consecutive values within a transaction – A transaction inserting multiple rows is not guaranteed to get
consecutive values for the rows because other concurrent inserts might occur on the table. If values must
be consecutive then the transaction should use an exclusive lock on the table or use the SERIALIZABLE
isolation level.
Consecutive values after server restart or other failures – SQL Server might cache identity values for
performance reasons and some of the assigned values can be lost during a database failure or server
restart. This can result in gaps in the identity value upon insert. If gaps are not acceptable then the
application should use its own mechanism to generate key values. Using a sequence generator with the
NOCACHE option can limit the gaps to transactions that are never committed.
Reuse of values – For a given identity property with specific seed/increment, the identity values are not
reused by the engine. If a particular insert statement fails or if the insert statement is rolled back then the
consumed identity values are lost and will not be generated again. This can result in gaps when the
subsequent identity values are generated.
These restrictions are part of the design in order to improve performance, and because they are acceptable
in many common situations. If you cannot use identity values because of these restrictions, create a
separate table holding a current value and manage access to the table and number assignment with your
application.
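As a hedged sketch of the sequence-generator alternative mentioned above (note that in CREATE SEQUENCE syntax the option is written NO CACHE; the sequence name is an assumption):

```sql
CREATE SEQUENCE dbo.KeySeq
    AS bigint
    START WITH 1
    INCREMENT BY 1
    NO CACHE;  -- limits gaps to transactions that never commit

-- Each call draws the next value from the sequence.
SELECT NEXT VALUE FOR dbo.KeySeq;
```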
If a table with an identity column is published for replication, the identity column must be managed in a
way that is appropriate for the type of replication used. For more information, see Replicate Identity
Columns.
Only one identity column can be created per table.
In memory-optimized tables the seed and increment must be set to 1,1. Setting the seed or increment to a
value other than 1 results in the following error: The use of seed and increment values other than 1 is not
supported with memory optimized tables.
Examples
A. Using the IDENTITY property with CREATE TABLE
The following example creates a new table using the IDENTITY property for an automatically incrementing
identification number.
USE AdventureWorks2012;
GO
CREATE TABLE new_employees
(
    id_num int IDENTITY(1,1),
    fname varchar(20),
    minit char(1),
    lname varchar(30)
);
INSERT new_employees
    (fname, minit, lname)
VALUES
    ('Karin', 'F', 'Josephs');
INSERT new_employees
    (fname, minit, lname)
VALUES
    ('Pirkko', 'O', 'Koskitalo');
-- Here is the generic syntax for finding identity value gaps in data.
-- The illustrative example starts here.
SET IDENTITY_INSERT tablename ON;
DECLARE @minidentval column_type;
DECLARE @maxidentval column_type;
DECLARE @nextidentval column_type;
SELECT @minidentval = MIN($IDENTITY), @maxidentval = MAX($IDENTITY)
    FROM tablename;
IF @minidentval = IDENT_SEED('tablename')
    SELECT @nextidentval = MIN($IDENTITY) + IDENT_INCR('tablename')
    FROM tablename t1
    WHERE $IDENTITY BETWEEN IDENT_SEED('tablename') AND @maxidentval
        AND NOT EXISTS (SELECT * FROM tablename t2
                        WHERE t2.$IDENTITY = t1.$IDENTITY + IDENT_INCR('tablename'))
ELSE
    SELECT @nextidentval = IDENT_SEED('tablename');
SET IDENTITY_INSERT tablename OFF;
-- Here is an example to find gaps in the actual data.
-- The table is called img and has two columns: the first column
-- called id_num, which is an increasing identification number, and the
-- second column called company_name.
-- This is the end of the illustration example.
See Also
ALTER TABLE (Transact-SQL)
CREATE TABLE (Transact-SQL)
DBCC CHECKIDENT (Transact-SQL)
IDENT_INCR (Transact-SQL)
@@IDENTITY (Transact-SQL)
IDENTITY (Function) (Transact-SQL)
IDENT_SEED (Transact-SQL)
SELECT (Transact-SQL)
SET IDENTITY_INSERT (Transact-SQL)
Replicate Identity Columns
CREATE TRIGGER (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Creates a DML, DDL, or logon trigger. A trigger is a special type of stored procedure that automatically executes
when an event occurs in the database server. DML triggers execute when a user tries to modify data through a
data manipulation language (DML ) event. DML events are INSERT, UPDATE, or DELETE statements on a table
or view. These triggers fire when any valid event is fired, regardless of whether or not any table rows are affected.
For more information, see DML Triggers.
DDL triggers execute in response to a variety of data definition language (DDL ) events. These events primarily
correspond to Transact-SQL CREATE, ALTER, and DROP statements, and certain system stored procedures that
perform DDL -like operations. Logon triggers fire in response to the LOGON event that is raised when a user's
session is being established. Triggers can be created directly from Transact-SQL statements or from methods of
assemblies that are created in the Microsoft .NET Framework common language runtime (CLR ) and uploaded to
an instance of SQL Server. SQL Server allows for creating multiple triggers for any specific statement.
IMPORTANT
Malicious code inside triggers can run under escalated privileges. For more information on how to mitigate this threat, see
Manage Trigger Security.
NOTE
The integration of .NET Framework CLR into SQL Server is discussed in this topic. CLR integration does not apply to Azure
SQL Database.
Syntax
-- SQL Server Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger)
<dml_trigger_option> ::=
[ ENCRYPTION ]
[ EXECUTE AS Clause ]
<method_specifier> ::=
assembly_name.class_name.method_name
-- SQL Server Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a
-- table (DML Trigger on memory-optimized tables)
<dml_trigger_option> ::=
[ NATIVE_COMPILATION ]
[ SCHEMABINDING ]
[ EXECUTE AS Clause ]
<ddl_trigger_option> ::=
[ ENCRYPTION ]
[ EXECUTE AS Clause ]
<logon_trigger_option> ::=
[ ENCRYPTION ]
[ EXECUTE AS Clause ]
Syntax
-- Windows Azure SQL Database Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger)
<dml_trigger_option> ::=
[ EXECUTE AS Clause ]
-- Windows Azure SQL Database Syntax
-- Trigger on a CREATE, ALTER, DROP, GRANT, DENY,
-- REVOKE, or UPDATE STATISTICS statement (DDL Trigger)
<ddl_trigger_option> ::=
[ EXECUTE AS Clause ]
Arguments
OR ALTER
Applies to: Azure SQL Database, SQL Server (starting with SQL Server 2016 (13.x) SP1).
Conditionally alters the trigger only if it already exists.
schema_name
Is the name of the schema to which a DML trigger belongs. DML triggers are scoped to the schema of the table
or view on which they are created. schema_name cannot be specified for DDL or logon triggers.
trigger_name
Is the name of the trigger. A trigger_name must comply with the rules for identifiers, except that trigger_name
cannot start with # or ##.
table | view
Is the table or view on which the DML trigger is executed and is sometimes referred to as the trigger table or
trigger view. Specifying the fully qualified name of the table or view is optional. A view can be referenced only by
an INSTEAD OF trigger. DML triggers cannot be defined on local or global temporary tables.
DATABASE
Applies the scope of a DDL trigger to the current database. If specified, the trigger fires whenever event_type or
event_group occurs in the current database.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
Applies the scope of a DDL or logon trigger to the current server. If specified, the trigger fires whenever
event_type or event_group occurs anywhere in the current server.
WITH ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017.
Obfuscates the text of the CREATE TRIGGER statement. Using WITH ENCRYPTION prevents the trigger from
being published as part of SQL Server replication. WITH ENCRYPTION cannot be specified for CLR triggers.
EXECUTE AS
Specifies the security context under which the trigger is executed. Enables you to control which user account the
instance of SQL Server uses to validate permissions on any database objects that are referenced by the trigger.
This option is required for triggers on memory-optimized tables.
For more information, see EXECUTE AS Clause (Transact-SQL).
NATIVE_COMPILATION
Indicates that the trigger is natively compiled.
This option is required for triggers on memory-optimized tables.
SCHEMABINDING
Ensures that tables that are referenced by a trigger cannot be dropped or altered.
This option is required for triggers on memory-optimized tables and is not supported for triggers on traditional
tables.
FOR | AFTER
AFTER specifies that the DML trigger is fired only when all operations specified in the triggering SQL statement
have executed successfully. All referential cascade actions and constraint checks also must succeed before this
trigger fires.
AFTER is the default when FOR is the only keyword specified.
AFTER triggers cannot be defined on views.
INSTEAD OF
Specifies that the DML trigger is executed instead of the triggering SQL statement, therefore, overriding the
actions of the triggering statements. INSTEAD OF cannot be specified for DDL or logon triggers.
At most, one INSTEAD OF trigger per INSERT, UPDATE, or DELETE statement can be defined on a table or
view. However, you can define views on views where each view has its own INSTEAD OF trigger.
INSTEAD OF triggers are not allowed on updatable views that use WITH CHECK OPTION. SQL Server raises
an error when an INSTEAD OF trigger is added to an updatable view that has WITH CHECK OPTION specified.
The user must remove that option by using ALTER VIEW before defining the INSTEAD OF trigger.
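As an illustration of the INSTEAD OF behavior described above (the table, view, and trigger names are hypothetical), a trigger on a view can redirect inserts to the base table:

```sql
CREATE TABLE dbo.Person (PersonID int IDENTITY(1,1), FullName nvarchar(100));
GO
CREATE VIEW dbo.vPerson AS SELECT PersonID, FullName FROM dbo.Person;
GO
-- Runs instead of the INSERT issued against the view.
CREATE TRIGGER dbo.vPerson_Insert ON dbo.vPerson
INSTEAD OF INSERT
AS
BEGIN
    INSERT dbo.Person (FullName)
    SELECT FullName FROM inserted;  -- any PersonID supplied by the caller is ignored
END;
```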
{ [ DELETE ] [ , ] [ INSERT ] [ , ] [ UPDATE ] }
Specifies the data modification statements that activate the DML trigger when it is tried against this table or view.
At least one option must be specified. Any combination of these options in any order is allowed in the trigger
definition.
For INSTEAD OF triggers, the DELETE option is not allowed on tables that have a referential relationship
specifying a cascade action ON DELETE. Similarly, the UPDATE option is not allowed on tables that have a
referential relationship specifying a cascade action ON UPDATE.
WITH APPEND
Applies to: SQL Server 2008 through SQL Server 2008 R2.
Specifies that an additional trigger of an existing type should be added. WITH APPEND cannot be used with
INSTEAD OF triggers or if AFTER trigger is explicitly stated. WITH APPEND can be used only when FOR is
specified, without INSTEAD OF or AFTER, for backward compatibility reasons. WITH APPEND cannot be
specified if EXTERNAL NAME is specified (that is, if the trigger is a CLR trigger).
event_type
Is the name of a Transact-SQL language event that, after execution, causes a DDL trigger to fire. Valid events for
DDL triggers are listed in DDL Events.
event_group
Is the name of a predefined grouping of Transact-SQL language events. The DDL trigger fires after execution of
any Transact-SQL language event that belongs to event_group. Valid event groups for DDL triggers are listed in
DDL Event Groups.
After the CREATE TRIGGER has finished running, event_group also acts as a macro by adding the event types it
covers to the sys.trigger_events catalog view.
NOT FOR REPLICATION
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates that the trigger should not be executed when a replication agent modifies the table that is involved in
the trigger.
sql_statement
Is the trigger conditions and actions. Trigger conditions specify additional criteria that determine whether the
tried DML, DDL, or logon events cause the trigger actions to be performed.
The trigger actions specified in the Transact-SQL statements go into effect when the operation is tried.
Triggers can include any number and type of Transact-SQL statements, with exceptions. For more information,
see Remarks. A trigger is designed to check or change data based on a data modification or definition statement;
it should not return data to the user. The Transact-SQL statements in a trigger frequently include control-of-flow
language.
DML triggers use the deleted and inserted logical (conceptual) tables. They are structurally similar to the table on
which the trigger is defined, that is, the table on which the user action is tried. The deleted and inserted tables
hold the old values or new values of the rows that may be changed by the user action. For example, to retrieve all
values in the deleted table, use:
SELECT * FROM deleted;
For more information, see Use the inserted and deleted Tables.
DDL and logon triggers capture information about the triggering event by using the EVENTDATA (Transact-SQL) function. For more information, see Use the EVENTDATA Function.
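A minimal sketch of a database-scoped DDL trigger that records event data (the log table and trigger name are hypothetical):

```sql
CREATE TABLE dbo.DdlLog
(
    EventTime datetime2 NOT NULL DEFAULT SYSDATETIME(),
    EventXml  xml       NOT NULL
);
GO
CREATE TRIGGER ddl_log
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    -- EVENTDATA() returns an XML description of the triggering DDL event.
    INSERT dbo.DdlLog (EventXml) VALUES (EVENTDATA());
END;
```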
SQL Server allows for the update of text, ntext, or image columns through the INSTEAD OF trigger on tables
or views.
IMPORTANT
ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data
types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max),
and varbinary(max) instead. Both AFTER and INSTEAD OF triggers support varchar(MAX), nvarchar(MAX), and
varbinary(MAX) data in the inserted and deleted tables.
For triggers on memory-optimized tables, the only sql_statement allowed at the top level is an ATOMIC block.
The Transact-SQL allowed inside the ATOMIC block is limited to the Transact-SQL allowed inside natively compiled stored procedures.
< method_specifier >
Applies to: SQL Server 2008 through SQL Server 2017.
For a CLR trigger, specifies the method of an assembly to bind with the trigger. The method must take no
arguments and return void. class_name must be a valid SQL Server identifier and must exist as a class in the
assembly with assembly visibility. If the class has a namespace-qualified name that uses '.' to separate
namespace parts, the class name must be delimited by using [ ] or " " delimiters. The class cannot be a nested
class.
NOTE
By default, the ability of SQL Server to run CLR code is off. You can create, modify, and drop database objects that reference
managed code modules, but these references will not execute in an instance of SQL Server unless the clr enabled Option is
enabled by using sp_configure.
Additionally, the following Transact-SQL statements are not allowed inside the body of a DML trigger when it is
used against the table or view that is the target of the triggering action: CREATE INDEX, ALTER INDEX, DROP
INDEX, DBCC DBREINDEX, ALTER PARTITION FUNCTION, DROP TABLE, and ALTER TABLE when used to
add, modify, or drop columns; switch partitions; or add or drop PRIMARY KEY or UNIQUE constraints.
NOTE
Because SQL Server does not support user-defined triggers on system tables, we recommend that you do not create user-
defined triggers on system tables.
Because DML triggers fire even when the triggering statement affects no rows, a trigger body can begin with the
following test and return immediately when there is no work to do:
IF (@@ROWCOUNT_BIG = 0)
RETURN;
IMPORTANT
Test your DDL triggers to determine their responses to system stored procedure execution. For example, the CREATE TYPE
statement and the sp_addtype and sp_rename stored procedures will fire a DDL trigger that is created on a CREATE_TYPE
event.
NOTE
Server-scoped DDL triggers appear in the SQL Server Management Studio Object Explorer in the Triggers folder. This
folder is located under the Server Objects folder. Database-scoped DDL Triggers appear in the Database Triggers folder.
This folder is located under the Programmability folder of the corresponding database.
Logon Triggers
Logon triggers execute stored procedures in response to a LOGON event. This event is raised when a user
session is established with an instance of SQL Server. Logon triggers fire after the authentication phase of
logging in finishes, but before the user session is actually established. Therefore, all messages originating inside
the trigger that would typically reach the user, such as error messages and messages from the PRINT statement,
are diverted to the SQL Server error log. For more information, see Logon Triggers.
Logon triggers do not fire if authentication fails.
Distributed transactions are not supported in a logon trigger. Error 3969 is returned when a logon trigger
containing a distributed transaction is fired.
Disabling a Logon Trigger
A logon trigger can effectively prevent successful connections to the Database Engine for all users, including
members of the sysadmin fixed server role. When a logon trigger is preventing connections, members of the
sysadmin fixed server role can connect by using the dedicated administrator connection, or by starting the
Database Engine in minimal configuration mode (-f ). For more information, see Database Engine Service
Startup Options.
NOTE
The previous behavior occurs only if the RECURSIVE_TRIGGERS setting is enabled by using ALTER DATABASE. There is no
defined order in which multiple triggers defined for a specific event are executed. Each trigger should be self-contained.
Disabling the RECURSIVE_TRIGGERS setting only prevents direct recursions. To disable indirect recursion also,
set the nested triggers server option to 0 by using sp_configure.
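For example, indirect recursion can be disabled server-wide as follows:

```sql
-- Turning off the nested triggers option also prevents indirect recursion.
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
```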
If any one of the triggers performs a ROLLBACK TRANSACTION, regardless of the nesting level, no more
triggers are executed.
Nested Triggers
Triggers can be nested to a maximum of 32 levels. If a trigger changes a table on which there is another trigger,
the second trigger is activated and can then call a third trigger, and so on. If any trigger in the chain sets off an
infinite loop, the nesting level is exceeded and the trigger is canceled. When a Transact-SQL trigger executes
managed code by referencing a CLR routine, type, or aggregate, this reference counts as one level against the
32-level nesting limit. Methods invoked from within managed code do not count against this limit.
To disable nested triggers, set the nested triggers option of sp_configure to 0 (off ). The default configuration
allows for nested triggers. If nested triggers are off, recursive triggers are also disabled, regardless of the
RECURSIVE_TRIGGERS setting set by using ALTER DATABASE.
The first AFTER trigger nested inside an INSTEAD OF trigger fires even if the nested triggers server
configuration option is set to 0. However, under this setting, later AFTER triggers do not fire. We recommend
that you review your applications for nested triggers to determine whether the applications comply with your
business rules with regard to this behavior when the nested triggers server configuration option is set to 0, and
then make appropriate modifications.
Deferred Name Resolution
SQL Server allows for Transact-SQL stored procedures, triggers, and batches to refer to tables that do not exist
at compile time. This ability is called deferred name resolution.
Permissions
To create a DML trigger requires ALTER permission on the table or view on which the trigger is being created.
To create a DDL trigger with server scope (ON ALL SERVER ) or a logon trigger requires CONTROL SERVER
permission on the server. To create a DDL trigger with database scope (ON DATABASE ) requires ALTER ANY
DATABASE DDL TRIGGER permission in the current database.
Examples
A. Using a DML trigger with a reminder message
The following DML trigger prints a message to the client when anyone tries to add or change data in the
Customer table in the AdventureWorks2012 database.
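The statement body for this example appears to have been lost in conversion; a trigger consistent with the description might look like this (the trigger name is illustrative):

```sql
CREATE TRIGGER reminder1
ON Sales.Customer
AFTER INSERT, UPDATE
AS RAISERROR ('Notify Customer Relations', 16, 10);
GO
```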
B. Using a logon trigger to limit connections
The following logon trigger denies further login attempts as login_test when three user sessions are already
running under that login.
USE master;
GO
CREATE LOGIN login_test WITH PASSWORD = '3KHJ6dhx(0xVYsdf' MUST_CHANGE,
CHECK_EXPIRATION = ON;
GO
GRANT VIEW SERVER STATE TO login_test;
GO
CREATE TRIGGER connection_limit_trigger
ON ALL SERVER WITH EXECUTE AS 'login_test'
FOR LOGON
AS
BEGIN
IF ORIGINAL_LOGIN()= 'login_test' AND
(SELECT COUNT(*) FROM sys.dm_exec_sessions
WHERE is_user_process = 1 AND
original_login_name = 'login_test') > 3
ROLLBACK;
END;
See Also
ALTER TABLE (Transact-SQL)
ALTER TRIGGER (Transact-SQL)
COLUMNS_UPDATED (Transact-SQL)
CREATE TABLE (Transact-SQL)
DROP TRIGGER (Transact-SQL)
ENABLE TRIGGER (Transact-SQL)
DISABLE TRIGGER (Transact-SQL)
TRIGGER_NESTLEVEL (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.dm_sql_referenced_entities (Transact-SQL)
sys.dm_sql_referencing_entities (Transact-SQL)
sys.sql_expression_dependencies (Transact-SQL)
sp_help (Transact-SQL)
sp_helptrigger (Transact-SQL)
sp_helptext (Transact-SQL)
sp_rename (Transact-SQL)
sp_settriggerorder (Transact-SQL)
UPDATE() (Transact-SQL)
Get Information About DML Triggers
Get Information About DDL Triggers
sys.triggers (Transact-SQL)
sys.trigger_events (Transact-SQL)
sys.sql_modules (Transact-SQL)
sys.assembly_modules (Transact-SQL)
sys.server_triggers (Transact-SQL)
sys.server_trigger_events (Transact-SQL)
sys.server_sql_modules (Transact-SQL)
sys.server_assembly_modules (Transact-SQL)
CREATE TYPE (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Creates an alias data type or a user-defined type in the current database in SQL Server or Azure SQL Database.
The implementation of an alias data type is based on a SQL Server native system type. A user-defined type is
implemented through a class of an assembly in the Microsoft .NET Framework common language runtime (CLR ).
To bind a user-defined type to its implementation, the CLR assembly that contains the implementation of the type
must first be registered in SQL Server by using CREATE ASSEMBLY.
The ability to run CLR code is off by default in SQL Server. You can create, modify and drop database objects that
reference managed code modules, but these references will not execute in SQL Server unless the clr enabled
Option is enabled by using sp_configure.
NOTE
The integration of .NET Framework CLR into SQL Server is discussed in this topic. CLR integration does not apply to Azure
SQL Database.
Syntax
-- Disk-Based Type Syntax
CREATE TYPE [ schema_name. ] type_name
{
FROM base_type
[ ( precision [ , scale ] ) ]
[ NULL | NOT NULL ]
| EXTERNAL NAME assembly_name [ .class_name ]
| AS TABLE ( { <column_definition> | <computed_column_definition> }
[ <table_constraint> ] [ ,...n ] )
} [ ; ]
<column_definition> ::=
column_name <data_type>
[ COLLATE collation_name ]
[ NULL | NOT NULL ]
[
DEFAULT constant_expression ]
| [ IDENTITY [ ( seed ,increment ) ]
]
[ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]
<column_constraint> ::=
{ { PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH ( <index_option> [ ,...n ] )
]
| CHECK ( logical_expression )
}
<computed_column_definition> ::=
column_name AS computed_column_expression
[ PERSISTED [ NOT NULL ] ]
[
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[
WITH ( <index_option> [ ,...n ] )
]
| CHECK ( logical_expression )
]
<table_constraint> ::=
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
( column [ ASC | DESC ] [ ,...n ] )
[
WITH ( <index_option> [ ,...n ] )
]
| CHECK ( logical_expression )
}
<index_option> ::=
{
IGNORE_DUP_KEY = { ON | OFF }
}
-- Memory-Optimized Table Type Syntax
CREATE TYPE [schema_name. ] type_name
AS TABLE ( { <column_definition> }
| [ <table_constraint> ] [ ,... n ]
| [ <table_index> ] [ ,... n ] } )
[ WITH ( <table_option> [ ,... n ] ) ]
[ ; ]
<column_definition> ::=
column_name <data_type>
[ COLLATE collation_name ] [ NULL | NOT NULL ] [
[ IDENTITY [ (1 , 1) ]
]
[ <column_constraint> [ ... n ] ] [ <column_index> ]
<column_constraint> ::=
{ PRIMARY KEY { NONCLUSTERED HASH WITH (BUCKET_COUNT = bucket_count)
| NONCLUSTERED } }
<column_index> ::=
INDEX index_name
{ { [ NONCLUSTERED ] HASH WITH (BUCKET_COUNT = bucket_count)
| NONCLUSTERED } }
<table_option> ::=
{
[MEMORY_OPTIMIZED = {ON | OFF}]
}
Arguments
schema_name
Is the name of the schema to which the alias data type or user-defined type belongs.
type_name
Is the name of the alias data type or user-defined type. Type names must comply with the rules for identifiers.
base_type
Is the SQL Server supplied data type on which the alias data type is based. base_type is sysname, with no default,
and can be one of the following values:
base_type can also be any data type synonym that maps to one of these system data types.
precision
For decimal or numeric, is a non-negative integer that indicates the maximum total number of decimal digits
that can be stored, both to the left and to the right of the decimal point. For more information, see decimal and
numeric (Transact-SQL ).
scale
For decimal or numeric, is a non-negative integer that indicates the maximum number of decimal digits that can
be stored to the right of the decimal point, and it must be less than or equal to the precision. For more
information, see decimal and numeric (Transact-SQL ).
NULL | NOT NULL
Specifies whether the type can hold a null value. If not specified, NULL is the default.
assembly_name
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the SQL Server assembly that references the implementation of the user-defined type in the common
language runtime. assembly_name should match an existing assembly in SQL Server in the current database.
NOTE
EXTERNAL_NAME is not available in a contained database.
[. class_name ]
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the class within the assembly that implements the user-defined type. class_name must be a valid
identifier and must exist as a class in the assembly with assembly visibility. class_name is case-sensitive, regardless
of the database collation, and must exactly match the class name in the corresponding assembly. The class name
can be a namespace-qualified name enclosed in square brackets ([ ]) if the programming language that is used to
write the class uses the concept of namespaces, such as C#. If class_name is not specified, SQL Server assumes it
is the same as type_name.
<column_definition>
Defines the columns for a user-defined table type.
<data type>
Defines the data type in a column for a user-defined table type. For more information about data types, see Data
Types (Transact-SQL ). For more information about tables, see CREATE TABLE (Transact-SQL ).
<column_constraint>
Defines the column constraints for a user-defined table type. Supported constraints include PRIMARY KEY,
UNIQUE, and CHECK. For more information about tables, see CREATE TABLE (Transact-SQL ).
<computed_column_definition>
Defines a computed column expression as a column in a user-defined table type. For more information about
tables, see CREATE TABLE (Transact-SQL ).
<table_constraint>
Defines a table constraint on a user-defined table type. Supported constraints include PRIMARY KEY, UNIQUE,
and CHECK.
<index_option>
Specifies the error response to duplicate key values in a multiple-row insert operation on a unique clustered or
unique nonclustered index. For more information about index options, see CREATE INDEX (Transact-SQL ).
INDEX
You must specify column and table indexes as part of the CREATE TABLE statement. CREATE INDEX and DROP
INDEX are not supported for memory-optimized tables.
MEMORY_OPTIMIZED
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates whether the table type is memory optimized. This option is off by default; the table (type) is not a
memory optimized table (type). Memory optimized table types are memory-optimized user tables, the schema of
which is persisted on disk similar to other user tables.
BUCKET_COUNT
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates the number of buckets that should be created in the hash index. The maximum value for
BUCKET_COUNT in hash indexes is 1,073,741,824. For more information about bucket counts, see Indexes for
Memory-Optimized Tables. bucket_count is a required argument.
HASH
Applies to: SQL Server 2014 (12.x) through SQL Server 2017 and Azure SQL Database.
Indicates that a HASH index is created. Hash indexes are supported only on memory optimized tables.
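Combining the memory-optimized options above, a table type with a hash primary key might be declared as follows (the name and bucket count are illustrative):

```sql
CREATE TYPE dbo.SessionIdTable AS TABLE
(
    SessionId int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);
```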
Remarks
The class of the assembly that is referenced in assembly_name, together with its methods, should satisfy all the
requirements for implementing a user-defined type in SQL Server. For more information about these
requirements, see CLR User-Defined Types.
Additional considerations include the following:
The class can have overloaded methods, but these methods can be called only from within managed code,
not from Transact-SQL.
Any static members must be declared as const or readonly if assembly_name is SAFE or
EXTERNAL_ACCESS.
Within a database, there can be only one user-defined type registered against any specified type that has
been uploaded in SQL Server from the CLR. If a user-defined type is created on a CLR type for which a
user-defined type already exists in the database, CREATE TYPE fails with an error. This restriction is
required to avoid ambiguity during SQL Type resolution if a CLR type can be mapped to more than one
user-defined type.
If any mutator method in the type does not return void, the CREATE TYPE statement does not execute.
To modify a user-defined type, you must drop the type by using a DROP TYPE statement and then re-
create it.
Unlike user-defined types that are created by using sp_addtype, the public database role is not
automatically granted REFERENCES permission on types that are created by using CREATE TYPE. This
permission must be granted separately.
In user-defined table types, structured user-defined types that are used in column_name <data type> are
part of the database schema scope in which the table type is defined. To access structured user-defined
types in a different scope within the database, use two-part names.
In user-defined table types, the primary key on computed columns must be PERSISTED and NOT NULL.
Permissions
Requires CREATE TYPE permission in the current database and ALTER permission on schema_name. If
schema_name is not specified, the default name resolution rules for determining the schema for the current user
apply. If assembly_name is specified, a user must either own the assembly or have REFERENCES permission on it.
If any columns in the CREATE TABLE statement are defined to be of a user-defined type, REFERENCES
permission on the user-defined type is required.
NOTE
A user creating a table with a column that uses a user-defined type needs the REFERENCES permission on the user-defined
type. If this table must be created in TempDB, then either the REFERENCES permission needs to be granted explicitly each
time before the table is created, or this data type and REFERENCES permissions need to be added to the Model database. If
this is done, then this data type and permissions will be available in TempDB permanently. Otherwise, the user-defined data
type and permissions will disappear when SQL Server is restarted. For more information, see CREATE TABLE (Transact-SQL).
Examples
A. Creating an alias type based on the varchar data type
The following example creates an alias type based on the system-supplied varchar data type.
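The example's code did not survive conversion; a statement matching the description would be (the type name is illustrative):

```sql
-- An alias type for 11-character social security numbers.
CREATE TYPE SSN
FROM varchar(11) NOT NULL;
```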
See Also
CREATE ASSEMBLY (Transact-SQL)
DROP TYPE (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE USER (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Adds a user to the current database. The types of users are listed below with a sample of the most basic syntax:
Users based on logins in master: This is the most common type of user.
User based on a login based on a Windows Active Directory account. CREATE USER [Contoso\Fritz];
User based on a login based on a Windows group. CREATE USER [Contoso\Sales];
User based on a login using SQL Server authentication. CREATE USER Mary;
Users that authenticate at the database: Recommended to help make your database more portable.
Always allowed in SQL Database. Only allowed in a contained database in SQL Server.
User based on a Windows user that has no login. CREATE USER [Contoso\Fritz];
User based on a Windows group that has no login. CREATE USER [Contoso\Sales];
User in SQL Database or SQL Data Warehouse based on an Azure Active Directory user.
CREATE USER [Contoso\Fritz] FROM EXTERNAL PROVIDER;
Contained database user with password. (Not available in SQL Data Warehouse.)
CREATE USER Mary WITH PASSWORD = '********';
Users based on Windows principals that connect through Windows group logins:
User based on a Windows user that has no login, but can connect to the Database Engine through
membership in a Windows group. CREATE USER [Contoso\Fritz];
User based on a Windows group that has no login, but can connect to the Database Engine through
membership in a different Windows group. CREATE USER [Contoso\Fritz];
Users that cannot authenticate: These users cannot log in to SQL Server or SQL Database.
User without a login. Cannot log in but can be granted permissions. CREATE USER CustomApp WITHOUT LOGIN;
User based on a certificate. Cannot log in but can be granted permissions and can sign modules.
CREATE USER TestProcess FOR CERTIFICATE CarnationProduction50;
User based on an asymmetric key. Cannot log in but can be granted permissions and can sign modules.
CREATE USER TestProcess FROM ASYMMETRIC KEY PacificSales09;
Syntax
-- Syntax for SQL Server and Azure SQL Database
[ ; ]
--Users based on Windows principals that connect through Windows group logins
CREATE USER
{
windows_principal [ { FOR | FROM } LOGIN windows_principal ]
| user_name { FOR | FROM } LOGIN windows_principal
}
[ WITH <limited_options_list> [ ,... ] ]
[ ; ]
<options_list> ::=
DEFAULT_SCHEMA = schema_name
| DEFAULT_LANGUAGE = { NONE | lcid | language name | language alias }
| SID = sid
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ] ]
<limited_options_list> ::=
DEFAULT_SCHEMA = schema_name ]
| ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ] ]
Arguments
user_name
Specifies the name by which the user is identified inside this database. user_name is a sysname. It can be up to
128 characters long. When creating a user based on a Windows principal, the Windows principal name becomes
the user name unless another user name is specified.
LOGIN login_name
Specifies the login for which the database user is being created. login_name must be a valid login in the server.
Can be a login based on a Windows principal (user or group), or a login using SQL Server authentication. When
this SQL Server login enters the database, it acquires the name and ID of the database user that is being created.
When creating a login mapped from a Windows principal, use the format [<domainName>\<loginName>]. For
examples, see Syntax Summary.
If the CREATE USER statement is the only statement in a SQL batch, Windows Azure SQL Database supports
the WITH LOGIN clause. If the CREATE USER statement is not the only statement in a SQL batch or is executed
in dynamic SQL, the WITH LOGIN clause is not supported.
WITH DEFAULT_SCHEMA = schema_name
Specifies the first schema that will be searched by the server when it resolves the names of objects for this
database user.
'windows_principal'
Specifies the Windows principal for which the database user is being created. The windows_principal can be a
Windows user, or a Windows group. The user will be created even if the windows_principal does not have a login.
When connecting to SQL Server, if the windows_principal does not have a login, the Windows principal must
authenticate at the Database Engine through membership in a Windows group that has a login, or the
connection string must specify the contained database as the initial catalog. When creating a user from a
Windows principal, use the format [<domainName>\<loginName>]. For examples, see Syntax Summary.
Users based on Active Directory users are limited to names of fewer than 21 characters.
'Azure_Active_Directory_principal'
Applies to: SQL Database, SQL Data Warehouse.
Specifies the Azure Active Directory principal for which the database user is being created. The
Azure_Active_Directory_principal can be an Azure Active Directory user, or an Azure Active Directory group.
(Azure Active Directory users cannot have Windows Authentication logins in SQL Database; only database
users.) The connection string must specify the contained database as the initial catalog.
For users, you use the full alias of their domain principal.
CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER;
For more information, see Connecting to SQL Database By Using Azure Active Directory Authentication.
WITH PASSWORD = 'password'
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Can only be used in a contained database. Specifies the password for the user that is being created. Beginning
with SQL Server 2012 (11.x), stored password information is calculated using SHA-512 of the salted password.
WITHOUT LOGIN
Specifies that the user should not be mapped to an existing login.
CERTIFICATE cert_name
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies the certificate for which the database user is being created.
ASYMMETRIC KEY asym_key_name
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies the asymmetric key for which the database user is being created.
DEFAULT_LANGUAGE = { NONE | <lcid> | <language name> | <language alias> }
Applies to: SQL Server 2012 (11.x) through SQL Server 2017, SQL Database.
Specifies the default language for the new user. If a default language is specified for the user and the default
language of the database is later changed, the user's default language remains as specified. If no default language
is specified, the default language for the user will be the default language of the database. If the default language
for the user is not specified and the default language of the database is later changed, the default language of the
user will change to the new default language for the database.
IMPORTANT
DEFAULT_LANGUAGE is used only for a contained database user.
SID = sid
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Applies only to users with passwords (SQL Server authentication) in a contained database. Specifies the SID of
the new database user. If this option is not selected, SQL Server automatically assigns a SID. Use the SID
parameter to create users in multiple databases that have the same identity (SID). This is useful when creating
users in multiple databases to prepare for Always On failover. To determine the SID of a user, query
sys.database_principals.
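As a sketch of this technique, the SID of an existing contained user can be retrieved and reused when creating the same user in another database (the user name is illustrative; the SID value matches the example later in this article):

```sql
-- Look up the SID of an existing contained database user.
SELECT name, sid
FROM sys.database_principals
WHERE name = 'CarmenW';

-- In another contained database, create the user with the same SID
-- so both databases resolve the identity identically after failover.
CREATE USER CarmenW
WITH PASSWORD = 'a8ea v*(Rd##+',
     SID = 0x01050000000000090300000063FF0451A9E7664BA705B10E37DDC4B7;
```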
ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ]
Applies to: SQL Server 2016 (13.x) through SQL Server 2017, SQL Database.
Suppresses cryptographic metadata checks on the server in bulk copy operations. This enables the user to bulk
copy encrypted data between tables or databases, without decrypting the data. The default is OFF.
WARNING
Improper use of this option can lead to data corruption. For more information, see Migrate Sensitive Data Protected by
Always Encrypted.
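A minimal sketch of enabling the option at user creation time (the login name BulkOperator is illustrative):

```sql
-- Allow this user to bulk copy Always Encrypted data between tables
-- without decrypting it. BulkOperator is a hypothetical login.
CREATE USER BulkOperator FOR LOGIN BulkOperator
WITH ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = ON;
```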
Remarks
If FOR LOGIN is omitted, the new database user will be mapped to the SQL Server login with the same name.
The default schema will be the first schema that will be searched by the server when it resolves the names of
objects for this database user. Unless otherwise specified, the default schema will be the owner of objects created
by this database user.
If the user has a default schema, that default schema will be used. If the user does not have a default schema, but the
user is a member of a group that has a default schema, the default schema of the group will be used. If the user
does not have a default schema, and is a member of more than one group, the default schema for the user will be
that of the Windows group with the lowest principal_id and an explicitly set default schema. (It is not possible to
explicitly select one of the available default schemas as the preferred schema.) If no default schema can be
determined for a user, the dbo schema will be used.
DEFAULT_SCHEMA can be set before the schema that it points to is created.
DEFAULT_SCHEMA cannot be specified when you are creating a user mapped to a certificate, or an asymmetric
key.
The value of DEFAULT_SCHEMA is ignored if the user is a member of the sysadmin fixed server role. All
members of the sysadmin fixed server role have a default schema of dbo.
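As a sketch of the remarks above, DEFAULT_SCHEMA can reference a schema that does not yet exist (the names Joe and Sales are illustrative):

```sql
-- The Sales schema need not exist yet when the user is created.
CREATE USER Joe FOR LOGIN Joe
WITH DEFAULT_SCHEMA = Sales;
GO
-- Created later; unqualified object names for Joe now resolve
-- against Sales first.
CREATE SCHEMA Sales;
GO
```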
The WITHOUT LOGIN clause creates a user that is not mapped to a SQL Server login. It can connect to other
databases as guest. Permissions can be assigned to this user without login and when the security context is
changed to a user without login, the original user receives the permissions of the user without login. See
example D. Creating and using a user without a login.
Only users that are mapped to Windows principals can contain the backslash character (\).
CREATE USER cannot be used to create a guest user because the guest user already exists inside every database.
You can enable the guest user by granting it CONNECT permission, as shown:
GRANT CONNECT TO guest;
Syntax Summary
Users based on logins in master
The following list shows possible syntax for users based on logins. The default schema options are not listed.
CREATE USER [Domain1\WindowsUserBarry]
CREATE USER [Domain1\WindowsUserBarry] FOR LOGIN [Domain1\WindowsUserBarry]
CREATE USER [Domain1\WindowsUserBarry] FROM LOGIN [Domain1\WindowsUserBarry]
CREATE USER [Domain1\WindowsGroupManagers]
CREATE USER [Domain1\WindowsGroupManagers] FOR LOGIN [Domain1\WindowsGroupManagers]
CREATE USER [Domain1\WindowsGroupManagers] FROM LOGIN [Domain1\WindowsGroupManagers]
CREATE USER SQLAUTHLOGIN
CREATE USER SQLAUTHLOGIN FOR LOGIN SQLAUTHLOGIN
CREATE USER SQLAUTHLOGIN FROM LOGIN SQLAUTHLOGIN
IMPORTANT
This syntax grants users access to the database and also grants new access to the Database Engine.
Security
Creating a user grants access to a database but does not automatically grant any access to the objects in a
database. After creating a user, common actions are to add users to database roles which have permission to
access database objects, or grant object permissions to the user. For information about designing a permissions
system, see Getting Started with Database Engine Permissions.
Special Considerations for Contained Databases
When connecting to a contained database, if the user does not have a login in the master database, the
connection string must include the contained database name as the initial catalog. The initial catalog parameter is
always required for a contained database user with password.
In a contained database, creating users helps separate the database from the instance of the Database Engine so
that the database can easily be moved to another instance of SQL Server. For more information, see Contained
Databases and Contained Database Users - Making Your Database Portable. To change a database user from a
user based on a SQL Server authentication login to a contained database user with password, see
sp_migrate_user_to_contained (Transact-SQL).
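The migration procedure mentioned above can be sketched as follows; the user name is illustrative, and the parameter keywords shown are assumptions based on the procedure's documented interface:

```sql
-- Convert a SQL-login-based database user into a contained user
-- with password. Carlo is a hypothetical existing database user.
EXECUTE sp_migrate_user_to_contained
    @username = N'Carlo',
    @rename = N'keep_name',             -- keep the current user name
    @disablelogin = N'disable_login';   -- disable the now-unneeded server login
```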
In a contained database, users do not have to have logins in the master database. Database Engine
administrators should understand that access to a contained database can be granted at the database level,
instead of the Database Engine level. For more information, see Security Best Practices with Contained
Databases.
When using contained database users on Azure SQL Database, configure access using a database-level firewall
rule, instead of a server-level firewall rule. For more information, see sp_set_database_firewall_rule (Azure SQL
Database).
For SQL Database and SQL Data Warehouse contained database users, SSMS can support Multi-Factor
Authentication. For more information, see SSMS support for Azure AD MFA with SQL Database and SQL Data
Warehouse.
Permissions
Requires ALTER ANY USER permission on the database.
Examples
A. Creating a database user based on a SQL Server login
The following example first creates a SQL Server login named AbolrousHazem , and then creates a corresponding
database user AbolrousHazem in AdventureWorks2012 .
Change to a user database. For example, in SQL Server use the USE AdventureWorks2012 statement. In Azure SQL
Data Warehouse and Parallel Data Warehouse, you must make a new connection to the user database.
USE AdventureWorks2012;
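A sketch of the login and matching user described in example A (the password shown is illustrative):

```sql
-- Create the server-level login first.
CREATE LOGIN AbolrousHazem
    WITH PASSWORD = '340$Uuxwp7Mcxo7Khy';
GO
-- Then create the corresponding database user in AdventureWorks2012.
CREATE USER AbolrousHazem FOR LOGIN AbolrousHazem;
GO
```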
CREATE CERTIFICATE CarnationProduction50
WITH SUBJECT = 'Carnation Production Facility Supervisors',
EXPIRY_DATE = '11/11/2011';
GO
CREATE USER JinghaoLiu FOR CERTIFICATE CarnationProduction50;
GO
USE AdventureWorks2012 ;
CREATE USER CustomApp WITHOUT LOGIN ;
GRANT IMPERSONATE ON USER::CustomApp TO [adventure-works\tengiz0] ;
GO
To use the CustomApp credentials, the user adventure-works\tengiz0 executes the following statement.
EXECUTE AS USER = 'CustomApp';
GO
To revert back to the adventure-works\tengiz0 credentials, the user executes the following statement.
REVERT ;
GO
USE AdventureWorks2012 ;
GO
CREATE USER Carlo
WITH PASSWORD='RN92piTCh%$!~3K9844 Bl*'
, DEFAULT_LANGUAGE=[Brazilian]
, DEFAULT_SCHEMA=[dbo]
GO
USE AdventureWorks2012 ;
GO
CREATE USER CarmenW WITH PASSWORD = 'a8ea v*(Rd##+'
, SID = 0x01050000000000090300000063FF0451A9E7664BA705B10E37DDC4B7;
Next steps
Once the user is created, consider adding the user to a database role using the ALTER ROLE statement.
You might also want to GRANT Object Permissions to the role so they can access tables. For general information
about the SQL Server security model, see Permissions.
See Also
Create a Database User
sys.database_principals (Transact-SQL)
ALTER USER (Transact-SQL)
DROP USER (Transact-SQL)
CREATE LOGIN (Transact-SQL)
EVENTDATA (Transact-SQL)
Contained Databases
Connecting to SQL Database By Using Azure Active Directory Authentication
Getting Started with Database Engine Permissions
CREATE VIEW (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a virtual table whose contents (columns and rows) are defined by a query. Use this statement to create a
view of the data in one or more tables in the database. For example, a view can be used for the following
purposes:
To focus, simplify, and customize the perception each user has of the database.
As a security mechanism by allowing users to access data through the view, without granting the users
permissions to directly access the underlying base tables.
To provide a backward compatible interface to emulate a table whose schema has changed.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
CREATE [ OR ALTER ] VIEW [ schema_name . ] view_name [ ( column [ ,...n ] ) ]
[ WITH <view_attribute> [ ,...n ] ]
AS select_statement
[ WITH CHECK OPTION ]
[ ; ]
<view_attribute> ::=
{
[ ENCRYPTION ]
[ SCHEMABINDING ]
[ VIEW_METADATA ]
}
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
CREATE VIEW [ schema_name . ] view_name [ ( column_name [ ,...n ] ) ]
AS <select_statement>
[ ; ]
<select_statement> ::=
[ WITH <common_table_expression> [ ,...n ] ]
SELECT <select_criteria>
Arguments
OR ALTER
Applies to: Azure SQL Database and SQL Server (starting with SQL Server 2016 (13.x) SP1).
Conditionally alters the view only if it already exists.
schema_name
Is the name of the schema to which the view belongs.
view_name
Is the name of the view. View names must follow the rules for identifiers. Specifying the view owner name is
optional.
column
Is the name to be used for a column in a view. A column name is required only when a column is derived from an
arithmetic expression, a function, or a constant; when two or more columns may otherwise have the same name,
typically because of a join; or when a column in a view is given a name different from that of the column from
which it is derived. Column names can also be assigned in the SELECT statement.
If column is not specified, the view columns acquire the same names as the columns in the SELECT statement.
NOTE
In the columns for the view, the permissions for a column name apply across a CREATE VIEW or ALTER VIEW statement,
regardless of the source of the underlying data. For example, if permissions are granted on the SalesOrderID column in a
CREATE VIEW statement, an ALTER VIEW statement can name the SalesOrderID column with a different column name,
such as OrderRef, and still have the permissions associated with the view using SalesOrderID.
AS
Specifies the actions the view is to perform.
select_statement
Is the SELECT statement that defines the view. The statement can use more than one table and other views.
Appropriate permissions are required to select from the objects referenced in the SELECT clause of the view that
is created.
A view does not have to be a simple subset of the rows and columns of one particular table. A view can be
created that uses more than one table or other views with a SELECT clause of any complexity.
In an indexed view definition, the SELECT statement must be a single table statement or a multitable JOIN with
optional aggregation.
The SELECT clauses in a view definition cannot include the following:
An ORDER BY clause, unless there is also a TOP clause in the select list of the SELECT statement
IMPORTANT
The ORDER BY clause is used only to determine the rows that are returned by the TOP or OFFSET clause in the view
definition. The ORDER BY clause does not guarantee ordered results when the view is queried, unless ORDER BY is
also specified in the query itself.
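A sketch of the behavior described in the note (the table and view names are illustrative):

```sql
-- TOP permits ORDER BY inside the view definition, but that ORDER BY only
-- selects WHICH rows qualify; it does not order the view's output.
CREATE VIEW dbo.TopOrders
AS
SELECT TOP (10) OrderID, TotalDue
FROM dbo.Orders            -- hypothetical table
ORDER BY TotalDue DESC;
GO

-- To get ordered results, repeat ORDER BY in the outer query.
SELECT OrderID, TotalDue
FROM dbo.TopOrders
ORDER BY TotalDue DESC;
```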
NOTE
Any updates performed directly to a view's underlying tables are not verified against the view, even if CHECK OPTION is
specified.
ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017 and Azure SQL Database.
Encrypts the entries in sys.syscomments that contain the text of the CREATE VIEW statement. Using WITH
ENCRYPTION prevents the view from being published as part of SQL Server replication.
SCHEMABINDING
Binds the view to the schema of the underlying table or tables. When SCHEMABINDING is specified, the base
table or tables cannot be modified in a way that would affect the view definition. The view definition itself must
first be modified or dropped to remove dependencies on the table that is to be modified. When you use
SCHEMABINDING, the select_statement must include the two-part names (schema.object) of tables, views, or
user-defined functions that are referenced. All referenced objects must be in the same database.
Views or tables that participate in a view created with the SCHEMABINDING clause cannot be dropped unless
that view is dropped or changed so that it no longer has schema binding. Otherwise, the Database Engine raises
an error. Also, ALTER TABLE statements on tables that participate in views that have schema binding
fail when these statements affect the view definition.
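A minimal sketch of a schema-bound view (all names are illustrative); note the required two-part table name:

```sql
CREATE VIEW dbo.ActiveCustomers
WITH SCHEMABINDING
AS
SELECT CustomerID, Name
FROM dbo.Customers   -- two-part name is mandatory with SCHEMABINDING
WHERE IsActive = 1;
GO
-- Dropping dbo.Customers, or altering the referenced columns, now fails
-- until the view is dropped or altered to remove schema binding.
```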
VIEW_METADATA
Specifies that the instance of SQL Server will return to the DB-Library, ODBC, and OLE DB APIs the metadata
information about the view, instead of the base table or tables, when browse-mode metadata is being requested
for a query that references the view. Browse-mode metadata is additional metadata that the instance of SQL
Server returns to these client-side APIs. This metadata enables the client-side APIs to implement updatable
client-side cursors. Browse-mode metadata includes information about the base table that the columns in the
result set belong to.
For views created with VIEW_METADATA, the browse-mode metadata returns the view name and not the base
table names when it describes columns from the view in the result set.
When a view is created by using WITH VIEW_METADATA, all its columns, except a timestamp column, are
updatable if the view has INSTEAD OF INSERT or INSTEAD OF UPDATE triggers. For more information about
updatable views, see Remarks.
Remarks
A view can be created only in the current database. The CREATE VIEW must be the first statement in a query
batch. A view can have a maximum of 1,024 columns.
When querying through a view, the Database Engine checks to make sure that all the database objects
referenced anywhere in the statement exist and that they are valid in the context of the statement, and that data
modification statements do not violate any data integrity rules. A check that fails returns an error message. A
successful check translates the action into an action against the underlying table or tables.
If a view depends on a table or view that was dropped, the Database Engine produces an error message when
anyone tries to use the view. If a new table or view is created and the table structure does not change from the
previous base table to replace the one dropped, the view again becomes usable. If the new table or view structure
changes, the view must be dropped and re-created.
If a view is not created with the SCHEMABINDING clause, sp_refreshview should be run when changes are
made to the objects underlying the view that affect the definition of the view. Otherwise, the view might produce
unexpected results when it is queried.
When a view is created, information about the view is stored in the following catalog views: sys.views,
sys.columns, and sys.sql_expression_dependencies. The text of the CREATE VIEW statement is stored in the
sys.sql_modules catalog view.
A query that uses an index on a view defined with numeric or float expressions may have a result that is
different from a similar query that does not use the index on the view. This difference may be caused by
rounding errors during INSERT, DELETE, or UPDATE actions on underlying tables.
The Database Engine saves the settings of SET QUOTED_IDENTIFIER and SET ANSI_NULLS when a view is
created. These original settings are used to parse the view when the view is used. Therefore, any client-session
settings for SET QUOTED_IDENTIFIER and SET ANSI_NULLS do not affect the view definition when the view is
accessed.
Updatable Views
You can modify the data of an underlying base table through a view, as long as the following conditions are true:
Any modifications, including UPDATE, INSERT, and DELETE statements, must reference columns from
only one base table.
The columns being modified in the view must directly reference the underlying data in the table columns.
The columns cannot be derived in any other way, such as through the following:
An aggregate function: AVG, COUNT, SUM, MIN, MAX, GROUPING, STDEV, STDEVP, VAR, and
VARP.
A computation. The column cannot be computed from an expression that uses other columns.
Columns that are formed by using the set operators UNION, UNION ALL, CROSS JOIN, EXCEPT,
and INTERSECT amount to a computation and are also not updatable.
The columns being modified are not affected by GROUP BY, HAVING, or DISTINCT clauses.
TOP is not used anywhere in the select_statement of the view together with the WITH CHECK OPTION
clause.
The previous restrictions apply to any subqueries in the FROM clause of the view, just as they apply to the
view itself. Generally, the Database Engine must be able to unambiguously trace modifications from the
view definition to one base table. For more information, see Modify Data Through a View.
If the previous restrictions prevent you from modifying data directly through a view, consider the
following options:
INSTEAD OF Triggers
INSTEAD OF triggers can be created on a view to make a view updatable. The INSTEAD OF trigger is
executed instead of the data modification statement on which the trigger is defined. This trigger lets the
user specify the set of actions that must happen to process the data modification statement. Therefore, if
an INSTEAD OF trigger exists for a view on a specific data modification statement (INSERT, UPDATE, or
DELETE ), the corresponding view is updatable through that statement. For more information about
INSTEAD OF triggers, see DML Triggers.
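A sketch of an INSTEAD OF trigger that makes a multi-table view insertable (all names are illustrative):

```sql
-- The view joins two tables, so a plain INSERT against it would fail.
CREATE TRIGGER dbo.trg_OrderSummary_Insert
ON dbo.OrderSummary          -- hypothetical view over Orders + Customers
INSTEAD OF INSERT
AS
BEGIN
    -- Route each inserted row to the appropriate base table.
    INSERT INTO dbo.Orders (OrderID, CustomerID, TotalDue)
    SELECT OrderID, CustomerID, TotalDue
    FROM inserted;
END;
```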
Partitioned Views
If the view is a partitioned view, the view is updatable, subject to certain restrictions. When it is needed,
the Database Engine distinguishes local partitioned views as the views in which all participating tables and
the view are on the same instance of SQL Server, and distributed partitioned views as the views in which
at least one of the tables in the view resides on a different or remote server.
Partitioned Views
A partitioned view is a view defined by a UNION ALL of member tables structured in the same way, but stored
separately as multiple tables in either the same instance of SQL Server or in a group of autonomous instances of
SQL Server, called federated database servers.
NOTE
The preferred method for partitioning data local to one server is through partitioned tables. For more information, see
Partitioned Tables and Indexes.
In designing a partitioning scheme, it must be clear what data belongs to each partition. For example, the data
for the Customers table is distributed in three member tables in three server locations: Customers_33 on
Server1 , Customers_66 on Server2 , and Customers_99 on Server3 .
SELECT <select_list1>
FROM T1
UNION ALL
SELECT <select_list2>
FROM T2
UNION ALL
...
SELECT <select_listn>
FROM Tn;
1. Select list
All columns in the member tables should be selected in the column list of the view definition.
The columns in the same ordinal position of each select list should be of the same type,
including collations. It is not sufficient for the columns to be implicitly convertible types, as is
generally the case for UNION.
Also, at least one column (for example <col> ) must appear in all the select lists in the same ordinal
position. This <col> should be defined in a way that the member tables T1, ..., Tn have CHECK
constraints C1, ..., Cn defined on <col> , respectively.
Constraint C1 defined on table T1 must be of the following form:
The constraints should be in such a way that any specified value of <col> can satisfy, at most, one
of the constraints C1, ..., Cn so that the constraints should form a set of disjointed or
nonoverlapping intervals. The column <col> on which the disjointed constraints are defined is
called the partitioning column. Note that the partitioning column may have different names in the
underlying tables. The constraints should be in an enabled and trusted state for them to meet the
previously mentioned conditions of the partitioning column. If the constraints are disabled, re-enable
constraint checking by using the CHECK CONSTRAINT constraint_name option of ALTER
TABLE, and using the WITH CHECK option to validate them.
The following examples show valid sets of constraints:
{ [col < 10], [col between 11 and 20] , [col > 20] }
{ [col between 11 and 20], [col between 21 and 30], [col between 31 and 100] }
The same column cannot be used multiple times in the select list.
2. Partitioning column
The partitioning column is a part of the PRIMARY KEY of the table.
It cannot be a computed, identity, default, or timestamp column.
If there is more than one constraint on the same column in a member table, the Database Engine
ignores all the constraints and does not consider them when determining whether the view is a
partitioned view. To meet the conditions of the partitioned view, there should be only one
partitioning constraint on the partitioning column.
There are no restrictions on the updatability of the partitioning column.
3. Member tables, or underlying tables T1, ..., Tn
The tables can be either local tables or tables from other computers that are running SQL Server
that are referenced either through a four-part name or an OPENDATASOURCE- or
OPENROWSET-based name. The OPENDATASOURCE and OPENROWSET syntax can specify a
table name, but not a pass-through query. For more information, see OPENDATASOURCE
(Transact-SQL) and OPENROWSET (Transact-SQL).
If one or more of the member tables are remote, the view is called a distributed partitioned view, and
additional conditions apply. They are described later in this section.
The same table cannot appear two times in the set of tables that are being combined with the
UNION ALL statement.
The member tables cannot have indexes created on computed columns in the table.
The member tables should have all PRIMARY KEY constraints on the same number of columns.
All member tables in the view should have the same ANSI padding setting. This can be set by using
either the user options option in sp_configure or the SET statement.
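The conditions above can be sketched with concrete member tables and CHECK constraints (the table names and range values are illustrative):

```sql
-- Each member table owns a disjoint CustomerID range, enforced by a
-- trusted CHECK constraint on the partitioning column.
CREATE TABLE dbo.Customers_33 (
    CustomerID int PRIMARY KEY CHECK (CustomerID BETWEEN 1 AND 32999),
    Name nvarchar(100) NOT NULL
);
CREATE TABLE dbo.Customers_66 (
    CustomerID int PRIMARY KEY CHECK (CustomerID BETWEEN 33000 AND 65999),
    Name nvarchar(100) NOT NULL
);
GO
-- The partitioned view unions the member tables.
CREATE VIEW dbo.Customers
AS
SELECT CustomerID, Name FROM dbo.Customers_33
UNION ALL
SELECT CustomerID, Name FROM dbo.Customers_66;
```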
NOTE
To update a partitioned view, the user must have INSERT, UPDATE, and DELETE permissions on the member tables.
Permissions
Requires CREATE VIEW permission in the database and ALTER permission on the schema in which the view is
being created.
Examples
The following examples use the AdventureWorks2012 or AdventureWorksDW database.
A. Using a simple CREATE VIEW
The following example creates a view by using a simple SELECT statement. A simple view is helpful when a
combination of columns is queried frequently. The data from this view comes from the HumanResources.Employee
and Person.Person tables of the AdventureWorks2012 database. The data provides name and hire date
information for the employees of Adventure Works Cycles. The view could be created for the person in charge of
tracking work anniversaries but without giving this person access to all the data in these tables.
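A sketch of such a view, assuming the AdventureWorks2012 schema described above (the view name is illustrative):

```sql
CREATE VIEW hiredate_view
AS
SELECT p.FirstName, p.LastName, e.BusinessEntityID, e.HireDate
FROM HumanResources.Employee AS e
JOIN Person.Person AS p
    ON e.BusinessEntityID = p.BusinessEntityID;
```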
CREATE WORKLOAD GROUP (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a Resource Governor workload group and associates the workload group with a Resource Governor
resource pool. Resource Governor is not available in every edition of Microsoft SQL Server. For a list of features
that are supported by the editions of SQL Server, see Features Supported by the Editions of SQL Server 2016.
Transact-SQL Syntax Conventions.
Syntax
CREATE WORKLOAD GROUP group_name
[ WITH
( [ IMPORTANCE = { LOW | MEDIUM | HIGH } ]
[ [ , ] REQUEST_MAX_MEMORY_GRANT_PERCENT = value ]
[ [ , ] REQUEST_MAX_CPU_TIME_SEC = value ]
[ [ , ] REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value ]
[ [ , ] MAX_DOP = value ]
[ [ , ] GROUP_MAX_REQUESTS = value ] )
]
[ USING {
[ pool_name | "default" ]
[ [ , ] EXTERNAL external_pool_name | "default" ] ]
} ]
[ ; ]
Arguments
group_name
Is the user-defined name for the workload group. group_name is alphanumeric, can be up to 128 characters,
must be unique within an instance of SQL Server, and must comply with the rules for identifiers.
IMPORTANCE = { LOW | MEDIUM | HIGH }
Specifies the relative importance of a request in the workload group. Importance is one of the following, with
MEDIUM being the default:
LOW
MEDIUM (default)
HIGH
NOTE
Internally each importance setting is stored as a number that is used for calculations.
IMPORTANCE is local to the resource pool; workload groups of different importance inside the same resource
pool affect each other, but do not affect workload groups in another resource pool.
REQUEST_MAX_MEMORY_GRANT_PERCENT = value
Specifies the maximum amount of memory that a single request can take from the pool. This percentage is
relative to the resource pool size specified by MAX_MEMORY_PERCENT.
NOTE
The amount specified only refers to query execution grant memory.
value must be 0 or a positive integer. The allowed range for value is from 0 through 100. The default setting for
value is 25.
Note the following:
Setting value to 0 prevents queries with SORT and HASH JOIN operations in user-defined workload
groups from running.
We do not recommend setting value greater than 70 because the server may be unable to set aside
enough free memory if other concurrent queries are running. This may eventually lead to query time-out
error 8645.
NOTE
If the query memory requirements exceed the limit that is specified by this parameter, the server does the following:
For user-defined workload groups, the server tries to reduce the query degree of parallelism until the memory requirement
falls under the limit, or until the degree of parallelism equals 1. If the query memory requirement is still greater than the
limit, error 8657 occurs.
For internal and default workload groups, the server permits the query to obtain the required memory.
Be aware that both cases are subject to time-out error 8645 if the server has insufficient physical memory.
REQUEST_MAX_CPU_TIME_SEC = value
Specifies the maximum amount of CPU time, in seconds, that a request can use. value must be 0 or a positive
integer. The default setting for value is 0, which means unlimited.
NOTE
By default, Resource Governor will not prevent a request from continuing if the maximum time is exceeded. However, an
event will be generated. For more information, see CPU Threshold Exceeded Event Class.
IMPORTANT
Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x) CU3, and using trace flag 2422, Resource Governor
will abort a request when the maximum time is exceeded.
REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value
Specifies the maximum time, in seconds, that a query can wait for a memory grant (work buffer memory) to
become available.
NOTE
A query does not always fail when memory grant time-out is reached. A query will only fail if there are too many
concurrent queries running. Otherwise, the query may only get the minimum memory grant, resulting in reduced query
performance.
value must be 0 or a positive integer. The default setting for value, 0, uses an internal calculation based on query
cost to determine the maximum time.
MAX_DOP = value
Specifies the maximum degree of parallelism (DOP ) for parallel requests. value must be 0 or a positive integer.
The allowed range for value is from 0 through 64. The default setting for value, 0, uses the global setting.
MAX_DOP is handled as follows:
MAX_DOP as a query hint is effective as long as it does not exceed workload group MAX_DOP. If the
MAXDOP query hint value exceeds the value that is configured by using the Resource Governor, the
Database Engine uses the Resource Governor MAXDOP value.
MAX_DOP as a query hint always overrides sp_configure 'max degree of parallelism'.
Workload group MAX_DOP overrides sp_configure 'max degree of parallelism'.
If the query is marked as serial at compile time, it cannot be changed back to parallel at run time
regardless of the workload group or sp_configure setting.
After DOP is configured, it can only be lowered on grant memory pressure. Workload group
reconfiguration is not visible while waiting in the grant memory queue.
GROUP_MAX_REQUESTS = value
Specifies the maximum number of simultaneous requests that are allowed to execute in the workload
group. value must be a 0 or a positive integer. The default setting for value, 0, allows unlimited requests.
When the maximum concurrent requests are reached, a user in that group can log in, but is placed in a
wait state until concurrent requests are dropped below the value specified.
USING { pool_name | "default" }
Associates the workload group with the user-defined resource pool identified by pool_name. This in effect
puts the workload group in the resource pool. If pool_name is not provided, or if the USING argument is
not used, the workload group is put in the predefined Resource Governor default pool.
"default" is a reserved word and when used with USING, must be enclosed by quotation marks ("") or
brackets ([]).
NOTE
Predefined workload groups and resource pools all use lower case names, such as "default". This should be taken into
account for servers that use case-sensitive collation. Servers with case-insensitive collation, such as
SQL_Latin1_General_CP1_CI_AS, will treat "default" and "Default" as the same.
Remarks
REQUEST_MAX_MEMORY_GRANT_PERCENT: Index creation is allowed to use more workspace memory than what
is initially granted for improved performance. This special handling is supported by Resource Governor in SQL
Server 2017. However, the initial grant and any additional memory grant are limited by resource pool and
workload group settings.
Index Creation on a Partitioned Table
The memory consumed by index creation on a non-aligned partitioned table is proportional to the number of
partitions involved. If the total required memory exceeds the per-query limit
(REQUEST_MAX_MEMORY_GRANT_PERCENT) imposed by the Resource Governor workload group setting,
the index creation may fail to execute. Because the "default" workload group allows a query to exceed the per-
query limit with the minimum required memory, the user may be able to run the same index creation in the
"default" workload group, provided the "default" resource pool has enough total memory configured to run such a query.
Permissions
Requires CONTROL SERVER permission.
Examples
The following example shows how to create a workload group named newReports . It uses the Resource
Governor default settings and is in the Resource Governor default pool. The example specifies the default pool,
but this is not required.
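The example described above, with the default pool specified explicitly, would look like this (a minimal sketch consistent with the description):

```sql
CREATE WORKLOAD GROUP newReports
USING "default";
GO
```

A new workload group does not take effect until ALTER RESOURCE GOVERNOR RECONFIGURE is run.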
See Also
ALTER WORKLOAD GROUP (Transact-SQL)
DROP WORKLOAD GROUP (Transact-SQL)
CREATE RESOURCE POOL (Transact-SQL)
ALTER RESOURCE POOL (Transact-SQL)
DROP RESOURCE POOL (Transact-SQL)
ALTER RESOURCE GOVERNOR (Transact-SQL)
CREATE XML INDEX (Transact-SQL)
5/3/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates an XML index on a specified table. An index can be created before there is data in the table. XML indexes
can be created on tables in another database by specifying a qualified database name.
NOTE
To create a relational index, see CREATE INDEX (Transact-SQL). For information about how to create a spatial index, see
CREATE SPATIAL INDEX (Transact-SQL).
Syntax
Create XML Index
CREATE [ PRIMARY ] XML INDEX index_name
ON <object> ( xml_column_name )
[ USING XML INDEX xml_index_name
[ FOR { VALUE | PATH | PROPERTY } ] ]
[ WITH ( <xml_index_option> [ ,...n ] ) ]
[ ; ]
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_name
}
<xml_index_option> ::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
}
Arguments
[PRIMARY ] XML
Creates an XML index on the specified xml column. When PRIMARY is specified, a clustered index is created
with the clustered key formed from the clustering key of the user table and an XML node identifier. Each table can
have up to 249 XML indexes. Note the following when you create an XML index:
A clustered index must exist on the primary key of the user table.
The clustering key of the user table is limited to 15 columns.
Each xml column in a table can have one primary XML index and multiple secondary XML indexes.
A primary XML index on an xml column must exist before a secondary XML index can be created on the
column.
An XML index can only be created on a single xml column. You cannot create an XML index on a non-xml
column, nor can you create a relational index on an xml column.
You cannot create an XML index, either primary or secondary, on an xml column in a view, on a table-valued
variable with xml columns, or on xml type variables.
You cannot create a primary XML index on a computed xml column.
The SET option settings must be the same as those required for indexed views and computed column
indexes. Specifically, the option ARITHABORT must be set to ON when an XML index is created and when
inserting, deleting, or updating values in the xml column.
For more information, see XML Indexes (SQL Server).
index_name
Is the name of the index. Index names must be unique within a table but do not have to be unique within a
database. Index names must follow the rules of identifiers.
Primary XML index names cannot start with the following characters: #, ##, @, or @@.
xml_column_name
Is the xml column on which the index is based. Only one xml column can be specified in a single XML
index definition; however, multiple secondary XML indexes can be created on an xml column.
USING XML INDEX xml_index_name
Specifies the primary XML index to use in creating a secondary XML index.
FOR { VALUE | PATH | PROPERTY }
Specifies the type of secondary XML index.
VALUE
Creates a secondary XML index on columns where key columns are (node value and path) of the primary
XML index.
PATH
Creates a secondary XML index on columns built on path values and node values in the primary XML
index. In the PATH secondary index, the path and node values are key columns that allow efficient seeks
when searching for paths.
PROPERTY
Creates a secondary XML index on columns (PK, path and node value) of the primary XML index where
PK is the primary key of the base table.
<object>::=
Is the fully qualified or nonfully qualified object to be indexed.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to be indexed.
<xml_index_option> ::=
Specifies the options to use when you create the index.
PAD_INDEX = { ON | OFF }
Specifies index padding. The default is OFF.
ON
The percentage of free space that is specified by fillfactor is applied to the intermediate-level pages of the
index.
OFF or fillfactor is not specified
The intermediate-level pages are filled to near capacity, leaving sufficient space for at least one row of the
maximum size the index can have, considering the set of keys on the intermediate pages.
The PAD_INDEX option is useful only when FILLFACTOR is specified, because PAD_INDEX uses the
percentage specified by FILLFACTOR. If the percentage specified for FILLFACTOR is not large enough to
allow for one row, the Database Engine internally overrides the percentage to allow for the minimum. The
number of rows on an intermediate index page is never less than two, regardless of how low the value of
fillfactor is.
FILLFACTOR =fillfactor
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index
page during index creation or rebuild. fillfactor must be an integer value from 1 to 100. The default is 0. If
fillfactor is 100 or 0, the Database Engine creates indexes with leaf pages filled to capacity.
NOTE
Fill factor values 0 and 100 are the same in all respects.
The FILLFACTOR setting applies only when the index is created or rebuilt. The Database Engine does not
dynamically keep the specified percentage of empty space in the pages. To view the fill factor setting, use the
sys.indexes catalog view.
IMPORTANT
Creating a clustered index with a FILLFACTOR less than 100 affects the amount of storage space the data occupies because
the Database Engine redistributes the data when it creates the clustered index.
NOTE
Online index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016.
ALLOW_ROW_LOCKS = { ON | OFF }
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when accessing the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
MAXDOP =max_degree_of_parallelism
Overrides the Configure the max degree of parallelism Server Configuration Option configuration option for the
duration of the index operation. Use MAXDOP to limit the number of processors used in a parallel plan execution.
The maximum is 64 processors.
IMPORTANT
Although the MAXDOP option is syntactically supported for all XML indexes, for a primary XML index, CREATE XML INDEX
uses only a single processor.
NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016.
Remarks
Computed columns derived from xml data types can be indexed either as a key or included nonkey column as
long as the computed column data type is allowable as an index key column or nonkey column. You cannot create
a primary XML index on a computed xml column.
To view information about XML indexes, use the sys.xml_indexes catalog view.
For more information about XML indexes, see XML Indexes (SQL Server).
Examples
A. Creating a primary XML index
The following example creates a primary XML index on the CatalogDescription column in the
Production.ProductModel table.
USE AdventureWorks2012;
GO
IF EXISTS (SELECT * FROM sys.indexes
WHERE name = N'PXML_ProductModel_CatalogDescription')
DROP INDEX PXML_ProductModel_CatalogDescription
ON Production.ProductModel;
GO
CREATE PRIMARY XML INDEX PXML_ProductModel_CatalogDescription
ON Production.ProductModel (CatalogDescription);
GO
B. Creating a secondary XML index
The following example creates a secondary PATH XML index on the CatalogDescription column, using the
primary XML index created in the previous example.
USE AdventureWorks2012;
GO
IF EXISTS (SELECT name FROM sys.indexes
WHERE name = N'IXML_ProductModel_CatalogDescription_Path')
DROP INDEX IXML_ProductModel_CatalogDescription_Path
ON Production.ProductModel;
GO
CREATE XML INDEX IXML_ProductModel_CatalogDescription_Path
ON Production.ProductModel (CatalogDescription)
USING XML INDEX PXML_ProductModel_CatalogDescription FOR PATH ;
GO
See Also
ALTER INDEX (Transact-SQL)
CREATE INDEX (Transact-SQL)
CREATE PARTITION FUNCTION (Transact-SQL)
CREATE PARTITION SCHEME (Transact-SQL)
CREATE SPATIAL INDEX (Transact-SQL)
CREATE STATISTICS (Transact-SQL)
CREATE TABLE (Transact-SQL)
Data Types (Transact-SQL)
DBCC SHOW_STATISTICS (Transact-SQL)
DROP INDEX (Transact-SQL)
XML Indexes (SQL Server)
sys.indexes (Transact-SQL)
sys.index_columns (Transact-SQL)
sys.xml_indexes (Transact-SQL)
EVENTDATA (Transact-SQL)
CREATE XML INDEX (Selective XML Indexes)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Creates a new secondary selective XML index on a single path that is already indexed by an existing selective XML
index. You can also create primary selective XML indexes. For information, see Create, Alter, and Drop Selective
XML Indexes.
Transact-SQL Syntax Conventions
Syntax
CREATE XML INDEX index_name
ON <table_object> ( xml_column_name )
USING XML INDEX sxi_index_name
FOR ( <xquery_or_sql_values_path> )
[WITH ( <index_options> )]
<table_object> ::=
{ [database_name. [schema_name ] . | schema_name. ] table_name }
<xquery_or_sql_values_path>::=
<path_name>
<path_name> ::=
character string literal
<xmlnamespace_list> ::=
<xmlnamespace_item> [, <xmlnamespace_list>]
<xmlnamespace_item> ::=
xmlnamespace_uri AS xmlnamespace_prefix
<index_options> ::=
(
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = OFF
| DROP_EXISTING = { ON | OFF }
| ONLINE = OFF
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
)
Arguments
index_name
Is the name of the new index to create. Index names must be unique within a table, but do not have to be unique
within a database. Index names must follow the rules of identifiers.
ON <table_object> Is the table that contains the XML column to index. You can use the following formats:
database_name.schema_name.table_name
database_name..table_name
schema_name.table_name
xml_column_name
Is the name of the XML column that contains the path to index.
USING XML INDEX sxi_index_name
Is the name of the existing selective XML index.
FOR ( <xquery_or_sql_values_path> ) Is the name of the indexed path on which to create the secondary
selective XML index. The path to index is the assigned name from the CREATE SELECTIVE XML INDEX
statement. For more information, see CREATE SELECTIVE XML INDEX (Transact-SQL).
WITH <index_options> For information about the index options, see CREATE XML INDEX.
Remarks
There can be multiple secondary selective XML indexes on every XML column in the base table.
Security
Permissions
Requires ALTER permission on the table or view. User must be a member of the sysadmin fixed server role or the
db_ddladmin and db_owner fixed database roles.
Examples
The following example creates a secondary selective XML index on the path pathabc . The path to index is the
assigned name from the CREATE SELECTIVE XML INDEX (Transact-SQL) statement.
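A sketch of such a statement follows. The table, column, and index names are illustrative; sxi_index stands for an existing primary selective XML index that promoted the path named pathabc:

```sql
-- Names are assumptions for illustration only.
CREATE XML INDEX filt_sxi_index_c
ON Tbl(xmlcol)
USING XML INDEX sxi_index
FOR ( pathabc );
```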
See Also
Selective XML Indexes (SXI)
Create, Alter, and Drop Secondary Selective XML Indexes
CREATE XML SCHEMA COLLECTION (Transact-
SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Imports the schema components into a database.
Transact-SQL Syntax Conventions
Syntax
CREATE XML SCHEMA COLLECTION [ <relational_schema>. ]sql_identifier AS Expression
Arguments
relational_schema
Identifies the relational schema name. If not specified, default relational schema is assumed.
sql_identifier
Is the SQL identifier for the XML schema collection.
Expression
Is a string constant or scalar variable. Is varchar, varbinary, nvarchar, or xml type.
Remarks
You can also add new namespaces to the collection or add new components to existing namespaces in the
collection by using ALTER XML SCHEMA COLLECTION.
To remove collections, use DROP XML SCHEMA COLLECTION (Transact-SQL).
Permissions
Creating an XML SCHEMA COLLECTION requires at least one of the following sets of permissions:
CONTROL permission on the server
ALTER ANY DATABASE permission on the server
ALTER permission on the database
CONTROL permission in the database
ALTER ANY SCHEMA permission and CREATE XML SCHEMA COLLECTION permission in the database
ALTER or CONTROL permission on the relational schema and CREATE XML SCHEMA COLLECTION
permission in the database
Examples
A. Creating XML schema collection in the database
The following example creates the XML schema collection ManuInstructionsSchemaCollection . The collection has
only one schema namespace.
<xsd:element name="root">
<xsd:complexType mixed="true">
<xsd:sequence>
<xsd:element name="Location" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType mixed="true">
<xsd:sequence>
<xsd:element name="step" type="StepType" minOccurs="1" maxOccurs="unbounded" />
</xsd:sequence>
<xsd:attribute name="LocationID" type="xsd:integer" use="required"/>
<xsd:attribute name="SetupHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="MachineHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="LaborHours" type="xsd:decimal" use="optional"/>
<xsd:attribute name="LotSize" type="xsd:decimal" use="optional"/>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>' ;
GO
-- Verify - list of collections in the database.
SELECT *
FROM sys.xml_schema_collections;
-- Verify - list of namespaces in the database.
SELECT name
FROM sys.xml_schema_namespaces;
-- Use it. Create a typed xml variable. Note collection name specified.
DECLARE @x xml (ManuInstructionsSchemaCollection);
GO
--Or create a typed xml column.
CREATE TABLE T (
i int primary key,
x xml (ManuInstructionsSchemaCollection));
GO
-- Clean up
DROP TABLE T;
GO
DROP XML SCHEMA COLLECTION ManuInstructionsSchemaCollection;
Go
USE master;
GO
DROP DATABASE SampleDB;
Alternatively, you can assign the schema collection to a variable and specify the variable in the
CREATE XML SCHEMA COLLECTION statement as follows:
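A minimal sketch of the variable-based form, with the collection name and schema content assumed for illustration:

```sql
-- MyCollection and the schema body are illustrative; the DECLARE and
-- CREATE must run in the same batch so the variable stays in scope.
DECLARE @MySchemaCollection nvarchar(max);
SET @MySchemaCollection = N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="root" type="xsd:string"/>
</xsd:schema>';
CREATE XML SCHEMA COLLECTION MyCollection AS @MySchemaCollection;
```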
The variable in the example is of nvarchar(max) type. The variable can also be of xml data type, in which case, it is
implicitly converted to a string.
For more information, see View a Stored XML Schema Collection.
You may store schema collections in an xml type column. In this case, to create an XML schema collection from
the stored schema, perform the following steps:
1. Retrieve the schema collection from the column by using a SELECT statement and assign it to a variable of
xml type, or a varchar type.
2. Specify the variable name in the CREATE XML SCHEMA COLLECTION statement.
The CREATE XML SCHEMA COLLECTION statement stores only the schema components that SQL Server
understands; not everything in the XML schema is stored in the database. Therefore, if you want the XML
schema collection back exactly the way it was supplied, we recommend that you save your XML schemas
in a database column or in a file on your computer.
B. Specifying multiple schema namespaces in a schema collection
You can specify multiple XML schemas when you create an XML schema collection. The following example
creates the XML schema collection ProductDescriptionSchemaCollection that includes two XML schema
namespaces.
CREATE XML SCHEMA COLLECTION ProductDescriptionSchemaCollection AS
'<xsd:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-
works/ProductModelWarrAndMain"
xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain"
elementFormDefault="qualified"
xmlns:xsd="http://www.w3.org/2001/XMLSchema" >
<xsd:element name="Warranty" >
<xsd:complexType>
<xsd:sequence>
<xsd:element name="WarrantyPeriod" type="xsd:string" />
<xsd:element name="Description" type="xsd:string" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
<xs:schema targetNamespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-
works/ProductModelDescription"
xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription"
elementFormDefault="qualified"
xmlns:mstns="http://tempuri.org/XMLSchema.xsd"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:wm="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain" >
<xs:import
namespace="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain" />
<xs:element name="ProductDescription" type="ProductDescription" />
<xs:complexType name="ProductDescription">
<xs:sequence>
<xs:element name="Summary" type="Summary" minOccurs="0" />
</xs:sequence>
<xs:attribute name="ProductModelID" type="xs:string" />
<xs:attribute name="ProductModelName" type="xs:string" />
</xs:complexType>
<xs:complexType name="Summary" mixed="true" >
<xs:sequence>
<xs:any processContents="skip" namespace="http://www.w3.org/1999/xhtml" minOccurs="0"
maxOccurs="unbounded" />
</xs:sequence>
</xs:complexType>
</xs:schema>'
;
GO -- Clean up
DROP XML SCHEMA COLLECTION ProductDescriptionSchemaCollection;
GO
See Also
ALTER XML SCHEMA COLLECTION (Transact-SQL)
DROP XML SCHEMA COLLECTION (Transact-SQL)
EVENTDATA (Transact-SQL)
Compare Typed XML to Untyped XML
Requirements and Limitations for XML Schema Collections on the Server
Collations
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Is a clause that can be applied to a database definition or a column definition to define the collation, or to a
character string expression to apply a collation cast.
IMPORTANT
On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database
Managed Instance T-SQL differences from SQL Server for details for all T-SQL behavior changes.
Syntax
COLLATE { <collation_name> | database_default }
<collation_name> :: =
{ Windows_collation_name } | { SQL_collation_name }
Arguments
collation_name
Is the name of the collation to be applied to the expression, column definition, or database definition.
collation_name can be only a specified Windows_collation_name or a SQL_collation_name. collation_name must
be a literal value. collation_name cannot be represented by a variable or expression.
Windows_collation_name is the collation name for a Windows Collation Name.
SQL_collation_name is the collation name for a SQL Server Collation Name.
When applying a collation at the database definition level, Unicode-only Windows collations cannot be used with
the COLLATE clause.
database_default
Causes the COLLATE clause to inherit the collation of the current database.
Remarks
The COLLATE clause can be specified at several levels. These include the following:
1. Creating or altering a database.
You can use the COLLATE clause of the CREATE DATABASE or ALTER DATABASE statement to specify
the default collation of the database. You can also specify a collation when you create a database using
SQL Server Management Studio. If you do not specify a collation, the database is assigned the default
collation of the instance of SQL Server.
NOTE
Windows Unicode-only collations can only be used with the COLLATE clause to apply collations to the nchar,
nvarchar, and ntext data types on column-level and expression-level data; they cannot be used with the COLLATE
clause to change the collation of a database or server instance.
SQL Server can support only code pages that are supported by the underlying operating system. When you
perform an action that depends on collations, the SQL Server collation used by the referenced object must use a
code page supported by the operating system running on the computer. These actions can include the following:
Specifying a default collation for a database when you create or alter the database.
Specifying a collation for a column when you create or alter a table.
When restoring or attaching a database, the default collation of the database and the collation of any char,
varchar, and text columns or parameters in the database must be supported by the operating system.
NOTE
Azure SQL Database Managed Instance server collation is SQL_Latin1_General_CP1_CI_AS and cannot be changed.
NOTE
Code page translations are supported for char and varchar data types, but not for text data type. Data loss during code
page translations is not reported.
NOTE
If the collation specified or the collation used by the referenced object uses a code page not supported by Windows, SQL
Server displays an error.
Examples
A. Specifying collation during a select
The following example creates a simple table and inserts 4 rows. Then the example applies two collations when
selecting data from the table, demonstrating how Chiapas is sorted differently.
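A sketch consistent with that description follows. The table name and the specific collations are assumptions; the point is that Traditional Spanish treats "Ch" as a single letter that sorts after "C", so Chiapas moves relative to the other places:

```sql
CREATE TABLE Locations (Place varchar(15) NOT NULL);
GO
INSERT Locations(Place)
VALUES ('Chiapas'), ('Colima'), ('Cinco Rios'), ('California');
GO
-- Latin1 General treats "Ch" as two letters: Chiapas sorts before Cinco Rios.
SELECT Place FROM Locations ORDER BY Place COLLATE Latin1_General_CS_AS;
GO
-- Traditional Spanish treats "Ch" as a single letter sorting after "C":
-- Chiapas sorts after Colima.
SELECT Place FROM Locations ORDER BY Place COLLATE Traditional_Spanish_CS_AS;
GO
```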
Place
-------------
California
Chiapas
Cinco Rios
Colima
B. Additional examples
For additional examples that use COLLATE, see example G, Creating a database and specifying a collation name
and options, in CREATE DATABASE (SQL Server Transact-SQL), and example V, Changing column collation, in
ALTER TABLE (Transact-SQL).
See Also
ALTER TABLE (Transact-SQL)
Collation and Unicode Support
Collation Precedence (Transact-SQL)
Constants (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
CREATE TABLE (Transact-SQL)
DECLARE @local_variable (Transact-SQL)
table (Transact-SQL)
SQL Server Collation Name (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Is a single string that specifies the collation name for a SQL Server collation.
SQL Server supports Windows collations. SQL Server also supports a limited number (fewer than 80) of
collations called SQL Server collations, which were developed before SQL Server supported Windows collations.
SQL Server collations are still supported for backward compatibility, but should not be used for new development
work. For more information about Windows collations, see Windows Collation Name (Transact-SQL).
Transact-SQL Syntax Conventions
Syntax
<SQL_collation_name> :: =
SQL_SortRules[_Pref]_CPCodepage_<ComparisonStyle>
<ComparisonStyle> ::=
_CaseSensitivity_AccentSensitivity | _BIN
Arguments
SortRules
A string identifying the alphabet or language whose sorting rules are applied when dictionary sorting is specified.
Examples are Latin1_General or Polish.
Pref
Specifies uppercase preference. Even if comparison is case-insensitive, the uppercase version of a letter sorts
before the lowercase version, when there is no other distinction.
Codepage
Specifies a one- to four-digit number that identifies the code page used by the collation. CP1 specifies code page
1252; for all other code pages, the complete code page number is specified. For example, CP1251 specifies code
page 1251 and CP850 specifies code page 850.
CaseSensitivity
CI specifies case-insensitive, CS specifies case-sensitive.
AccentSensitivity
AI specifies accent-insensitive, AS specifies accent-sensitive.
BIN
Specifies the binary sort order to be used.
Remarks
To list the SQL Server collations supported by your server, execute the following query.
SELECT * FROM sys.fn_helpcollations()
WHERE name LIKE 'SQL%';
NOTE
For Sort Order ID 80, use any of the Windows collations with the code page of 1250, and binary order. For example:
Albanian_BIN, Croatian_BIN, Czech_BIN, Romanian_BIN, Slovak_BIN, Slovenian_BIN.
See Also
ALTER TABLE (Transact-SQL)
Constants (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
CREATE TABLE (Transact-SQL)
DECLARE @local_variable (Transact-SQL)
table (Transact-SQL)
sys.fn_helpcollations (Transact-SQL)
Windows Collation Name (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the Windows collation name in the COLLATE clause in SQL Server. The Windows collation name is
composed of the collation designator and the comparison styles.
Transact-SQL Syntax Conventions
Syntax
<Windows_collation_name> :: =
CollationDesignator_<ComparisonStyle>
<ComparisonStyle> :: =
{ CaseSensitivity_AccentSensitivity [ _KanatypeSensitive ] [ _WidthSensitive ]
}
| { _BIN | _BIN2 }
Arguments
CollationDesignator
Specifies the base collation rules used by the Windows collation. The base collation rules cover the following:
The sorting rules that are applied when dictionary sorting is specified. Sorting rules are based on alphabet
or language.
The code page used to store non-Unicode character data.
Some examples are:
Latin1_General or French: both use code page 1252.
Turkish: uses code page 1254.
CaseSensitivity
CI specifies case-insensitive, CS specifies case-sensitive.
AccentSensitivity
AI specifies accent-insensitive, AS specifies accent-sensitive.
KanatypeSensitive
Omitted specifies kanatype-insensitive, KS specifies kanatype-sensitive.
WidthSensitivity
Omitted specifies width-insensitive, WS specifies width-sensitive.
BIN
Specifies the backward-compatible binary sort order to be used.
BIN2
Specifies the binary sort order that uses code-point comparison semantics.
Remarks
Depending on the version of the collations some code points may be undefined. For example compare:
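A sketch of such a comparison follows; the specific code point (NCHAR(504)) and collation pair are illustrative assumptions consistent with the explanation below:

```sql
-- NCHAR(504) is a code point not defined in the original Latin1_General collation.
SELECT LOWER(NCHAR(504) COLLATE Latin1_General_CI_AS);
SELECT LOWER(NCHAR(504) COLLATE Latin1_General_100_CI_AS);
```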
The first line returns an uppercase character when the collation is Latin1_General_CI_AS, because this code point
is undefined in this collation.
When working with some languages, it can be critical to avoid the older collations. For example, this is true for
Telugu.
In some cases Windows collations and SQL Server collations can generate different query plans for the same
query.
Examples
The following are some examples of Windows collation names:
Latin1_General_100_
Collation uses the Latin1 General dictionary sorting rules and maps to code page 1252. The _90 or _100
element shows the version number of the collation if it is a Windows collation. The collation is case-insensitive
(CI) and accent-sensitive (AS).
Estonian_CS_AS
Collation uses the Estonian dictionary sorting rules, code page 1257. Is case-sensitive and accent-sensitive.
Latin1_General_BIN
Collation uses code page 1252 and binary sorting rules. The Latin1 General dictionary sorting rules are
ignored.
Windows Collations
To list the Windows collations supported by your instance of SQL Server, execute the following query.
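The query referenced above can be written as the complement of the SQL collation query shown earlier:

```sql
SELECT * FROM sys.fn_helpcollations()
WHERE name NOT LIKE 'SQL%';
```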
The following table lists all Windows collations supported in SQL Server 2017.
1 Unicode-only Windows collations can only be applied to column-level or expression-level data. They cannot be
used as server or database collations.
2 Like the Chinese (Taiwan) collation, Chinese (Macau) uses the rules of Simplified Chinese; unlike Chinese
(Taiwan), it uses code page 950.
See Also
Collation and Unicode Support
ALTER TABLE (Transact-SQL)
Constants (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
CREATE TABLE (Transact-SQL)
DECLARE @local_variable (Transact-SQL)
table (Transact-SQL)
sys.fn_helpcollations (Transact-SQL)
Collation Precedence (Transact-SQL)
5/3/2018 • 7 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Collation precedence, also known as collation coercion rules, determines the following:
The collation of the final result of an expression that is evaluated to a character string.
The collation that is used by collation-sensitive operators that use character string inputs but do not return a
character string, such as LIKE and IN.
The collation precedence rules apply only to the character string data types: char, varchar, text, nchar,
nvarchar, and ntext. Objects that have other data types do not participate in collation evaluations.
Collation Labels
The following table lists and describes the four categories in which the collations of all objects are identified. The
name of each category is called the collation label.
Collation Rules
The collation label of a simple expression that references only one character string object is the collation label of
the referenced object.
The collation label of a complex expression that references two operand expressions with the same collation label
is the collation label of the operand expressions.
The collation label of the final result of a complex expression that references two operand expressions with
different collations is based on the following rules:
Explicit takes precedence over implicit. Implicit takes precedence over Coercible-default:
Explicit > Implicit > Coercible-default
Combining two Explicit expressions that have been assigned different collations generates an error:
Explicit X + Explicit Y = Error
Combining two Implicit expressions that have different collations yields a result of No-collation:
Implicit X + Implicit Y = No-collation
Combining an expression with No-collation with an expression of any label, except Explicit collation (see the
following rule), yields a result that has the No-collation label:
No-collation + anything = No-collation
Combining an expression with No-collation with an expression that has an Explicit collation, yields an
expression with an Explicit label:
No-collation + Explicit X = Explicit
The following table summarizes the rules.
OPERAND COERCION LABEL   EXPLICIT X        IMPLICIT X      COERCIBLE-DEFAULT   NO-COLLATION
Explicit Y               Generates error   Explicit Y      Explicit Y          Explicit Y
Implicit Y               Explicit X        No-collation    Implicit Y          No-collation
Coercible-default        Explicit X        Implicit X      Coercible-default   No-collation
No-collation             Explicit X        No-collation    No-collation        No-collation
Code page conversions for text data types are not allowed. You cannot cast a text expression from one
collation to another if they have different code pages. The assignment operator cannot assign values
when the collation of the right text operand has a different code page than the left text operand.
Collation precedence is determined after data type conversion. The operand from which the resulting
collation is taken can be different from the operand that supplies the data type of the final result. For
example, consider the following batch:
CREATE TABLE TestTab
(PrimaryKey int PRIMARY KEY,
CharCol char(10) COLLATE French_CI_AS
)
SELECT *
FROM TestTab
WHERE CharCol LIKE N'abc'
The Unicode data type of the simple expression N'abc' has a higher data type precedence. Therefore, the resulting
expression has the Unicode data type assigned to N'abc' . However, the expression CharCol has a collation label
of Implicit, and N'abc' has a lower coercion label of Coercible-default. Therefore, the collation that is used is the
French_CI_AS collation of CharCol .
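The next two queries reference a TestTab that has GreekCol and LatinCol columns, but the batch that creates it does not appear above. A sketch of that setup, with column types, collation names, and sample data assumed:

```sql
USE tempdb;
GO
-- Hypothetical setup: two columns with conflicting Implicit collations.
CREATE TABLE TestTab (
    id int,
    GreekCol nvarchar(10) COLLATE greek_ci_as,          -- Implicit Greek collation
    LatinCol nvarchar(10) COLLATE latin1_general_cs_as  -- Implicit Latin1 collation
);
INSERT INTO TestTab VALUES (1, N'A', N'a');
GO
```

With this setup, the first SELECT below fails with a collation-conflict error because combining two Implicit expressions with different collations yields No-collation; the second succeeds because the COLLATE clause gives one operand an Explicit label, and greek_ci_as is case-insensitive, so N'A' matches N'a'.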
USE tempdb;
GO
SELECT *
FROM TestTab
WHERE GreekCol = LatinCol;
SELECT *
FROM TestTab
WHERE GreekCol = LatinCol COLLATE greek_ci_as;
id GreekCol LatinCol
----------- -------------------- --------------------
1 A a
(1 row affected)
No-Collation Labels
The CASE expressions in the following queries have a No-collation label; therefore, they cannot appear in the
select list or be operated on by collation-sensitive operators. However, the expressions can be operated on by
collation-insensitive operators.
SELECT PATINDEX((CASE WHEN id > 10 THEN GreekCol ELSE LatinCol END), 'a')
FROM TestTab;
SELECT (CASE WHEN id > 10 THEN GreekCol ELSE LatinCol END) COLLATE Latin1_General_CI_AS
FROM TestTab;
--------------------
a
(1 row affected)
CHARINDEX
DIFFERENCE
ISNUMERIC
LEFT
LEN
LOWER
PATINDEX
REPLACE
REVERSE
RIGHT
SOUNDEX
STUFF
SUBSTRING
UPPER
See Also
COLLATE (Transact-SQL)
Data Type Conversion (Database Engine)
Operators (Transact-SQL)
Expressions (Transact-SQL)
DELETE (Transact-SQL)
5/30/2018 • 14 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more rows from a table or view in SQL Server.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
<object> ::=
{
[ server_name.database_name.schema_name.
| database_name. [ schema_name ] .
| schema_name.
]
table_or_view_name
}
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
Arguments
WITH <common_table_expression>
Specifies the temporary named result set, also known as common table expression, defined within the scope of
the DELETE statement. The result set is derived from a SELECT statement.
Common table expressions can also be used with the SELECT, INSERT, UPDATE, and CREATE VIEW statements.
For more information, see WITH common_table_expression (Transact-SQL ).
TOP (expression) [ PERCENT ]
Specifies the number or percent of random rows that will be deleted. expression can be either a number or a
percent of the rows. The rows referenced in the TOP expression used with INSERT, UPDATE, or DELETE are not
arranged in any order. For more information, see TOP (Transact-SQL ).
FROM
An optional keyword that can be used between the DELETE keyword and the target table_or_view_name, or
rowset_function_limited.
table_alias
The alias specified in the FROM table_source clause representing the table or view from which the rows are to be
deleted.
server_name
Applies to: SQL Server 2008 through SQL Server 2017.
The name of the server (using a linked server name or the OPENDATASOURCE function as the server name) on
which the table or view is located. If server_name is specified, database_name and schema_name are required.
database_name
The name of the database.
schema_name
The name of the schema to which the table or view belongs.
table_or_view_name
The name of the table or view from which the rows are to be removed.
A table variable, within its scope, also can be used as a table source in a DELETE statement.
The view referenced by table_or_view_name must be updatable and reference exactly one base table in the
FROM clause of the view definition. For more information about updatable views, see CREATE VIEW (Transact-
SQL ).
rowset_function_limited
Applies to: SQL Server 2008 through SQL Server 2017.
Either the OPENQUERY or OPENROWSET function, subject to provider capabilities.
WITH ( <table_hint_limited> [... n] )
Specifies one or more table hints that are allowed for a target table. The WITH keyword and the parentheses are
required. NOLOCK and READUNCOMMITTED are not allowed. For more information about table hints, see
Table Hints (Transact-SQL ).
<OUTPUT_Clause>
Returns deleted rows, or expressions based on them, as part of the DELETE operation. The OUTPUT clause is not
supported in any DML statements targeting views or remote tables. For more information, see OUTPUT Clause
(Transact-SQL ).
FROM table_source
Specifies an additional FROM clause. This Transact-SQL extension to DELETE allows specifying data from
<table_source> and deleting the corresponding rows from the table in the first FROM clause.
This extension, specifying a join, can be used instead of a subquery in the WHERE clause to identify rows to be
removed.
For more information, see FROM (Transact-SQL ).
WHERE
Specifies the conditions used to limit the number of rows that are deleted. If a WHERE clause is not supplied,
DELETE removes all the rows from the table.
There are two forms of delete operations based on what is specified in the WHERE clause:
Searched deletes specify a search condition to qualify the rows to delete. For example, WHERE
column_name = value.
Positioned deletes use the CURRENT OF clause to specify a cursor. The delete operation occurs at the
current position of the cursor. This can be more accurate than a searched DELETE statement that uses a
WHERE search_condition clause to qualify the rows to be deleted. A searched DELETE statement deletes
multiple rows if the search condition does not uniquely identify a single row.
<search_condition>
Specifies the restricting conditions for the rows to be deleted. There is no limit to the number of predicates that
can be included in a search condition. For more information, see Search Condition (Transact-SQL ).
CURRENT OF
Specifies that the DELETE is performed at the current position of the specified cursor.
GLOBAL
Specifies that cursor_name refers to a global cursor.
cursor_name
Is the name of the open cursor from which the fetch is made. If both a global and a local cursor with the name
cursor_name exist, this argument refers to the global cursor if GLOBAL is specified; otherwise, it refers to the local
cursor. The cursor must allow updates.
cursor_variable_name
The name of a cursor variable. The cursor variable must reference a cursor that allows updates.
OPTION ( <query_hint> [ ,... n] )
Keywords that indicate which optimizer hints are used to customize the way the Database Engine processes the
statement. For more information, see Query Hints (Transact-SQL ).
Best Practices
To delete all the rows in a table, use TRUNCATE TABLE. TRUNCATE TABLE is faster than DELETE and uses fewer
system and transaction log resources. TRUNCATE TABLE has restrictions, for example, the table cannot
participate in replication. For more information, see TRUNCATE TABLE (Transact-SQL )
Use the @@ROWCOUNT function to return the number of deleted rows to the client application. For more
information, see @@ROWCOUNT (Transact-SQL ).
Error Handling
You can implement error handling for the DELETE statement by specifying the statement in a TRY…CATCH
construct.
The DELETE statement may fail if it violates a trigger or tries to remove a row referenced by data in another table
with a FOREIGN KEY constraint. If the DELETE removes multiple rows, and any one of the removed rows violates
a trigger or constraint, the statement is canceled, an error is returned, and no rows are removed.
When a DELETE statement encounters an arithmetic error (overflow, divide by zero, or a domain error) occurring
during expression evaluation, the Database Engine handles these errors as if SET ARITHABORT is set ON. The
rest of the batch is canceled, and an error message is returned.
Interoperability
DELETE can be used in the body of a user-defined function if the object modified is a table variable.
When you delete a row that contains a FILESTREAM column, you also delete its underlying file system files. The
underlying files are removed by the FILESTREAM garbage collector. For more information, see Access
FILESTREAM Data with Transact-SQL.
The FROM clause cannot be specified in a DELETE statement that references, either directly or indirectly, a view
with an INSTEAD OF trigger defined on it. For more information about INSTEAD OF triggers, see CREATE
TRIGGER (Transact-SQL ).
Locking Behavior
By default, a DELETE statement always acquires an exclusive (X) lock on the table it modifies, and holds that lock
until the transaction completes. With an exclusive (X) lock, no other transactions can modify data; read operations
can take place only with the use of the NOLOCK hint or read uncommitted isolation level. You can specify table
hints to override this default behavior for the duration of the DELETE statement by specifying another locking
method, however, we recommend that hints be used only as a last resort by experienced developers and database
administrators. For more information, see Table Hints (Transact-SQL ).
When rows are deleted from a heap the Database Engine may use row or page locking for the operation. As a
result, the pages made empty by the delete operation remain allocated to the heap. When empty pages are not
deallocated, the associated space cannot be reused by other objects in the database.
To delete rows in a heap and deallocate pages, use one of the following methods.
Specify the TABLOCK hint in the DELETE statement. Using the TABLOCK hint causes the delete operation
to take an exclusive lock on the table instead of a row or page lock. This allows the pages to be deallocated.
For more information about the TABLOCK hint, see Table Hints (Transact-SQL ).
Use TRUNCATE TABLE if all rows are to be deleted from the table.
Create a clustered index on the heap before deleting the rows. You can drop the clustered index after the
rows are deleted. This method is more time consuming than the previous methods and uses more
temporary resources.
NOTE
Empty pages can be removed from a heap at any time by using the ALTER TABLE <table_name> REBUILD statement.
Logging Behavior
The DELETE statement is always fully logged.
Security
Permissions
DELETE permissions are required on the target table. SELECT permissions are also required if the statement
contains a WHERE clause.
DELETE permissions default to members of the sysadmin fixed server role, the db_owner and db_datawriter
fixed database roles, and the table owner. Members of the sysadmin, db_owner, and the db_securityadmin
roles, and the table owner can transfer permissions to other users.
Examples
CATEGORY FEATURED SYNTAX ELEMENTS
Deleting rows from a remote table Linked server • OPENQUERY rowset function •
OPENDATASOURCE rowset function
Basic Syntax
Examples in this section demonstrate the basic functionality of the DELETE statement using the minimum
required syntax.
A. Using DELETE with no WHERE clause
The following example deletes all rows from the SalesPersonQuotaHistory table in the AdventureWorks2012
database because a WHERE clause is not used to limit the number of rows deleted.
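The statement itself is not shown above; a sketch of its minimal form:

```sql
-- Deletes every row: no WHERE clause limits the delete.
DELETE FROM Sales.SalesPersonQuotaHistory;
```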
The following example shows a more complex WHERE clause. The WHERE clause defines two conditions that
must be met to determine the rows to delete. The value in the StandardCost column must be between 12.00 and
14.00 and the value in the EndDate column must be NULL. The example also prints the value from the
@@ROWCOUNT function to return the number of deleted rows.
DELETE Production.ProductCostHistory
WHERE StandardCost BETWEEN 12.00 AND 14.00
AND EndDate IS NULL;
PRINT 'Number of rows deleted is ' + CAST(@@ROWCOUNT as char(3));
D. Using joins and subqueries on data in one table to delete rows in another table
The following examples show two ways to delete rows in one table based on data in another table. In both
examples, rows from the SalesPersonQuotaHistory table in the AdventureWorks2012 database are deleted based
on the year-to-date sales stored in the SalesPerson table. The first DELETE statement shows the ISO-compatible
subquery solution, and the second DELETE statement shows the Transact-SQL FROM extension to join the two
tables.
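The ISO-compatible subquery form mentioned above is not shown; a sketch:

```sql
-- SQL-standard subquery solution
DELETE FROM Sales.SalesPersonQuotaHistory
WHERE BusinessEntityID IN
    (SELECT BusinessEntityID
     FROM Sales.SalesPerson
     WHERE SalesYTD > 2500000.00);
```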
-- Transact-SQL extension
DELETE spqh
FROM
Sales.SalesPersonQuotaHistory AS spqh
INNER JOIN Sales.SalesPerson AS sp
ON spqh.BusinessEntityID = sp.BusinessEntityID
WHERE sp.SalesYTD > 2500000.00;
If you have to use TOP to delete rows in a meaningful chronological order, you must use TOP together with
ORDER BY in a subselect statement. The following query deletes the 10 rows of the PurchaseOrderDetail table
that have the earliest due dates. To ensure that only 10 rows are deleted, the column specified in the subselect
statement ( PurchaseOrderID ) is the primary key of the table. Using a nonkey column in the subselect statement
may result in the deletion of more than 10 rows if the specified column contains duplicate values.
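A sketch of such a query, assuming PurchaseOrderDetailID is the unique key column used in the subselect:

```sql
-- Delete the 10 rows with the earliest due dates.
-- TOP ... ORDER BY in the subselect fixes exactly which rows qualify.
DELETE FROM Purchasing.PurchaseOrderDetail
WHERE PurchaseOrderDetailID IN
    (SELECT TOP 10 PurchaseOrderDetailID
     FROM Purchasing.PurchaseOrderDetail
     ORDER BY DueDate ASC);
```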
USE master;
GO
-- Create a link to the remote data source.
-- Specify a valid server name for @datasrc as 'server_name' or 'server_name\instance_name'.
DELETE MyLinkServer.AdventureWorks2012.HumanResources.Department
WHERE DepartmentID > 16;
GO
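The DELETE above presumes a linked server named MyLinkServer was defined first; a sketch of that setup (the provider and data source values are assumptions to be replaced with real ones):

```sql
-- Hypothetical linked-server definition; adjust @datasrc to a real server.
EXEC sp_addlinkedserver
    @server = N'MyLinkServer',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'server_name',
    @catalog = N'AdventureWorks2012';
GO
```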
DELETE Sales.ShoppingCartItem
OUTPUT DELETED.*
WHERE ShoppingCartID = 20621;
--Verify the rows in the table matching the WHERE clause have been deleted.
SELECT COUNT(*) AS [Rows in Table]
FROM Sales.ShoppingCartItem
WHERE ShoppingCartID = 20621;
GO
DELETE Production.ProductProductPhoto
OUTPUT DELETED.ProductID,
p.Name,
p.ProductModelID,
DELETED.ProductPhotoID
INTO @MyTableVar
FROM Production.ProductProductPhoto AS ph
JOIN Production.Product as p
ON ph.ProductID = p.ProductID
WHERE p.ProductModelID BETWEEN 120 and 130;
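The OUTPUT ... INTO statement above requires the table variable @MyTableVar to be declared beforehand; a sketch, with column types assumed from the Product and ProductProductPhoto tables:

```sql
DECLARE @MyTableVar table (
    ProductID       int NOT NULL,
    Name            nvarchar(50) NOT NULL,
    ProductModelID  int NOT NULL,
    ProductPhotoID  int NOT NULL
);
```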
See Also
CREATE TRIGGER (Transact-SQL )
INSERT (Transact-SQL )
SELECT (Transact-SQL )
TRUNCATE TABLE (Transact-SQL )
UPDATE (Transact-SQL )
WITH common_table_expression (Transact-SQL )
@@ROWCOUNT (Transact-SQL )
DISABLE TRIGGER (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Disables a trigger.
Transact-SQL Syntax Conventions
Syntax
DISABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL }
ON { object_name | DATABASE | ALL SERVER } [ ; ]
Arguments
schema_name
Is the name of the schema to which the trigger belongs. schema_name cannot be specified for DDL or logon
triggers.
trigger_name
Is the name of the trigger to be disabled.
ALL
Indicates that all triggers defined at the scope of the ON clause are disabled.
Caution
SQL Server creates triggers in databases that are published for merge replication. Specifying ALL in published
databases disables these triggers, which disrupts replication. Verify that the current database is not published for
merge replication before specifying ALL.
object_name
Is the name of the table or view on which the DML trigger trigger_name was created to execute.
DATABASE
For a DDL trigger, indicates that trigger_name was created or modified to execute with database scope.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
For a DDL trigger, indicates that trigger_name was created or modified to execute with server scope. ALL SERVER
also applies to logon triggers.
NOTE
This option is not available in a contained database.
Remarks
Triggers are enabled by default when they are created. Disabling a trigger does not drop it. The trigger still exists
as an object in the current database. However, the trigger does not fire when any Transact-SQL statements on
which it was programmed are executed. Triggers can be re-enabled by using ENABLE TRIGGER. DML triggers
defined on tables can also be disabled or enabled by using ALTER TABLE.
Changing the trigger by using the ALTER TRIGGER statement enables the trigger.
Permissions
To disable a DML trigger, at a minimum, a user must have ALTER permission on the table or view on which the
trigger was created.
To disable a DDL trigger with server scope (ON ALL SERVER ) or a logon trigger, a user must have CONTROL
SERVER permission on the server. To disable a DDL trigger with database scope (ON DATABASE ), at a minimum,
a user must have ALTER ANY DATABASE DDL TRIGGER permission in the current database.
Examples
The following examples use the AdventureWorks2012 database.
A. Disabling a DML trigger on a table
The following example disables trigger uAddress that was created on table Address .
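The statement is not shown above; assuming the trigger and table live in the Person schema, a sketch:

```sql
DISABLE TRIGGER Person.uAddress ON Person.Address;
GO
```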
C. Disabling all triggers that were defined with the same scope
The following example disables all DDL triggers that were created at the server scope.
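A sketch of the statement, using the ALL and ALL SERVER keywords described above:

```sql
DISABLE TRIGGER ALL ON ALL SERVER;
GO
```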
See Also
ENABLE TRIGGER (Transact-SQL )
ALTER TRIGGER (Transact-SQL )
CREATE TRIGGER (Transact-SQL )
DROP TRIGGER (Transact-SQL )
sys.triggers (Transact-SQL )
DROP AGGREGATE (Transact-SQL)
5/4/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a user-defined aggregate function from the current database. User-defined aggregate functions are
created by using CREATE AGGREGATE.
Transact-SQL Syntax Conventions
Syntax
DROP AGGREGATE [ IF EXISTS ] [ schema_name . ] aggregate_name
Arguments
IF EXISTS
Applies to: SQL Server (SQL Server 2016 (13.x) through current version).
Conditionally drops the aggregate only if it already exists.
schema_name
Is the name of the schema to which the user-defined aggregate function belongs.
aggregate_name
Is the name of the user-defined aggregate function you want to drop.
Remarks
DROP AGGREGATE does not execute if there are any views, functions, or stored procedures created with schema
binding that reference the user-defined aggregate function you want to drop.
Permissions
To execute DROP AGGREGATE, at a minimum, a user must have ALTER permission on the schema to which the
user-defined aggregate belongs, or CONTROL permission on the aggregate.
Examples
The following example drops the aggregate Concatenate .
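Assuming the aggregate belongs to the dbo schema, a sketch of the statement:

```sql
DROP AGGREGATE dbo.Concatenate;
```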
See Also
CREATE AGGREGATE (Transact-SQL )
Create User-defined Aggregates
DROP APPLICATION ROLE (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an application role from the current database.
Transact-SQL Syntax Conventions
Syntax
DROP APPLICATION ROLE rolename
Arguments
rolename
Specifies the name of the application role to be dropped.
Remarks
If the application role owns any securables it cannot be dropped. Before dropping an application role that owns
securables, you must first transfer ownership of the securables, or drop them.
Caution
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER AUTHORIZATION.
In such databases you must instead use the new catalog views. The new catalog views take into account the
separation of principals and schemas that was introduced in SQL Server 2005. For more information about
catalog views, see Catalog Views (Transact-SQL ).
Permissions
Requires ALTER ANY APPLICATION ROLE permission on the database.
Examples
Drop application role "weekly_ledger" from the database.
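A sketch of the statement:

```sql
DROP APPLICATION ROLE weekly_ledger;
```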
See Also
Application Roles
CREATE APPLICATION ROLE (Transact-SQL )
ALTER APPLICATION ROLE (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP ASSEMBLY (Transact-SQL)
5/4/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an assembly and all its associated files from the current database. Assemblies are created by using
CREATE ASSEMBLY and modified by using ALTER ASSEMBLY.
Transact-SQL Syntax Conventions
Syntax
DROP ASSEMBLY [ IF EXISTS ] assembly_name [ ,...n ]
[ WITH NO DEPENDENTS ]
[ ; ]
Arguments
IF EXISTS
Applies to: SQL Server (SQL Server 2016 (13.x) through current version).
Conditionally drops the assembly only if it already exists.
assembly_name
Is the name of the assembly you want to drop.
WITH NO DEPENDENTS
If specified, drops only assembly_name and none of the dependent assemblies that are referenced by the
assembly. If not specified, DROP ASSEMBLY drops assembly_name and all dependent assemblies.
Remarks
Dropping an assembly removes an assembly and all its associated files, such as source code and debug files, from
the database.
If WITH NO DEPENDENTS is not specified, DROP ASSEMBLY drops assembly_name and all dependent
assemblies. If an attempt to drop any dependent assemblies fails, DROP ASSEMBLY returns an error.
DROP ASSEMBLY returns an error if the assembly is referenced by another assembly that exists in the database
or if it is used by common language runtime (CLR ) functions, procedures, triggers, user-defined types or
aggregates in the current database.
DROP ASSEMBLY does not interfere with any code referencing the assembly that is currently running. However,
after DROP ASSEMBLY executes, any attempts to invoke the assembly code will fail.
Permissions
Requires ownership of the assembly, or CONTROL permission on it.
Examples
The following example assumes the assembly HelloWorld is already created in the instance of SQL Server.
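A sketch of the drop, once the assembly exists:

```sql
DROP ASSEMBLY HelloWorld;
```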
See Also
CREATE ASSEMBLY (Transact-SQL )
ALTER ASSEMBLY (Transact-SQL )
EVENTDATA (Transact-SQL )
Getting Information About Assemblies
DROP ASYMMETRIC KEY (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an asymmetric key from the database.
Transact-SQL Syntax Conventions
Syntax
DROP ASYMMETRIC KEY key_name [ REMOVE PROVIDER KEY ]
Arguments
key_name
Is the name of the asymmetric key to be dropped from the database.
REMOVE PROVIDER KEY
Removes an Extensible Key Management (EKM) key from an EKM device. For more information about Extensible
Key Management, see Extensible Key Management (EKM ).
Remarks
An asymmetric key with which a symmetric key in the database has been encrypted, or to which a user or login is
mapped, cannot be dropped. Before you drop such a key, you must drop any user or login that is mapped to the
key. You must also drop or change any symmetric key encrypted with the asymmetric key. You can use the DROP
ENCRYPTION option of ALTER SYMMETRIC KEY to remove encryption by an asymmetric key.
Metadata of asymmetric keys can be accessed by using the sys.asymmetric_keys catalog view. The keys
themselves cannot be directly viewed from inside the database.
If the asymmetric key is mapped to an Extensible Key Management (EKM ) key on an EKM device and the
REMOVE PROVIDER KEY option is not specified, the key will be dropped from the database but not the device. A
warning will be issued.
Permissions
Requires CONTROL permission on the asymmetric key.
Examples
The following example removes the asymmetric key MirandaXAsymKey6 from the AdventureWorks2012 database.
USE AdventureWorks2012;
DROP ASYMMETRIC KEY MirandaXAsymKey6;
See Also
CREATE ASYMMETRIC KEY (Transact-SQL )
ALTER ASYMMETRIC KEY (Transact-SQL )
Encryption Hierarchy
ALTER SYMMETRIC KEY (Transact-SQL )
DROP AVAILABILITY GROUP (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes the specified availability group and all of its replicas. If a server instance that hosts one of the availability
replicas is offline when you delete an availability group, after coming online, the server instance will drop the local
availability replica. Dropping an availability group also deletes the associated availability group listener, if any.
IMPORTANT
If possible, remove the availability group only while connected to the server instance that hosts the primary replica. When
the availability group is dropped from the primary replica, changes are allowed in the former primary databases (without
high availability protection). Deleting an availability group from a secondary replica leaves the primary replica in the
RESTORING state, and changes are not allowed on the databases.
For information about alternative ways to drop an availability group, see Remove an Availability Group (SQL
Server).
Transact-SQL Syntax Conventions
Syntax
DROP AVAILABILITY GROUP group_name
[ ; ]
Arguments
group_name
Specifies the name of the availability group to be dropped.
On a secondary replica, DROP AVAILABILITY GROUP should be used only for emergency
purposes. This is because dropping an availability group takes the availability group offline. If you drop the
availability group from a secondary replica, the primary replica cannot determine whether the OFFLINE
state occurred because of quorum loss, a forced failover, or a DROP AVAILABILITY GROUP command.
The primary replica transitions to the RESTORING state to prevent a possible split-brain situation. For
more information, see How It Works: DROP AVAILABILITY GROUP Behaviors (CSS SQL Server
Engineers blog).
Security
Permissions
Requires ALTER AVAILABILITY GROUP permission on the availability group, CONTROL AVAILABILITY
GROUP permission, ALTER ANY AVAILABILITY GROUP permission, or CONTROL SERVER permission. To
drop an availability group that is not hosted by the local server instance you need CONTROL SERVER
permission or CONTROL permission on that availability group.
Examples
The following example drops the AccountsAG availability group.
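A sketch of the statement:

```sql
DROP AVAILABILITY GROUP AccountsAG;
```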
Related Content
How It Works: DROP AVAILABILITY GROUP Behaviors (CSS SQL Server Engineers blog)
See Also
ALTER AVAILABILITY GROUP (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
Remove an Availability Group (SQL Server)
DROP BROKER PRIORITY (Transact-SQL)
5/4/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a conversation priority from the current database.
Transact-SQL Syntax Conventions
Syntax
DROP BROKER PRIORITY ConversationPriorityName
[;]
Arguments
ConversationPriorityName
Specifies the name of the conversation priority to be removed.
Remarks
When you drop a conversation priority, any existing conversations continue to operate with the priority levels they
were assigned from the conversation priority.
Permissions
Permission for creating a conversation priority defaults to members of the db_ddladmin or db_owner fixed
database roles, and to the sysadmin fixed server role. Requires ALTER permission on the database.
Examples
The following example drops the conversation priority named InitiatorAToTargetPriority .
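A sketch of the statement:

```sql
DROP BROKER PRIORITY InitiatorAToTargetPriority;
```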
See Also
ALTER BROKER PRIORITY (Transact-SQL )
CREATE BROKER PRIORITY (Transact-SQL )
sys.conversation_priorities (Transact-SQL )
DROP CERTIFICATE (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a certificate from the database.
IMPORTANT
A backup of the certificate used for database encryption should be retained even if the encryption is no longer enabled on a
database. Even though the database is not encrypted anymore, parts of the transaction log may still remain protected, and
the certificate may be needed for some operations until the full backup of the database is performed. The certificate is also
needed to be able to restore from the backups created at the time the database was encrypted.
Syntax
DROP CERTIFICATE certificate_name
Arguments
certificate_name
Is the unique name by which the certificate is known in the database.
Remarks
Certificates can only be dropped if no entities are associated with them.
Permissions
Requires CONTROL permission on the certificate.
Examples
The following example drops the certificate Shipping04 from the AdventureWorks2012 database.
USE AdventureWorks2012;
DROP CERTIFICATE Shipping04;
USE master;
DROP CERTIFICATE Shipping04;
See Also
BACKUP CERTIFICATE (Transact-SQL )
CREATE CERTIFICATE (Transact-SQL )
ALTER CERTIFICATE (Transact-SQL )
Encryption Hierarchy
EVENTDATA (Transact-SQL )
DROP COLUMN ENCRYPTION KEY (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a column encryption key from a database.
Transact-SQL Syntax Conventions
Syntax
DROP COLUMN ENCRYPTION KEY key_name [;]
Arguments
key_name
Is the name of the column encryption key to be dropped from the database.
Remarks
A column encryption key cannot be dropped if it is used to encrypt any column in the database. All columns using
the column encryption key must first be dropped.
Permissions
Requires ALTER ANY COLUMN ENCRYPTION KEY permission on the database.
Examples
A. Dropping a column encryption key
The following example drops a column encryption key called MyCEK .
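A sketch of the statement:

```sql
DROP COLUMN ENCRYPTION KEY MyCEK;
```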
See Also
Always Encrypted (Database Engine)
CREATE COLUMN ENCRYPTION KEY (Transact-SQL )
ALTER COLUMN ENCRYPTION KEY (Transact-SQL )
CREATE COLUMN MASTER KEY (Transact-SQL )
DROP COLUMN MASTER KEY (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a column master key from a database. This is a metadata operation.
Transact-SQL Syntax Conventions
Syntax
DROP COLUMN MASTER KEY key_name;
Arguments
key_name
The name of the column master key.
Remarks
The column master key can only be dropped if there are no column encryption key values encrypted with the
column master key. To drop column encryption key values, use the DROP COLUMN ENCRYPTION KEY
statement.
Permissions
Requires ALTER ANY COLUMN MASTER KEY permission on the database.
Examples
A. Dropping a column master key
The following example drops a column master key called MyCMK .
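A sketch of the statement:

```sql
DROP COLUMN MASTER KEY MyCMK;
```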
See Also
CREATE COLUMN MASTER KEY (Transact-SQL )
CREATE COLUMN ENCRYPTION KEY (Transact-SQL )
DROP COLUMN ENCRYPTION KEY (Transact-SQL )
Always Encrypted (Database Engine)
sys.column_master_keys (Transact-SQL )
DROP CONTRACT (Transact-SQL)
5/4/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing contract from a database.
Transact-SQL Syntax Conventions
Syntax
DROP CONTRACT contract_name
[ ; ]
Arguments
contract_name
The name of the contract to drop. Server, database, and schema names cannot be specified.
Remarks
You cannot drop a contract if any services or conversation priorities refer to the contract.
When you drop a contract, Service Broker ends any existing conversations that use the contract with an error.
Permissions
Permission for dropping a contract defaults to the owner of the contract, members of the db_ddladmin or
db_owner fixed database roles, and members of the sysadmin fixed server role.
Examples
The following example removes the contract //Adventure-Works.com/Expenses/ExpenseSubmission from the database.
DROP CONTRACT
[//Adventure-Works.com/Expenses/ExpenseSubmission] ;
See Also
ALTER BROKER PRIORITY (Transact-SQL)
ALTER SERVICE (Transact-SQL)
CREATE CONTRACT (Transact-SQL)
DROP BROKER PRIORITY (Transact-SQL)
DROP SERVICE (Transact-SQL)
EVENTDATA (Transact-SQL)
DROP CREDENTIAL (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a credential from the server.
Transact-SQL Syntax Conventions
Syntax
DROP CREDENTIAL credential_name
Arguments
credential_name
Is the name of the credential to remove from the server.
Remarks
To drop the secret associated with a credential without dropping the credential itself, use ALTER CREDENTIAL.
Information about credentials is visible in the sys.credentials catalog view.
WARNING
Proxies are associated with a credential. Deleting a credential that is used by a proxy leaves the associated proxy in an
unusable state. Before dropping a credential used by a proxy, delete the proxy by using sp_delete_proxy (Transact-SQL),
and re-create the associated proxy by using sp_add_proxy (Transact-SQL).
Permissions
Requires ALTER ANY CREDENTIAL permission. If dropping a system credential, requires CONTROL SERVER
permission.
Examples
The following example removes the credential called Saddles.
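A sketch of that statement, using the credential name from the text above:

```sql
DROP CREDENTIAL Saddles;
GO
```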
See Also
Credentials (Database Engine)
CREATE CREDENTIAL (Transact-SQL)
ALTER CREDENTIAL (Transact-SQL)
DROP DATABASE SCOPED CREDENTIAL (Transact-SQL)
sys.credentials (Transact-SQL)
DROP CRYPTOGRAPHIC PROVIDER (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a cryptographic provider within SQL Server.
Transact-SQL Syntax Conventions
Syntax
DROP CRYPTOGRAPHIC PROVIDER provider_name
Arguments
provider_name
Is the name of the Extensible Key Management provider.
Remarks
To delete an Extensible Key Management (EKM) provider, all sessions that use the provider must be stopped.
An EKM provider can only be dropped if there are no credentials mapped to it.
If there are keys mapped to an EKM provider when it is dropped, the GUIDs for the keys remain stored in SQL
Server. If a provider is created later with the same key GUIDs, the keys will be reused.
Permissions
Requires CONTROL permission on the symmetric key.
Examples
The following example drops a cryptographic provider called SecurityProvider.
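A sketch of that statement (assuming all sessions using the provider have been stopped and no credentials are mapped to it, per the Remarks above):

```sql
DROP CRYPTOGRAPHIC PROVIDER SecurityProvider;
GO
```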
See Also
Extensible Key Management (EKM)
CREATE CRYPTOGRAPHIC PROVIDER (Transact-SQL)
ALTER CRYPTOGRAPHIC PROVIDER (Transact-SQL)
CREATE SYMMETRIC KEY (Transact-SQL)
DROP DATABASE (Transact-SQL)
5/3/2018 • 4 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more user databases or database snapshots from an instance of SQL Server.
Transact-SQL Syntax Conventions
Syntax
-- SQL Server Syntax
DROP DATABASE [ IF EXISTS ] { database_name | database_snapshot_name } [ ,...n ] [;]
-- Azure SQL Database, Azure SQL Data Warehouse and Parallel Data Warehouse Syntax
DROP DATABASE database_name [;]
Arguments
IF EXISTS
Applies to: SQL Server (SQL Server 2016 (13.x) through current version).
Conditionally drops the database only if it already exists.
database_name
Specifies the name of the database to be removed. To display a list of databases, use the sys.databases catalog
view.
database_snapshot_name
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the name of a database snapshot to be removed.
General Remarks
A database can be dropped regardless of its state: offline, read-only, suspect, and so on. To display the current
state of a database, use the sys.databases catalog view.
A dropped database can be re-created only by restoring a backup. Database snapshots cannot be backed up
and, therefore, cannot be restored.
When a database is dropped, the master database should be backed up.
Dropping a database deletes the database from an instance of SQL Server and deletes the physical disk files
used by the database. If the database or any one of its files is offline when it is dropped, the disk files are not
deleted. These files can be deleted manually by using Windows Explorer. To remove a database from the current
server without deleting the files from the file system, use sp_detach_db.
WARNING
Dropping a database that has FILE_SNAPSHOT backups associated with it will succeed, but the database files that have
associated snapshots will not be deleted to avoid invalidating the backups referring to these database files. The file will be
truncated, but will not be physically deleted in order to keep the FILE_SNAPSHOT backups intact. For more information,
see SQL Server Backup and Restore with Microsoft Azure Blob Storage Service. Applies to: SQL Server 2016 (13.x)
through current version.
SQL Server
Dropping a database snapshot deletes the database snapshot from an instance of SQL Server and deletes the
physical NTFS File System sparse files used by the snapshot. For information about using sparse files by
database snapshots, see Database Snapshots (SQL Server). Dropping a database snapshot clears the plan cache
for the instance of SQL Server. Clearing the plan cache causes a recompilation of all subsequent execution plans
and can cause a sudden, temporary decrease in query performance. For each cleared cachestore in the plan
cache, the SQL Server error log contains the following informational message: " SQL Server has encountered
%d occurrence(s) of cachestore flush for the '%s' cachestore (part of plan cache) due to some database
maintenance or reconfigure operations". This message is logged every five minutes as long as the cache is
flushed within that time interval.
Interoperability
SQL Server
To drop a database published for transactional replication, or published or subscribed to merge replication, you
must first remove replication from the database. If a database is damaged or replication cannot first be removed
or both, in most cases you still can drop the database by using ALTER DATABASE to set the database offline and
then dropping it.
If the database is involved in log shipping, remove log shipping before dropping the database. For more
information, see About Log Shipping (SQL Server).
WARNING
This is not a fail-proof approach, because the first connection made by another thread can claim the single
SINGLE_USER connection, causing your own connection to fail. SQL Server does not provide a built-in way to drop
databases under load.
SQL Server
Any database snapshots on a database must be dropped before the database can be dropped.
Dropping a database enabled for Stretch Database does not remove the remote data. If you want to delete the
remote data, you have to remove it manually.
Azure SQL Database
You must be connected to the master database to drop a database.
The DROP DATABASE statement must be the only statement in a SQL batch and you can drop only one
database at a time.
Azure SQL Data Warehouse
You must be connected to the master database to drop a database.
The DROP DATABASE statement must be the only statement in a SQL batch and you can drop only one
database at a time.
Permissions
SQL Server
Requires the CONTROL permission on the database, or ALTER ANY DATABASE permission, or membership
in the db_owner fixed database role.
Azure SQL Database
Only the server-level principal login (created by the provisioning process) or members of the dbmanager
database role can drop a database.
Parallel Data Warehouse
Requires the CONTROL permission on the database, or ALTER ANY DATABASE permission, or membership
in the db_owner fixed database role.
Examples
A. Dropping a single database
The following example removes the Sales database.
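A sketch of that statement, together with the conditional form described under the IF EXISTS argument above:

```sql
DROP DATABASE Sales;
GO

-- Beginning with SQL Server 2016 (13.x), the drop can be made conditional,
-- so no error is raised if the database does not exist:
DROP DATABASE IF EXISTS Sales;
GO
```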
See Also
ALTER DATABASE (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
EVENTDATA (Transact-SQL)
sys.databases (Transact-SQL)
DROP DATABASE AUDIT SPECIFICATION (Transact-
SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a database audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions
Syntax
DROP DATABASE AUDIT SPECIFICATION audit_specification_name
[ ; ]
Arguments
audit_specification_name
Name of an existing audit specification object.
Remarks
A DROP DATABASE AUDIT SPECIFICATION removes the metadata for the audit specification, but not the audit
data collected before the DROP command was issued. You must set the state of a database audit specification to
OFF using ALTER DATABASE AUDIT SPECIFICATION before it can be dropped.
Permissions
Users with the ALTER ANY DATABASE AUDIT permission can drop database audit specifications.
Examples
A. Dropping a Database Audit Specification
The following example drops a database audit specification called HIPAA_Audit_DB_Specification.
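A sketch of the statements involved; per the Remarks above, the specification must be set to the OFF state before it can be dropped:

```sql
-- The specification must be in the OFF state before it can be dropped.
ALTER DATABASE AUDIT SPECIFICATION HIPAA_Audit_DB_Specification
WITH (STATE = OFF);
GO
DROP DATABASE AUDIT SPECIFICATION HIPAA_Audit_DB_Specification;
GO
```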
For a full example of creating an audit, see SQL Server Audit (Database Engine).
See Also
CREATE SERVER AUDIT (Transact-SQL)
ALTER SERVER AUDIT (Transact-SQL)
DROP SERVER AUDIT (Transact-SQL)
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL)
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL)
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sys.fn_get_audit_file (Transact-SQL)
sys.server_audits (Transact-SQL)
sys.server_file_audits (Transact-SQL)
sys.server_audit_specifications (Transact-SQL)
sys.server_audit_specification_details (Transact-SQL)
sys.database_audit_specifications (Transact-SQL)
sys.database_audit_specification_details (Transact-SQL)
sys.dm_server_audit_status (Transact-SQL)
sys.dm_audit_actions (Transact-SQL)
sys.dm_audit_class_type_map (Transact-SQL)
Create a Server Audit and Server Audit Specification
DROP DATABASE ENCRYPTION KEY (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a database encryption key that is used in transparent database encryption. For more information about
transparent database encryption, see Transparent Data Encryption (TDE).
IMPORTANT
The backup of the certificate that was protecting the database encryption key should be retained even if the encryption is no
longer enabled on a database. Even though the database is not encrypted anymore, parts of the transaction log may still
remain protected, and the certificate may be needed for some operations until the full backup of the database is performed.
Syntax
DROP DATABASE ENCRYPTION KEY
Remarks
If the database is encrypted, you must first remove encryption from the database by using the ALTER DATABASE
statement. Wait for decryption to complete before removing the database encryption key. For more information
about the ALTER DATABASE statement, see ALTER DATABASE SET Options (Transact-SQL ). To view the state of
the database, use the sys.dm_database_encryption_keys dynamic management view.
Permissions
Requires CONTROL permission on the database.
Examples
The following example removes the database encryption and drops the database encryption key.
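A sketch of the sequence described in the Remarks above. The database name MyTDEDatabase is a hypothetical placeholder; substitute the name of the TDE-protected database:

```sql
-- MyTDEDatabase is a hypothetical database name.
USE master;
GO
ALTER DATABASE MyTDEDatabase
SET ENCRYPTION OFF;
GO
-- Wait for decryption to complete; check sys.dm_database_encryption_keys
-- for the database's encryption_state before continuing.
USE MyTDEDatabase;
GO
DROP DATABASE ENCRYPTION KEY;
GO
```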
See Also
Transparent Data Encryption (TDE)
SQL Server Encryption
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
ALTER DATABASE SET Options (Transact-SQL)
CREATE DATABASE ENCRYPTION KEY (Transact-SQL)
ALTER DATABASE ENCRYPTION KEY (Transact-SQL)
sys.dm_database_encryption_keys (Transact-SQL)
DROP DATABASE SCOPED CREDENTIAL (Transact-
SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Removes a database scoped credential from the server.
Transact-SQL Syntax Conventions
Syntax
DROP DATABASE SCOPED CREDENTIAL credential_name
Arguments
credential_name
Is the name of the database scoped credential to remove from the server.
Remarks
To drop the secret associated with a database scoped credential without dropping the database scoped credential
itself, use ALTER DATABASE SCOPED CREDENTIAL.
Information about database scoped credentials is visible in the sys.database_scoped_credentials catalog view.
Permissions
Requires ALTER permission on the credential.
Examples
The following example removes the database scoped credential called SalesAccess.
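A sketch of that statement, using the credential name from the text above:

```sql
DROP DATABASE SCOPED CREDENTIAL SalesAccess;
GO
```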
See Also
Credentials (Database Engine)
CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL)
ALTER DATABASE SCOPED CREDENTIAL (Transact-SQL)
sys.database_scoped_credentials
CREATE CREDENTIAL (Transact-SQL)
sys.credentials (Transact-SQL)
DROP DEFAULT (Transact-SQL)
5/3/2018 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more user-defined defaults from the current database.
IMPORTANT
DROP DEFAULT will be removed in the next version of Microsoft SQL Server. Do not use DROP DEFAULT in new
development work, and plan to modify applications that currently use them. Instead, use default definitions that you can
create by using the DEFAULT keyword of ALTER TABLE or CREATE TABLE.
Syntax
DROP DEFAULT [ IF EXISTS ] { [ schema_name . ] default_name } [ ,...n ] [ ; ]
Arguments
IF EXISTS
Applies to: SQL Server (SQL Server 2016 (13.x) through current version).
Conditionally drops the default only if it already exists.
schema_name
Is the name of the schema to which the default belongs.
default_name
Is the name of an existing default. To see a list of defaults that exist, execute sp_help. Defaults must comply with
the rules for identifiers. Specifying the default schema name is optional.
Remarks
Before dropping a default, unbind the default by executing sp_unbindefault if the default is currently bound to a
column or an alias data type.
After a default is dropped from a column that allows for null values, NULL is inserted in that position when rows
are added and no value is explicitly supplied. After a default is dropped from a NOT NULL column, an error
message is returned when rows are added and no value is explicitly supplied. These rows are added later as part of
the typical INSERT statement behavior.
Permissions
To execute DROP DEFAULT, at a minimum, a user must have ALTER permission on the schema to which the
default belongs.
Examples
A. Dropping a default
If a default has not been bound to a column or to an alias data type, it can just be dropped using DROP DEFAULT.
The following example removes the user-created default named datedflt.
USE AdventureWorks2012;
GO
IF EXISTS (SELECT name FROM sys.objects
WHERE name = 'datedflt'
AND type = 'D')
DROP DEFAULT datedflt;
GO
Beginning with SQL Server 2016 (13.x), you can use the following syntax.
DROP DEFAULT IF EXISTS datedflt;
GO
B. Dropping a default that has been bound to a column
The following example unbinds the default bound to the Phone column of the Person.Contact table, and then drops the default named phonedflt.
USE AdventureWorks2012;
GO
BEGIN
EXEC sp_unbindefault 'Person.Contact.Phone'
DROP DEFAULT phonedflt
END;
GO
See Also
CREATE DEFAULT (Transact-SQL)
sp_helptext (Transact-SQL)
sp_help (Transact-SQL)
sp_unbindefault (Transact-SQL)
DROP ENDPOINT (Transact-SQL)
5/4/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing endpoint.
Transact-SQL Syntax Conventions
Syntax
DROP ENDPOINT endPointName
Arguments
endPointName
Is the name of the endpoint to be removed.
Remarks
The ENDPOINT DDL statements cannot be executed inside a user transaction.
Permissions
User must be a member of the sysadmin fixed server role, the owner of the endpoint, or have been granted
CONTROL permission on the endpoint.
Examples
The following example removes a previously created endpoint called sql_endpoint.
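A sketch of that statement, using the endpoint name from the text above:

```sql
DROP ENDPOINT sql_endpoint;
```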
See Also
CREATE ENDPOINT (Transact-SQL)
ALTER ENDPOINT (Transact-SQL)
EVENTDATA (Transact-SQL)
DROP EXTERNAL DATA SOURCE (Transact-SQL)
5/4/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a PolyBase external data source.
Transact-SQL Syntax Conventions
Syntax
-- Drop an external data source
DROP EXTERNAL DATA SOURCE external_data_source_name
[;]
Arguments
external_data_source_name
The name of the external data source to drop.
Metadata
To view a list of external data sources use the sys.external_data_sources system view.
Permissions
Requires ALTER ANY EXTERNAL DATA SOURCE.
Locking
Takes a shared lock on the external data source object.
General Remarks
Dropping an external data source does not remove the external data.
Examples
A. Using basic syntax
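The example code was lost in extraction; a minimal sketch follows, where mydatasource is a hypothetical external data source name:

```sql
-- mydatasource is a hypothetical name.
DROP EXTERNAL DATA SOURCE mydatasource;
```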
See Also
CREATE EXTERNAL DATA SOURCE (Transact-SQL)
DROP EXTERNAL FILE FORMAT (Transact-SQL)
5/4/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a PolyBase external file format.
Transact-SQL Syntax Conventions
Syntax
-- Drop an external file format
DROP EXTERNAL FILE FORMAT external_file_format_name
[;]
Arguments
external_file_format_name
The name of the external file format to drop.
Metadata
To view a list of external file formats, use the sys.external_file_formats (Transact-SQL) system view.
Permissions
Requires ALTER ANY EXTERNAL FILE FORMAT.
General Remarks
Dropping an external file format does not remove the external data.
Locking
Takes a shared lock on the external file format object.
Examples
A. Using basic syntax
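The example code was lost in extraction; a minimal sketch follows, where myfileformat is a hypothetical external file format name:

```sql
-- myfileformat is a hypothetical name.
DROP EXTERNAL FILE FORMAT myfileformat;
```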
See Also
CREATE EXTERNAL FILE FORMAT (Transact-SQL)
DROP EXTERNAL LIBRARY (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Deletes an existing package library. Package libraries are used by supported external runtimes, such as R or
Python.
Syntax
DROP EXTERNAL LIBRARY library_name
[ AUTHORIZATION owner_name ];
Arguments
library_name
Specifies the name of an existing package library.
Libraries are scoped to the user. Library names must be unique within the context of a specific user or owner.
owner_name
Specifies the name of the user or role that owns the external library.
Database owners can delete libraries created by other users.
Permissions
To delete a library requires the privilege ALTER ANY EXTERNAL LIBRARY. By default, any database owner, or the
owner of the object, can also delete an external library.
Return values
An informational message is returned if the statement was successful.
Remarks
Unlike other DROP statements in SQL Server, this statement supports specifying an optional authorization clause.
This allows dbo or users in the db_owner role to drop a package library uploaded by a regular user in the
database.
Examples
The following example removes the external library customPackage from a database:
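A sketch of that statement; the second form uses the optional AUTHORIZATION clause described above (RUser is a hypothetical owner name):

```sql
DROP EXTERNAL LIBRARY customPackage;

-- A dbo or db_owner user could instead drop a library owned by another user:
-- DROP EXTERNAL LIBRARY customPackage AUTHORIZATION RUser;
```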
See also
CREATE EXTERNAL LIBRARY (Transact-SQL)
ALTER EXTERNAL LIBRARY (Transact-SQL)
sys.external_library_files
sys.external_libraries
DROP EXTERNAL RESOURCE POOL (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Deletes a Resource Governor external resource pool used to define resources for external processes. For R
Services, the external pool governs rterm.exe, BxlServer.exe, and other processes spawned by them. External
resource pools are created by using CREATE EXTERNAL RESOURCE POOL (Transact-SQL) and modified by
using ALTER EXTERNAL RESOURCE POOL (Transact-SQL).
Transact-SQL Syntax Conventions
Syntax
DROP EXTERNAL RESOURCE POOL pool_name
Arguments
pool_name
The name of the external resource pool to be deleted.
Remarks
You cannot drop an external resource pool if it contains workload groups.
You cannot drop the Resource Governor default or internal pools.
The drop operation does not take effect until ALTER RESOURCE GOVERNOR RECONFIGURE is executed.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states. For
more information, see Resource Governor.
Permissions
Requires CONTROL SERVER permission.
Examples
The following example drops the external resource pool named ex_pool.
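A sketch of that statement; ALTER RESOURCE GOVERNOR RECONFIGURE is included because Resource Governor changes take effect only after reconfiguration:

```sql
DROP EXTERNAL RESOURCE POOL ex_pool;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```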
See Also
external scripts enabled Server Configuration Option
SQL Server R Services
Known Issues for SQL Server R Services
CREATE EXTERNAL RESOURCE POOL (Transact-SQL)
ALTER EXTERNAL RESOURCE POOL (Transact-SQL)
DROP WORKLOAD GROUP (Transact-SQL)
DROP RESOURCE POOL (Transact-SQL)
DROP EXTERNAL TABLE (Transact-SQL)
5/4/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a PolyBase external table from a database. This does not delete the external data.
Transact-SQL Syntax Conventions
Syntax
DROP EXTERNAL TABLE [ database_name . [schema_name ] . | schema_name . ] table_name
[;]
Arguments
[ database_name . [schema_name] . | schema_name . ] table_name
The one- to three-part name of the external table to remove. The table name can optionally include the schema, or
the database and schema.
Permissions
Requires ALTER permission on the schema to which the table belongs.
General Remarks
Dropping an external table removes all table-related metadata. It does not delete the external data.
Examples
A. Using basic syntax
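The example code was lost in extraction; a minimal sketch follows, where dbo.SalesPerson is a hypothetical external table name:

```sql
-- dbo.SalesPerson is a hypothetical external table name.
-- The external data itself is not deleted.
DROP EXTERNAL TABLE dbo.SalesPerson;
```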
DROP EVENT NOTIFICATION (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an event notification trigger from the current database.
Transact-SQL Syntax Conventions
Syntax
DROP EVENT NOTIFICATION notification_name [ ,...n ]
ON { SERVER | DATABASE | QUEUE queue_name }
[ ; ]
Arguments
notification_name
Is the name of the event notification to remove. Multiple event notifications can be specified. To see a list of
currently created event notifications, use sys.event_notifications (Transact-SQL).
SERVER
Indicates the scope of the event notification applies to the current server. SERVER must be specified if it was
specified when the event notification was created.
DATABASE
Indicates the scope of the event notification applies to the current database. DATABASE must be specified if it was
specified when the event notification was created.
QUEUE queue_name
Indicates the scope of the event notification applies to the queue specified by queue_name. QUEUE must be
specified if it was specified when the event notification was created. queue_name is the name of the queue and
must also be specified.
Remarks
If an event notification fires within a transaction and is dropped within the same transaction, the event notification
instance is sent, and then the event notification is dropped.
Permissions
To drop an event notification that is scoped at the database level, at a minimum, a user must be the owner of the
event notification or have ALTER ANY DATABASE EVENT NOTIFICATION permission in the current database.
To drop an event notification that is scoped at the server level, at a minimum, a user must be the owner of the
event notification or have ALTER ANY EVENT NOTIFICATION permission in the server.
To drop an event notification on a specific queue, at a minimum, a user must be the owner of the event notification
or have ALTER permission on the parent queue.
Examples
The following example creates a database-scoped event notification, then drops it:
USE AdventureWorks2012;
GO
CREATE EVENT NOTIFICATION NotifyALTER_T1
ON DATABASE
FOR ALTER_TABLE
TO SERVICE 'NotifyService',
'8140a771-3c4b-4479-8ac0-81008ab17984';
GO
DROP EVENT NOTIFICATION NotifyALTER_T1
ON DATABASE;
See Also
CREATE EVENT NOTIFICATION (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.event_notifications (Transact-SQL)
sys.events (Transact-SQL)
DROP EVENT SESSION (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an event session.
Transact-SQL Syntax Conventions
Syntax
DROP EVENT SESSION event_session_name
ON SERVER
Arguments
event_session_name
Is the name of an existing event session.
Remarks
When you drop an event session, all configuration information, such as targets and session parameters, is
completely removed.
Permissions
Requires the ALTER ANY EVENT SESSION permission.
Examples
The following example shows how to drop an event session.
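The example code was lost in extraction; a minimal sketch follows, where test_session is a hypothetical event session name:

```sql
-- test_session is a hypothetical session name.
DROP EVENT SESSION test_session
ON SERVER;
```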
See Also
CREATE EVENT SESSION (Transact-SQL)
ALTER EVENT SESSION (Transact-SQL)
sys.server_event_sessions (Transact-SQL)
DROP FULLTEXT CATALOG (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a full-text catalog from a database. You must drop all full-text indexes associated with the catalog before
you drop the catalog.
Transact-SQL Syntax Conventions
Syntax
DROP FULLTEXT CATALOG catalog_name
Arguments
catalog_name
Is the name of the catalog to be removed. If catalog_name does not exist, Microsoft SQL Server returns an error
and does not perform the DROP operation. The filegroup of the full-text catalog must not be marked OFFLINE or
READONLY for the command to succeed.
Permissions
User must have DROP permission on the full-text catalog or be a member of the db_owner, or db_ddladmin
fixed database roles.
See Also
sys.fulltext_catalogs (Transact-SQL)
ALTER FULLTEXT CATALOG (Transact-SQL)
CREATE FULLTEXT CATALOG (Transact-SQL)
Full-Text Search
DROP FULLTEXT INDEX (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a full-text index from a specified table or indexed view.
Transact-SQL Syntax Conventions
Syntax
DROP FULLTEXT INDEX ON table_name
Arguments
table_name
Is the name of the table or indexed view containing the full-text index to be removed.
Remarks
You do not need to drop all columns from the full-text index before using the DROP FULLTEXT INDEX command.
Permissions
The user must have ALTER permission on the table or indexed view, or be a member of the sysadmin fixed server
role, or db_owner or db_ddladmin fixed database roles.
Examples
The following example drops the full-text index that exists on the JobCandidate table.
USE AdventureWorks2012;
GO
DROP FULLTEXT INDEX ON HumanResources.JobCandidate;
GO
See Also
sys.fulltext_indexes (Transact-SQL)
ALTER FULLTEXT INDEX (Transact-SQL)
CREATE FULLTEXT INDEX (Transact-SQL)
Full-Text Search
DROP FULLTEXT STOPLIST (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a full-text stoplist from the database in SQL Server.
Transact-SQL Syntax Conventions
IMPORTANT
CREATE FULLTEXT STOPLIST is supported only for compatibility level 100 and higher. For compatibility levels 80 and 90, the
system stoplist is always assigned to the database.
Syntax
DROP FULLTEXT STOPLIST stoplist_name
;
Arguments
stoplist_name
Is the name of the full-text stoplist to drop from the database.
Remarks
DROP FULLTEXT STOPLIST fails if any full-text indexes refer to the full-text stoplist being dropped.
Permissions
To drop a stoplist requires having DROP permission on the stoplist or membership in the db_owner or
db_ddladmin fixed database roles.
Examples
The following example drops a full-text stoplist named myStoplist.
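A sketch of that statement; per the Remarks above, it fails if any full-text index still refers to the stoplist:

```sql
DROP FULLTEXT STOPLIST myStoplist;
```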
See Also
ALTER FULLTEXT STOPLIST (Transact-SQL)
CREATE FULLTEXT STOPLIST (Transact-SQL)
sys.fulltext_stoplists (Transact-SQL)
sys.fulltext_stopwords (Transact-SQL)
DROP FUNCTION (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more user-defined functions from the current database. User-defined functions are created by
using CREATE FUNCTION and modified by using ALTER FUNCTION.
DROP FUNCTION supports natively compiled, scalar user-defined functions. For more information, see Scalar
User-Defined Functions for In-Memory OLTP.
Transact-SQL Syntax Conventions
Syntax
-- SQL Server, Azure SQL Database
DROP FUNCTION [ IF EXISTS ] { [ schema_name. ] function_name } [ ,...n ]
[ ; ]
Arguments
IF EXISTS
Conditionally drops the function only if it already exists. Available beginning with SQL Server 2016 and in SQL
Database.
schema_name
Is the name of the schema to which the user-defined function belongs.
function_name
Is the name of the user-defined function or functions to be removed. Specifying the schema name is optional. The
server name and database name cannot be specified.
Remarks
DROP FUNCTION will fail if there are Transact-SQL functions or views in the database that reference this
function and were created by using SCHEMABINDING, or if there are computed columns, CHECK constraints, or
DEFAULT constraints that reference the function.
DROP FUNCTION will fail if there are computed columns that reference this function and have been indexed.
Permissions
To execute DROP FUNCTION, at a minimum, a user must have ALTER permission on the schema to which the
function belongs, or CONTROL permission on the function.
Examples
A. Dropping a function
The following example drops the fn_SalesByStore user-defined function from the Sales schema in the
AdventureWorks2012 sample database. To create this function, see Example B in CREATE FUNCTION (Transact-SQL).
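A sketch of that statement, using the schema and function named in the text above:

```sql
DROP FUNCTION Sales.fn_SalesByStore;
GO
```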
See Also
ALTER FUNCTION (Transact-SQL)
CREATE FUNCTION (Transact-SQL)
OBJECT_ID (Transact-SQL)
EVENTDATA (Transact-SQL)
sys.sql_modules (Transact-SQL)
sys.parameters (Transact-SQL)
DROP INDEX (Transact-SQL)
5/3/2018 • 14 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more relational, spatial, filtered, or XML indexes from the current database. You can drop a
clustered index and move the resulting table to another filegroup or partition scheme in a single transaction by
specifying the MOVE TO option.
The DROP INDEX statement does not apply to indexes created by defining PRIMARY KEY or UNIQUE
constraints. To remove the constraint and corresponding index, use ALTER TABLE with the DROP CONSTRAINT
clause.
IMPORTANT
The syntax defined in <drop_backward_compatible_index> will be removed in a future version of Microsoft SQL Server.
Avoid using this syntax in new development work, and plan to modify applications that currently use the feature. Use the
syntax specified under <drop_relational_or_xml_index> instead. XML indexes cannot be dropped using backward
compatible syntax.
Syntax
-- Syntax for SQL Server (All options except filegroup and filestream apply to Azure SQL Database.)
DROP INDEX [ IF EXISTS ]
{ <drop_relational_or_xml_or_spatial_index> [ ,...n ]
| <drop_backward_compatible_index> [ ,...n ]
}
<drop_relational_or_xml_or_spatial_index> ::=
index_name ON <object>
[ WITH ( <drop_clustered_index_option> [ ,...n ] ) ]
<drop_backward_compatible_index> ::=
[ owner_name. ] table_or_view_name.index_name
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}
<drop_clustered_index_option> ::=
{
MAXDOP = max_degree_of_parallelism
| ONLINE = { ON | OFF }
| MOVE TO { partition_scheme_name ( column_name )
| filegroup_name
| "default"
}
[ FILESTREAM_ON { partition_scheme_name
| filestream_filegroup_name
| "default" } ]
}
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
DROP INDEX
{ <drop_relational_or_xml_or_spatial_index> [ ,...n ]
}
<drop_relational_or_xml_or_spatial_index> ::=
index_name ON <object>
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}
Arguments
IF EXISTS
Applies to: SQL Server (SQL Server 2016 (13.x) through current version).
Conditionally drops the index only if it already exists.
index_name
Is the name of the index to be dropped.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table or view belongs.
table_or_view_name
Is the name of the table or view associated with the index. Spatial indexes are supported only on tables.
To display a report of the indexes on an object, use the sys.indexes catalog view.
Windows Azure SQL Database supports the three-part name format database_name.
[schema_name].object_name when the database_name is the current database or the database_name is tempdb
and the object_name starts with #.
<drop_clustered_index_option>
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Controls clustered index options. These options cannot be used with other index types.
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database (Performance Levels P2 and P3 only).
Overrides the max degree of parallelism configuration option for the duration of the index operation. For more
information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to limit
the number of processors used in a parallel plan execution. The maximum is 64 processors.
IMPORTANT
MAXDOP is not allowed for spatial indexes or XML indexes.
NOTE
Parallel index operations are not available in every edition of SQL Server. For a list of features that are supported by the
editions of SQL Server, see Editions and Supported Features for SQL Server 2016.
ONLINE = ON | OFF
Applies to: SQL Server 2008 through SQL Server 2017, Azure SQL Database.
Specifies whether underlying tables and associated indexes are available for queries and data modification during
the index operation. The default is OFF.
ON
Long-term table locks are not held. This allows queries or updates to the underlying table to continue.
OFF
Table locks are applied and the table is unavailable for the duration of the index operation.
The ONLINE option can only be specified when you drop clustered indexes. For more information, see the
Remarks section.
NOTE
Online index operations are not available in every edition of SQL Server. For a list of features that are supported by the
editions of SQL Server, see Editions and Supported Features for SQL Server 2016.
MOVE TO { partition_scheme_name ( column_name ) | filegroup_name | "default" }
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies a location to move the data rows that currently are in the leaf level of the clustered index. The data is
moved to the new location in the form of a heap. You can specify either a partition scheme or filegroup as the
new location, but the partition scheme or filegroup must already exist. MOVE TO is not valid for indexed views
or nonclustered indexes. If a partition scheme or filegroup is not specified, the resulting table is located in the
same partition scheme or filegroup as was defined for the clustered index.
NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in MOVE TO
"default" or MOVE TO [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be set ON for the current
session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).
FILESTREAM_ON { partition_scheme_name | filestream_filegroup_name | "default" }
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies a location to move the FILESTREAM table that currently is in the leaf level of the clustered index. The
data is moved to the new location in the form of a heap. You can specify either a partition scheme or filegroup as
the new location, but the partition scheme or filegroup must already exist. FILESTREAM ON is not valid for
indexed views or nonclustered indexes. If a partition scheme is not specified, the data will be located in the same
partition scheme as was defined for the clustered index.
partition_scheme_name
Specifies a partition scheme for the FILESTREAM data. The partition scheme must have already been created by
executing either CREATE PARTITION SCHEME or ALTER PARTITION SCHEME. If no location is specified and
the table is partitioned, the table is included in the same partition scheme as the existing clustered index.
If you specify a partition scheme for MOVE TO, you must use the same partition scheme for FILESTREAM ON.
filestream_filegroup_name
Specifies a FILESTREAM filegroup for FILESTREAM data. If no location is specified and the table is not
partitioned, the data is included in the default FILESTREAM filegroup.
"default"
Specifies the default location for the FILESTREAM data.
NOTE
In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in MOVE TO
"default" or MOVE TO [default]. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current
session. This is the default setting. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL).
Remarks
When a nonclustered index is dropped, the index definition is removed from metadata and the index data pages
(the B-tree) are removed from the database files. When a clustered index is dropped, the index definition is
removed from metadata and the data rows that were stored in the leaf level of the clustered index are stored in
the resulting unordered table, a heap. All the space previously occupied by the index is regained. This space can
then be used for any database object.
An index cannot be dropped if the filegroup in which it is located is offline or set to read-only.
When the clustered index of an indexed view is dropped, all nonclustered indexes and auto-created statistics on
the same view are automatically dropped. Manually created statistics are not dropped.
The syntax table_or_view_name.index_name is maintained for backward compatibility. An XML index or spatial
index cannot be dropped by using the backward compatible syntax.
When indexes with 128 extents or more are dropped, the Database Engine defers the actual page deallocations,
and their associated locks, until after the transaction commits.
Sometimes indexes are dropped and re-created to reorganize or rebuild the index, such as to apply a new fill
factor value or to reorganize data after a bulk load. To do this, using ALTER INDEX is more efficient, especially for
clustered indexes. ALTER INDEX REBUILD has optimizations to prevent the overhead of rebuilding the
nonclustered indexes.
XML Indexes
Options cannot be specified when you drop an XML index. Also, you cannot use the
table_or_view_name.index_name syntax. When a primary XML index is dropped, all associated secondary XML
indexes are automatically dropped. For more information, see XML Indexes (SQL Server).
Spatial Indexes
Spatial indexes are supported only on tables. When you drop a spatial index, you cannot specify any options or
use the table_or_view_name.index_name syntax. The correct syntax is as follows:
DROP INDEX spatial_index_name ON spatial_table_name;
For more information about spatial indexes, see Spatial Indexes Overview.
Permissions
To execute DROP INDEX, at a minimum, ALTER permission on the table or view is required. This permission is
granted by default to the sysadmin fixed server role and the db_ddladmin and db_owner fixed database roles.
Examples
A. Dropping an index
The following example deletes the index IX_ProductVendor_VendorID on the ProductVendor table in the
AdventureWorks2012 database.
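Based on the description above, the statement takes this form (the dropped example code, reconstructed):

```sql
DROP INDEX IX_ProductVendor_VendorID
    ON Purchasing.ProductVendor;
GO
```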
B. Dropping multiple indexes
The following example deletes two indexes in a single DROP INDEX statement in the AdventureWorks2012
database.
DROP INDEX
IX_PurchaseOrderHeader_EmployeeID ON Purchasing.PurchaseOrderHeader,
IX_Address_StateProvinceID ON Person.Address;
GO
D. Dropping a clustered index online and moving the table to a new filegroup
The following example deletes a clustered index online and moves the resulting table (heap) to the filegroup
NewGroup by using the MOVE TO clause. The sys.indexes , sys.tables , and sys.filegroups catalog views are
queried to verify the index and table placement in the filegroups before and after the move. (Beginning with SQL
Server 2016 (13.x) you can use the DROP INDEX IF EXISTS syntax.)
Applies to: SQL Server 2008 through SQL Server 2017.
--Create a clustered index on the PRIMARY filegroup if the index does not exist.
CREATE UNIQUE CLUSTERED INDEX
AK_BillOfMaterials_ProductAssemblyID_ComponentID_StartDate
ON Production.BillOfMaterials (ProductAssemblyID, ComponentID,
StartDate)
ON [PRIMARY];
GO
-- Verify filegroup location of the clustered index.
SELECT t.name AS [Table Name], i.name AS [Index Name], i.type_desc,
i.data_space_id, f.name AS [Filegroup Name]
FROM sys.indexes AS i
JOIN sys.filegroups AS f ON i.data_space_id = f.data_space_id
JOIN sys.tables as t ON i.object_id = t.object_id
AND i.object_id = OBJECT_ID(N'Production.BillOfMaterials','U')
GO
--Create filegroup NewGroup if it does not exist.
IF NOT EXISTS (SELECT name FROM sys.filegroups
WHERE name = N'NewGroup')
BEGIN
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP NewGroup;
ALTER DATABASE AdventureWorks2012
ADD FILE (NAME = File1,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\File1.ndf')
TO FILEGROUP NewGroup;
END
GO
--Verify new filegroup
SELECT * from sys.filegroups;
GO
-- Drop the clustered index and move the BillOfMaterials table to
-- the Newgroup filegroup.
-- Set ONLINE = OFF to execute this example on editions other than Enterprise Edition.
DROP INDEX AK_BillOfMaterials_ProductAssemblyID_ComponentID_StartDate
ON Production.BillOfMaterials
WITH (ONLINE = ON, MOVE TO NewGroup);
GO
-- Verify filegroup location of the moved table.
SELECT t.name AS [Table Name], i.name AS [Index Name], i.type_desc,
i.data_space_id, f.name AS [Filegroup Name]
FROM sys.indexes AS i
JOIN sys.filegroups AS f ON i.data_space_id = f.data_space_id
JOIN sys.tables as t ON i.object_id = t.object_id
AND i.object_id = OBJECT_ID(N'Production.BillOfMaterials','U');
GO
E. Dropping a PRIMARY KEY constraint online
Indexes that are created as the result of creating PRIMARY KEY or UNIQUE constraints cannot be dropped by
using DROP INDEX. They are dropped by using the ALTER TABLE DROP CONSTRAINT statement. The
following example deletes a clustered index with a PRIMARY KEY constraint by dropping the constraint.
Applies to: SQL Server 2008 through SQL Server 2017.
-- Set ONLINE = OFF to execute this example on editions other than Enterprise Edition.
ALTER TABLE Production.TransactionHistoryArchive
DROP CONSTRAINT PK_TransactionHistoryArchive_TransactionID
WITH (ONLINE = ON);
See Also
ALTER INDEX (Transact-SQL )
ALTER PARTITION SCHEME (Transact-SQL )
ALTER TABLE (Transact-SQL )
CREATE INDEX (Transact-SQL )
CREATE PARTITION SCHEME (Transact-SQL )
CREATE SPATIAL INDEX (Transact-SQL )
CREATE XML INDEX (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.indexes (Transact-SQL )
sys.tables (Transact-SQL )
sys.filegroups (Transact-SQL )
sp_spaceused (Transact-SQL )
DROP INDEX (Selective XML Indexes)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing selective XML index or secondary selective XML index in SQL Server. For more information, see
Selective XML Indexes (SXI).
Transact-SQL Syntax Conventions
Syntax
DROP INDEX index_name ON <object>
[ WITH ( <drop_index_option> [ ,...n ] ) ]
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}
<drop_index_option> ::=
{
MAXDOP = max_degree_of_parallelism
| ONLINE = { ON | OFF }
}
Arguments
index_name
Is the name of the existing index to drop.
<object>
Is the table that contains the indexed XML column. Use one of the following formats:
database_name.schema_name.table_name
database_name..table_name
schema_name.table_name
table_name
<drop_index_option> For information about the drop index options, see DROP INDEX (Transact-SQL ).
Security
Permissions
ALTER permission on the table or view is required to run DROP INDEX. This permission is granted by default to
the sysadmin fixed server role and the db_ddladmin and db_owner fixed database roles.
Example
The following example shows a DROP INDEX statement.
DROP INDEX sxi_index ON tbl;
See Also
Selective XML Indexes (SXI)
Create, Alter, and Drop Selective XML Indexes
DROP LOGIN (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a SQL Server login account.
Transact-SQL Syntax Conventions
Syntax
DROP LOGIN login_name
Arguments
login_name
Specifies the name of the login to be dropped.
Remarks
A login cannot be dropped while it is logged in. A login that owns any securable, server-level object, or SQL
Server Agent job cannot be dropped.
You can drop a login to which database users are mapped; however, this will create orphaned users. For more
information, see Troubleshoot Orphaned Users (SQL Server).
In SQL Database, login data required to authenticate a connection and server-level firewall rules are temporarily
cached in each database. This cache is periodically refreshed. To force a refresh of the authentication cache and
make sure that a database has the latest version of the logins table, execute DBCC FLUSHAUTHCACHE
(Transact-SQL ).
Permissions
Requires ALTER ANY LOGIN permission on the server.
Examples
A. Dropping a login
The following example drops the login WilliJo .
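Based on the description above, the statement is:

```sql
DROP LOGIN WilliJo;
GO
```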
See Also
CREATE LOGIN (Transact-SQL )
ALTER LOGIN (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP MASTER KEY (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes the master key from the current database.
Transact-SQL Syntax Conventions
Syntax
DROP MASTER KEY
Arguments
This statement takes no arguments.
Remarks
The drop will fail if any private key in the database is protected by the master key.
Permissions
Requires CONTROL permission on the database.
Examples
The following example removes the master key for the AdventureWorks2012 database.
USE AdventureWorks2012;
DROP MASTER KEY;
GO
See Also
CREATE MASTER KEY (Transact-SQL )
OPEN MASTER KEY (Transact-SQL )
CLOSE MASTER KEY (Transact-SQL )
BACKUP MASTER KEY (Transact-SQL )
RESTORE MASTER KEY (Transact-SQL )
ALTER MASTER KEY (Transact-SQL )
Encryption Hierarchy
DROP MESSAGE TYPE (Transact-SQL)
5/4/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing message type.
Transact-SQL Syntax Conventions
Syntax
DROP MESSAGE TYPE message_type_name
[ ; ]
Arguments
message_type_name
The name of the message type to delete. Server, database, and schema names cannot be specified.
Permissions
Permission for dropping a message type defaults to the owner of the message type, members of the db_ddladmin
or db_owner fixed database roles, and members of the sysadmin fixed server role.
Remarks
You cannot drop a message type if any contracts refer to the message type.
Examples
The following example deletes the //Adventure-Works.com/Expenses/SubmitExpense message type from the database.
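The statement matching this description is:

```sql
DROP MESSAGE TYPE
    [//Adventure-Works.com/Expenses/SubmitExpense];
```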
See Also
ALTER MESSAGE TYPE (Transact-SQL )
CREATE MESSAGE TYPE (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP PARTITION FUNCTION (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a partition function from the current database. Partition functions are created by using CREATE
PARTITION FUNCTION and modified by using ALTER PARTITION FUNCTION.
Transact-SQL Syntax Conventions
Syntax
DROP PARTITION FUNCTION partition_function_name [ ; ]
Arguments
partition_function_name
Is the name of the partition function that is to be dropped.
Remarks
A partition function can be dropped only if there are no partition schemes currently using the partition function. If
there are partition schemes using the partition function, DROP PARTITION FUNCTION returns an error.
Permissions
Any one of the following permissions can be used to execute DROP PARTITION FUNCTION:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition function was created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition function was created.
Examples
The following example assumes the partition function myRangePF has been created in the current database.
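Under that assumption, the function is dropped as follows:

```sql
DROP PARTITION FUNCTION myRangePF;
```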
See Also
CREATE PARTITION FUNCTION (Transact-SQL )
ALTER PARTITION FUNCTION (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.partition_functions (Transact-SQL )
sys.partition_parameters (Transact-SQL )
sys.partition_range_values (Transact-SQL )
sys.partitions (Transact-SQL )
sys.tables (Transact-SQL )
sys.indexes (Transact-SQL )
sys.index_columns (Transact-SQL )
DROP PARTITION SCHEME (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a partition scheme from the current database. Partition schemes are created by using CREATE
PARTITION SCHEME and modified by using ALTER PARTITION SCHEME.
Transact-SQL Syntax Conventions
Syntax
DROP PARTITION SCHEME partition_scheme_name [ ; ]
Arguments
partition_scheme_name
Is the name of the partition scheme to be dropped.
Remarks
A partition scheme can be dropped only if there are no tables or indexes currently using the partition scheme. If
there are tables or indexes using the partition scheme, DROP PARTITION SCHEME returns an error. DROP
PARTITION SCHEME does not remove the filegroups themselves.
Permissions
The following permissions can be used to execute DROP PARTITION SCHEME:
ALTER ANY DATASPACE permission. This permission defaults to members of the sysadmin fixed server
role and the db_owner and db_ddladmin fixed database roles.
CONTROL or ALTER permission on the database in which the partition scheme was created.
CONTROL SERVER or ALTER ANY DATABASE permission on the server of the database in which the
partition scheme was created.
Examples
The following example drops the partition scheme myRangePS1 from the current database:
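The statement matching this description is:

```sql
DROP PARTITION SCHEME myRangePS1;
GO
```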
See Also
CREATE PARTITION SCHEME (Transact-SQL )
ALTER PARTITION SCHEME (Transact-SQL )
sys.partition_schemes (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.data_spaces (Transact-SQL )
sys.destination_data_spaces (Transact-SQL )
sys.partitions (Transact-SQL )
sys.tables (Transact-SQL )
sys.indexes (Transact-SQL )
sys.index_columns (Transact-SQL )
DROP PROCEDURE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more stored procedures or procedure groups from the current database in SQL Server 2017.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
DROP { PROC | PROCEDURE } [ IF EXISTS ] { [ schema_name. ] procedure } [ ,...n ]
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
DROP PROCEDURE procedure_name
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the procedure only if it already exists.
schema_name
The name of the schema to which the procedure belongs. A server name or database name cannot be specified.
procedure
The name of the stored procedure or stored procedure group to be removed. Individual procedures within a
numbered procedure group cannot be dropped; the whole procedure group is dropped.
Best Practices
Before removing any stored procedure, check for dependent objects and modify these objects accordingly.
Dropping a stored procedure can cause dependent objects and scripts to fail when these objects are not updated.
For more information, see View the Dependencies of a Stored Procedure
Metadata
To display a list of existing procedures, query the sys.objects catalog view. To display the procedure definition,
query the sys.sql_modules catalog view.
Security
Permissions
Requires CONTROL permission on the procedure, or ALTER permission on the schema to which the procedure
belongs, or membership in the db_ddladmin fixed database role.
Examples
The following example removes the dbo.uspMyProc stored procedure in the current database.
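Based on the description above, the statement is:

```sql
DROP PROCEDURE dbo.uspMyProc;
GO
```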
The following example removes several stored procedures in the current database.
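Several procedures can be listed in one statement, for example (the procedure names here are illustrative):

```sql
DROP PROCEDURE dbo.uspGetSalesbyMonth, dbo.uspUpdateSalesQuotes, dbo.uspGetSalesByYear;
GO
```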
The following example removes the dbo.uspMyProc stored procedure if it exists but does not cause an error if the
procedure does not exist. This syntax is new in SQL Server 2016 (13.x).
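The conditional form described above is:

```sql
DROP PROCEDURE IF EXISTS dbo.uspMyProc;
GO
```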
See Also
ALTER PROCEDURE (Transact-SQL )
CREATE PROCEDURE (Transact-SQL )
sys.objects (Transact-SQL )
sys.sql_modules (Transact-SQL )
Delete a Stored Procedure
DROP QUEUE (Transact-SQL)
5/4/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing queue.
Transact-SQL Syntax Conventions
Syntax
DROP QUEUE <object>
[ ; ]
<object> ::=
{
[ database_name . [ schema_name ] . | schema_name . ]
queue_name
}
Arguments
database_name
The name of the database that contains the queue to drop. When no database_name is provided, defaults to the
current database.
schema_name (object)
The name of the schema that owns the queue to drop. When no schema_name is provided, defaults to the default
schema for the current user.
queue_name
The name of the queue to drop.
Remarks
You cannot drop a queue if any services refer to the queue.
Permissions
Permission for dropping a queue defaults to the owner of the queue, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.
Examples
The following example drops the ExpenseQueue queue from the current database.
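The statement matching this description is:

```sql
DROP QUEUE ExpenseQueue;
GO
```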
See Also
CREATE QUEUE (Transact-SQL )
ALTER QUEUE (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP REMOTE SERVICE BINDING (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a remote service binding.
Transact-SQL Syntax Conventions
Syntax
DROP REMOTE SERVICE BINDING binding_name
[ ; ]
Arguments
binding_name
Is the name of the remote service binding to drop. Server, database, and schema names cannot be specified.
Permissions
Permission for dropping a remote service binding defaults to the owner of the remote service binding, members
of the db_owner fixed database role, and members of the sysadmin fixed server role.
Examples
The following example deletes the remote service binding APBinding from the database.
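The statement matching this description is:

```sql
DROP REMOTE SERVICE BINDING APBinding;
```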
See Also
CREATE REMOTE SERVICE BINDING (Transact-SQL )
ALTER REMOTE SERVICE BINDING (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL)
5/4/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a user-defined Resource Governor resource pool.
Transact-SQL Syntax Conventions.
Syntax
DROP RESOURCE POOL pool_name
[ ; ]
Arguments
pool_name
Is the name of an existing user-defined resource pool.
Remarks
You cannot drop a resource pool if it contains workload groups.
You cannot drop the Resource Governor default or internal pools.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states.
For more information, see Resource Governor.
Permissions
Requires CONTROL SERVER permission.
Examples
The following example drops the resource pool named big_pool .
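The statement matching this description is:

```sql
DROP RESOURCE POOL big_pool;
GO
-- Make the change effective.
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```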
See Also
Resource Governor
CREATE RESOURCE POOL (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL )
CREATE WORKLOAD GROUP (Transact-SQL )
ALTER WORKLOAD GROUP (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
DROP ROLE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a role from the database.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
DROP ROLE [ IF EXISTS ] role_name
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
DROP ROLE role_name
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the role only if it already exists.
role_name
Specifies the role to be dropped from the database.
Remarks
Roles that own securables cannot be dropped from the database. To drop a database role that owns securables,
you must first transfer ownership of those securables or drop them from the database. Roles that have members
cannot be dropped from the database. To drop a role that has members, you must first remove members of the
role.
To remove members from a database role, use ALTER ROLE (Transact-SQL ).
You cannot use DROP ROLE to drop a fixed database role.
Information about role membership can be viewed in the sys.database_role_members catalog view.
Caution
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL ).
To remove a server role, use DROP SERVER ROLE (Transact-SQL ).
Permissions
Requires ALTER ANY ROLE permission on the database, or CONTROL permission on the role, or membership
in the db_securityadmin fixed database role.
Examples
The following example drops the database role purchasing from the AdventureWorks2012 database.
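Based on the description above, the statements are:

```sql
USE AdventureWorks2012;
GO
DROP ROLE purchasing;
GO
```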
See Also
CREATE ROLE (Transact-SQL )
ALTER ROLE (Transact-SQL )
Principals (Database Engine)
EVENTDATA (Transact-SQL )
sp_addrolemember (Transact-SQL )
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )
Security Functions (Transact-SQL )
DROP ROUTE (Transact-SQL)
5/4/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a route, deleting the information for the route from the routing table of the current database.
Transact-SQL Syntax Conventions
Syntax
DROP ROUTE route_name
[ ; ]
Arguments
route_name
The name of the route to drop. Server, database, and schema names cannot be specified.
Remarks
The routing table that stores the routes is a metadata table that can be read through the catalog view sys.routes.
The routing table can only be updated through the CREATE ROUTE, ALTER ROUTE, and DROP ROUTE
statements.
You can drop a route regardless of whether any conversations use the route. However, if there is no other route to
the remote service, messages for those conversations will remain in the transmission queue until a route to the
remote service is created or the conversation times out.
Permissions
Permission for dropping a route defaults to the owner of the route, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.
Examples
The following example deletes the ExpenseRoute route.
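The statement matching this description is:

```sql
DROP ROUTE ExpenseRoute;
```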
See Also
ALTER ROUTE (Transact-SQL )
CREATE ROUTE (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.routes (Transact-SQL )
DROP RULE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more user-defined rules from the current database.
IMPORTANT
DROP RULE will be removed in the next version of Microsoft SQL Server. Do not use DROP RULE in new development work,
and plan to modify applications that currently use them. Instead, use CHECK constraints that you can create by using the
CHECK keyword of CREATE TABLE or ALTER TABLE. For more information, see Unique Constraints and Check Constraints.
Syntax
DROP RULE [ IF EXISTS ] { [ schema_name . ] rule_name } [ ,...n ] [ ; ]
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the rule only if it already exists.
schema_name
Is the name of the schema to which the rule belongs.
rule
Is the rule to be removed. Rule names must comply with the rules for identifiers. Specifying the rule schema name
is optional.
Remarks
To drop a rule, first unbind it if the rule is currently bound to a column or to an alias data type. To unbind the rule,
use sp_unbindrule. If the rule is bound when you try to drop it, an error message is displayed and the DROP
RULE statement is canceled.
After a rule is dropped, new data entered into the columns previously governed by the rule is entered without the
constraints of the rule. Existing data is not affected in any way.
The DROP RULE statement does not apply to CHECK constraints. For more information about dropping CHECK
constraints, see ALTER TABLE (Transact-SQL ).
Permissions
To execute DROP RULE, at a minimum, a user must have ALTER permission on the schema to which the rule
belongs.
Examples
The following example unbinds and then drops the rule named VendorID_rule .
EXEC sp_unbindrule 'Production.ProductVendor.VendorID';
DROP RULE VendorID_rule;
GO
See Also
CREATE RULE (Transact-SQL )
sp_bindrule (Transact-SQL )
sp_help (Transact-SQL )
sp_helptext (Transact-SQL )
sp_unbindrule (Transact-SQL )
USE (Transact-SQL )
DROP SCHEMA (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a schema from the database.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
DROP SCHEMA [ IF EXISTS ] schema_name
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
DROP SCHEMA schema_name
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the schema only if it already exists.
schema_name
Is the name by which the schema is known within the database.
Remarks
The schema that is being dropped must not contain any objects. If the schema contains objects, the DROP
statement fails.
Information about schemas is visible in the sys.schemas catalog view.
Caution Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that
schemas are equivalent to database users may no longer return correct results. Old catalog views, including
sysobjects, should not be used in a database in which any of the following DDL statements have ever been used:
CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE
ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER
AUTHORIZATION. In such databases you must instead use the new catalog views. The new catalog views take
into account the separation of principals and schemas that was introduced in SQL Server 2005. For more
information about catalog views, see Catalog Views (Transact-SQL ).
Permissions
Requires CONTROL permission on the schema or ALTER ANY SCHEMA permission on the database.
Examples
The following example starts with a single CREATE SCHEMA statement. The statement creates the schema Sprockets
that is owned by Krishna and a table Sprockets.NineProngs , and then grants SELECT permission to Anibal and
denies SELECT permission to Hung-Fu .
The following statements drop the schema. Note that you must first drop the table that is contained by the
schema.
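A sketch of the statements described above (the NineProngs column list is illustrative; the rest follows the
description):

```sql
-- A single CREATE SCHEMA statement: schema, table, and permissions together.
CREATE SCHEMA Sprockets AUTHORIZATION Krishna
    CREATE TABLE NineProngs (source int, cost int, partnumber int)
    GRANT SELECT TO Anibal
    DENY SELECT TO [Hung-Fu];
GO

-- The schema must be empty before it can be dropped.
DROP TABLE Sprockets.NineProngs;
GO
DROP SCHEMA Sprockets;
GO
```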
See Also
CREATE SCHEMA (Transact-SQL )
ALTER SCHEMA (Transact-SQL )
DROP SCHEMA (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP SEARCH PROPERTY LIST (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a property list from the current database if the search property list is currently not associated with any full-
text index in the database.
Syntax
DROP SEARCH PROPERTY LIST property_list_name
;
Arguments
property_list_name
Is the name of the search property list to be dropped. property_list_name is an identifier.
To view the names of the existing property lists, use the sys.registered_search_property_lists catalog view, as
follows:
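The query referred to above is:

```sql
SELECT name FROM sys.registered_search_property_lists;
```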
Remarks
You cannot drop a search property list from a database while the list is associated with any full-text index, and
attempts to do so fail. To drop a search property list from a given full-text index, use the ALTER FULLTEXT INDEX
statement, and specify the SET SEARCH PROPERTY LIST clause with either OFF or the name of another search
property list.
To view the property lists on a server instance
sys.registered_search_property_lists (Transact-SQL )
To view the property lists associated with full-text indexes
sys.fulltext_indexes (Transact-SQL )
To remove a property list from a full-text index
ALTER FULLTEXT INDEX (Transact-SQL )
Permissions
Requires CONTROL permission on the search property list.
NOTE
The property list owner can grant CONTROL permissions on the list. By default, the user who creates a search property list
is its owner. The owner can be changed by using the ALTER AUTHORIZATION Transact-SQL statement.
Examples
The following example drops the JobCandidateProperties property list from the AdventureWorks2012 database.
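A statement of that form, assuming the list exists and is not associated with any full-text index:

```sql
USE AdventureWorks2012;
GO
DROP SEARCH PROPERTY LIST JobCandidateProperties;
GO
```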
See Also
ALTER SEARCH PROPERTY LIST (Transact-SQL )
CREATE SEARCH PROPERTY LIST (Transact-SQL )
Search Document Properties with Search Property Lists
sys.registered_search_properties (Transact-SQL )
sys.registered_search_property_lists (Transact-SQL )
DROP SECURITY POLICY (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Deletes a security policy.
Transact-SQL Syntax Conventions
Syntax
DROP SECURITY POLICY [ IF EXISTS ] [schema_name. ] security_policy_name
[;]
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the security policy only if it already exists.
schema_name
Is the name of the schema to which the security policy belongs.
security_policy_name
The name of the security policy. Security policy names must comply with the rules for identifiers and must be
unique within the database and to its schema.
Remarks
Permissions
Requires the ALTER ANY SECURITY POLICY permission and ALTER permission on the schema.
Example
DROP SECURITY POLICY secPolicy;
See Also
Row-Level Security
CREATE SECURITY POLICY (Transact-SQL )
ALTER SECURITY POLICY (Transact-SQL )
sys.security_policies (Transact-SQL )
sys.security_predicates (Transact-SQL )
DROP SEQUENCE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a sequence object from the current database.
Transact-SQL Syntax Conventions
Syntax
DROP SEQUENCE [ IF EXISTS ] { [ database_name . [ schema_name ] . | schema_name. ] sequence_name } [ ,...n ]
[ ; ]
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the sequence only if it already exists.
database_name
Is the name of the database in which the sequence object was created.
schema_name
Is the name of the schema to which the sequence object belongs.
sequence_name
Is the name of the sequence to be dropped. Type is sysname.
Remarks
After generating a number, a sequence object has no continuing relationship to the number it generated, so the
sequence object can be dropped, even though the number generated is still in use.
A sequence object can be dropped while it is referenced by a stored procedure, or trigger, because it is not schema
bound. A sequence object cannot be dropped if it is referenced as a default value in a table. The error message will
list the object referencing the sequence.
To list all sequence objects in the database, execute the following statement.
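For example:

```sql
SELECT * FROM sys.sequences;
```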
Security
Permissions
Requires ALTER or CONTROL permission on the schema.
Audit
To audit DROP SEQUENCE, monitor the SCHEMA_OBJECT_CHANGE_GROUP.
Examples
The following example removes a sequence object named CountBy1 from the current database.
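For example:

```sql
DROP SEQUENCE CountBy1;
```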
See Also
ALTER SEQUENCE (Transact-SQL )
CREATE SEQUENCE (Transact-SQL )
NEXT VALUE FOR (Transact-SQL )
Sequence Numbers
DROP SERVER AUDIT (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a Server Audit Object using the SQL Server Audit feature. For more information on SQL Server Audit,
see SQL Server Audit (Database Engine).
Transact-SQL Syntax Conventions
Syntax
DROP SERVER AUDIT audit_name
[ ; ]
Remarks
You must set the State of an audit to the OFF option in order to make any changes to an Audit. If DROP AUDIT
is run while an audit is enabled with any options other than STATE=OFF, you will receive a
MSG_NEED_AUDIT_DISABLED error message.
A DROP SERVER AUDIT removes the metadata for the Audit, but not the audit data that was collected before
the command was issued.
DROP SERVER AUDIT does not drop associated server or database audit specifications. These specifications
must be dropped manually or left orphaned and later mapped to a new server audit.
Permissions
To create, alter, or drop a server audit, a principal requires the ALTER ANY SERVER AUDIT or the CONTROL
SERVER permission.
Examples
The following example drops an audit called HIPAA_Audit .
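As noted in the Remarks, the audit must first be set to STATE = OFF. A sketch:

```sql
ALTER SERVER AUDIT HIPAA_Audit
WITH (STATE = OFF);
GO
DROP SERVER AUDIT HIPAA_Audit;
GO
```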
See Also
CREATE SERVER AUDIT (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL )
DROP SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
sys.dm_audit_class_type_map (Transact-SQL )
Create a Server Audit and Server Audit Specification
DROP SERVER AUDIT SPECIFICATION (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a server audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions
Syntax
DROP SERVER AUDIT SPECIFICATION audit_specification_name
[ ; ]
Arguments
audit_specification_name
Name of an existing server audit specification object.
Remarks
A DROP SERVER AUDIT SPECIFICATION removes the metadata for the audit specification, but not the audit
data collected before the DROP command was issued. You must set the state of a server audit specification to
OFF using ALTER SERVER AUDIT SPECIFICATION before it can be dropped.
Permissions
Users with the ALTER ANY SERVER AUDIT permission can drop server audit specifications.
Examples
The following example drops a server audit specification called HIPAA_Audit_Specification .
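As noted in the Remarks, the specification must first be set to OFF. A sketch:

```sql
ALTER SERVER AUDIT SPECIFICATION HIPAA_Audit_Specification
WITH (STATE = OFF);
GO
DROP SERVER AUDIT SPECIFICATION HIPAA_Audit_Specification;
GO
```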
For a full example about how to create an audit, see SQL Server Audit (Database Engine).
See Also
CREATE SERVER AUDIT (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL )
DROP SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
sys.dm_audit_class_type_map (Transact-SQL )
Create a Server Audit and Server Audit Specification
DROP SERVER ROLE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a user-defined server role.
User-defined server roles are new in SQL Server 2012 (11.x).
Transact-SQL Syntax Conventions
Syntax
DROP SERVER ROLE role_name
Arguments
role_name
Specifies the user-defined server role to be dropped from the server.
Remarks
User-defined server roles that own securables cannot be dropped from the server. To drop a user-defined server
role that owns securables, you must first transfer ownership of those securables or delete them.
User-defined server roles that have members cannot be dropped. To drop a user-defined server role that has
members, you must first remove members of the role by using ALTER SERVER ROLE.
Fixed server roles cannot be removed.
You can view information about role membership by querying the sys.server_role_members catalog view.
Permissions
Requires CONTROL permission on the server role or ALTER ANY SERVER ROLE permission.
Examples
A. To drop a server role
The following example drops the server role purchasing .
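For example:

```sql
DROP SERVER ROLE purchasing;
GO
```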
See Also
ALTER ROLE (Transact-SQL )
CREATE ROLE (Transact-SQL )
Principals (Database Engine)
DROP ROLE (Transact-SQL )
EVENTDATA (Transact-SQL )
sp_addrolemember (Transact-SQL )
sys.database_role_members (Transact-SQL )
sys.database_principals (Transact-SQL )
DROP SERVICE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing service.
Transact-SQL Syntax Conventions
Syntax
DROP SERVICE service_name
[ ; ]
Arguments
service_name
The name of the service to drop. Server, database, and schema names cannot be specified.
Remarks
You cannot drop a service if any conversation priorities refer to it.
Dropping a service deletes all messages for the service from the queue that the service uses. Service Broker sends
an error to the remote side of any open conversations that use the service.
Permissions
Permission for dropping a service defaults to the owner of the service, members of the db_ddladmin or db_owner
fixed database roles, and members of the sysadmin fixed server role.
Examples
The following example drops the service //Adventure-Works.com/Expenses .
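Because the service name contains characters that are not valid in a regular identifier, it is delimited with brackets:

```sql
DROP SERVICE [//Adventure-Works.com/Expenses];
```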
See Also
ALTER BROKER PRIORITY (Transact-SQL )
ALTER SERVICE (Transact-SQL )
CREATE SERVICE (Transact-SQL )
DROP BROKER PRIORITY (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP SIGNATURE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops a digital signature from a stored procedure, function, trigger, or assembly.
Transact-SQL Syntax Conventions
Syntax
DROP [ COUNTER ] SIGNATURE FROM module_name
BY <crypto_list> [ ,...n ]
<crypto_list> ::=
CERTIFICATE cert_name
| ASYMMETRIC KEY Asym_key_name
Arguments
module_name
Is the name of a stored procedure, function, assembly, or trigger.
CERTIFICATE cert_name
Is the name of a certificate with which the stored procedure, function, assembly, or trigger is signed.
ASYMMETRIC KEY Asym_key_name
Is the name of an asymmetric key with which the stored procedure, function, assembly, or trigger is signed.
Remarks
Information about signatures is visible in the sys.crypt_properties catalog view.
Permissions
Requires ALTER permission on the object and CONTROL permission on the certificate or asymmetric key. If an
associated private key is protected by a password, the user also must have the password.
Examples
The following example removes the signature of certificate HumanResourcesDP from the stored procedure
HumanResources.uspUpdateEmployeeLogin .
USE AdventureWorks2012;
DROP SIGNATURE FROM HumanResources.uspUpdateEmployeeLogin
BY CERTIFICATE HumanResourcesDP;
GO
See Also
sys.crypt_properties (Transact-SQL )
ADD SIGNATURE (Transact-SQL )
DROP STATISTICS (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops statistics for multiple collections within the specified tables in the current database.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
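Reconstructed from the arguments described below, the two forms are approximately:

```sql
-- SQL Server and Azure SQL Database
DROP STATISTICS table.statistics_name | view.statistics_name [ ,...n ]

-- Azure SQL Data Warehouse and Parallel Data Warehouse
DROP STATISTICS [ schema_name . ] table_name.statistics_name
[;]
```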
Arguments
table | view
Is the name of the target table or indexed view for which statistics should be dropped. Table and view names must
comply with the rules for Database Identifiers. Specifying the table or view owner name is optional.
statistics_name
Is the name of the statistics group to drop. Statistics names must comply with the rules for identifiers.
Remarks
Be careful when you drop statistics. Doing so may affect the execution plan chosen by the query optimizer.
Statistics on indexes cannot be dropped by using DROP STATISTICS. Statistics remain as long as the index exists.
For more information about displaying statistics, see DBCC SHOW_STATISTICS (Transact-SQL ).
Permissions
Requires ALTER permission on the table or view.
Examples
A. Dropping statistics from a table
The following example drops the statistics groups (collections) of two tables. The VendorCredit statistics group
(collection) of the Vendor table and the CustomerTotal statistics (collection) of the SalesOrderHeader table are
dropped.
-- Create the statistics groups.
USE AdventureWorks2012;
GO
CREATE STATISTICS VendorCredit
ON Purchasing.Vendor (Name, CreditRating)
WITH SAMPLE 50 PERCENT;
CREATE STATISTICS CustomerTotal
ON Sales.SalesOrderHeader (CustomerID, TotalDue)
WITH FULLSCAN;
GO
DROP STATISTICS Purchasing.Vendor.VendorCredit, Sales.SalesOrderHeader.CustomerTotal;
See Also
ALTER DATABASE (Transact-SQL )
CREATE INDEX (Transact-SQL )
CREATE STATISTICS (Transact-SQL )
sys.stats (Transact-SQL )
sys.stats_columns (Transact-SQL )
DBCC SHOW_STATISTICS (Transact-SQL )
sp_autostats (Transact-SQL )
sp_createstats (Transact-SQL )
UPDATE STATISTICS (Transact-SQL )
EVENTDATA (Transact-SQL )
USE (Transact-SQL )
DROP SYMMETRIC KEY (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a symmetric key from the current database.
Transact-SQL Syntax Conventions
Syntax
DROP SYMMETRIC KEY symmetric_key_name [REMOVE PROVIDER KEY]
Arguments
symmetric_key_name
Is the name of the symmetric key to be dropped.
REMOVE PROVIDER KEY
Removes an Extensible Key Management (EKM ) key from an EKM device. For more information about Extensible
Key Management, see Extensible Key Management (EKM ).
Remarks
If the key is open in the current session, the statement will fail.
If the symmetric key is mapped to an Extensible Key Management (EKM ) key on an EKM device and the
REMOVE PROVIDER KEY option is not specified, the key will be dropped from the database but not from the
device, and a warning will be issued.
Permissions
Requires CONTROL permission on the symmetric key.
Examples
The following example removes a symmetric key named GailSammamishKey6 from the current database.
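For example:

```sql
DROP SYMMETRIC KEY GailSammamishKey6;
GO
```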
See Also
CREATE SYMMETRIC KEY (Transact-SQL )
ALTER SYMMETRIC KEY (Transact-SQL )
Encryption Hierarchy
CLOSE SYMMETRIC KEY (Transact-SQL )
Extensible Key Management (EKM )
DROP SYNONYM (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a synonym from a specified schema.
Transact-SQL Syntax Conventions
Syntax
DROP SYNONYM [ IF EXISTS ] [ schema. ] synonym_name
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version)
Conditionally drops the synonym only if it already exists.
schema
Specifies the schema in which the synonym exists. If schema is not specified, SQL Server uses the default schema
of the current user.
synonym_name
Is the name of the synonym to be dropped.
Remarks
References to synonyms are not schema-bound; therefore, you can drop a synonym at any time. References to
dropped synonyms will be found only at run time.
Synonyms can be created, dropped and referenced in dynamic SQL.
Permissions
To drop a synonym, a user must satisfy at least one of the following conditions. The user must be:
The current owner of a synonym.
A grantee holding CONTROL on a synonym.
A grantee holding ALTER SCHEMA permission on the containing schema.
Examples
The following example first creates a synonym, MyProduct , and then drops the synonym.
USE tempdb;
GO
-- Create a synonym for the Product table in AdventureWorks2012.
CREATE SYNONYM MyProduct
FOR AdventureWorks2012.Production.Product;
GO
-- Drop synonym MyProduct.
USE tempdb;
GO
DROP SYNONYM MyProduct;
GO
See Also
CREATE SYNONYM (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP TABLE (Transact-SQL)
5/3/2018 • 3 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more table definitions and all data, indexes, triggers, constraints, and permission specifications
for those tables. Any view or stored procedure that references the dropped table must be explicitly dropped by
using DROP VIEW or DROP PROCEDURE. To report the dependencies on a table, use
sys.dm_sql_referencing_entities.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
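Reconstructed from the arguments described below, the SQL Server and Azure SQL Database form is approximately:

```sql
DROP TABLE [ IF EXISTS ] { database_name.schema_name.table_name
                         | schema_name.table_name
                         | table_name } [ ,...n ]
[ ; ]
```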
Arguments
database_name
Is the name of the database in which the table was created.
Windows Azure SQL Database supports the three-part name format database_name.[schema_name].object_name
when the database_name is the current database or the database_name is tempdb and the object_name starts
with #. Windows Azure SQL Database does not support four-part names.
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the table only if it already exists.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to be removed.
Remarks
DROP TABLE cannot be used to drop a table that is referenced by a FOREIGN KEY constraint. The referencing
FOREIGN KEY constraint or the referencing table must first be dropped. If both the referencing table and the
table that holds the primary key are being dropped in the same DROP TABLE statement, the referencing table
must be listed first.
Multiple tables can be dropped in any database. If a table being dropped references the primary key of another
table that is also being dropped, the referencing table with the foreign key must be listed before the table holding
the primary key that is being referenced.
When a table is dropped, rules or defaults on the table lose their binding, and any constraints or triggers
associated with the table are automatically dropped. If you re-create a table, you must rebind the appropriate
rules and defaults, re-create any triggers, and add all required constraints.
If you delete all rows in a table by using DELETE tablename or use the TRUNCATE TABLE statement, the table
exists until it is dropped.
Large tables and indexes that use more than 128 extents are dropped in two separate phases: logical and physical.
In the logical phase, the existing allocation units used by the table are marked for deallocation and locked until the
transaction commits. In the physical phase, the IAM pages marked for deallocation are physically dropped in
batches.
If you drop a table that contains a VARBINARY (MAX) column with the FILESTREAM attribute, any data stored in
the file system will not be removed.
IMPORTANT
DROP TABLE and CREATE TABLE should not be executed on the same table in the same batch. Otherwise an unexpected
error may occur.
Permissions
Requires ALTER permission on the schema to which the table belongs, CONTROL permission on the table, or
membership in the db_ddladmin fixed database role.
Examples
A. Dropping a table in the current database
The following example removes the ProductVendor1 table and its data and indexes from the current database.
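For example:

```sql
DROP TABLE ProductVendor1;
```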
See Also
ALTER TABLE (Transact-SQL )
CREATE TABLE (Transact-SQL )
DELETE (Transact-SQL )
sp_help (Transact-SQL )
sp_spaceused (Transact-SQL )
TRUNCATE TABLE (Transact-SQL )
DROP VIEW (Transact-SQL )
DROP PROCEDURE (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.sql_expression_dependencies (Transact-SQL )
DROP TRIGGER (Transact-SQL)
5/3/2018 • 3 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more DML or DDL triggers from the current database.
Transact-SQL Syntax Conventions
Syntax
-- Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger)
-- Trigger on a CREATE, ALTER, DROP, GRANT, DENY, REVOKE or UPDATE statement (DDL Trigger)
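Reconstructed from the arguments described below, the two forms are approximately:

```sql
-- DML trigger
DROP TRIGGER [ IF EXISTS ] [ schema_name. ] trigger_name [ ,...n ] [ ; ]

-- DDL or logon trigger
DROP TRIGGER [ IF EXISTS ] trigger_name [ ,...n ]
ON { DATABASE | ALL SERVER }
[ ; ]
```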
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version, SQL Database).
Conditionally drops the trigger only if it already exists.
schema_name
Is the name of the schema to which a DML trigger belongs. DML triggers are scoped to the schema of the table or
view on which they are created. schema_name cannot be specified for DDL or logon triggers.
trigger_name
Is the name of the trigger to remove. To see a list of currently created triggers, use sys.server_assembly_modules
or sys.server_triggers.
DATABASE
Indicates the scope of the DDL trigger applies to the current database. DATABASE must be specified if it was also
specified when the trigger was created or modified.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates the scope of the DDL trigger applies to the current server. ALL SERVER must be specified if it was also
specified when the trigger was created or modified. ALL SERVER also applies to logon triggers.
NOTE
This option is not available in a contained database.
Remarks
You can remove a DML trigger by dropping it or by dropping the trigger table. When a table is dropped, all
associated triggers are also dropped.
When a trigger is dropped, information about the trigger is removed from the sys.objects, sys.triggers and
sys.sql_modules catalog views.
Multiple DDL triggers can be dropped per DROP TRIGGER statement only if all triggers were created using
identical ON clauses.
To rename a trigger, use DROP TRIGGER and CREATE TRIGGER. To change the definition of a trigger, use ALTER
TRIGGER.
For more information about determining dependencies for a specific trigger, see sys.sql_expression_dependencies,
sys.dm_sql_referenced_entities (Transact-SQL ), and sys.dm_sql_referencing_entities (Transact-SQL ).
For more information about viewing the text of the trigger, see sp_helptext (Transact-SQL ) and sys.sql_modules
(Transact-SQL ).
For more information about viewing a list of existing triggers, see sys.triggers (Transact-SQL ) and
sys.server_triggers (Transact-SQL ).
Permissions
To drop a DML trigger requires ALTER permission on the table or view on which the trigger is defined.
To drop a DDL trigger defined with server scope (ON ALL SERVER ) or a logon trigger requires CONTROL
SERVER permission in the server. To drop a DDL trigger defined with database scope (ON DATABASE ) requires
ALTER ANY DATABASE DDL TRIGGER permission in the current database.
Examples
A. Dropping a DML trigger
The following example drops the employee_insupd trigger in the AdventureWorks2012 database. (Beginning with
SQL Server 2016 (13.x) you can use the DROP TRIGGER IF EXISTS syntax.)
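A sketch of both variants:

```sql
DROP TRIGGER employee_insupd;
GO

-- SQL Server 2016 (13.x) and later:
DROP TRIGGER IF EXISTS employee_insupd;
GO
```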
IMPORTANT
Because DDL triggers are not schema-scoped and, therefore do not appear in the sys.objects catalog view, the OBJECT_ID
function cannot be used to query whether they exist in the database. Objects that are not schema-scoped must be queried
by using the appropriate catalog view. For DDL triggers, use sys.triggers.
B. Dropping a DDL trigger
The following example drops the DDL trigger safety .
DROP TRIGGER safety
ON DATABASE;
See Also
ALTER TRIGGER (Transact-SQL )
CREATE TRIGGER (Transact-SQL )
ENABLE TRIGGER (Transact-SQL )
DISABLE TRIGGER (Transact-SQL )
EVENTDATA (Transact-SQL )
Get Information About DML Triggers
sp_help (Transact-SQL )
sp_helptrigger (Transact-SQL )
sys.triggers (Transact-SQL )
sys.trigger_events (Transact-SQL )
sys.sql_modules (Transact-SQL )
sys.assembly_modules (Transact-SQL )
sys.server_triggers (Transact-SQL )
sys.server_trigger_events (Transact-SQL )
sys.server_sql_modules (Transact-SQL )
sys.server_assembly_modules (Transact-SQL )
DROP TYPE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes an alias data type or a common language runtime (CLR ) user-defined type from the current database.
Transact-SQL Syntax Conventions
Syntax
DROP TYPE [ IF EXISTS ] [ schema_name. ] type_name [ ; ]
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version).
Conditionally drops the type only if it already exists.
schema_name
Is the name of the schema to which the alias or user-defined type belongs.
type_name
Is the name of the alias data type or the user-defined type you want to drop.
Remarks
The DROP TYPE statement will not execute when any of the following is true:
There are tables in the database that contain columns of the alias data type or the user-defined type.
Information about alias or user-defined type columns can be obtained by querying the sys.columns or
sys.column_type_usages catalog views.
There are computed columns, CHECK constraints, schema-bound views, and schema-bound functions
whose definitions reference the alias or user-defined type. Information about these references can be
obtained by querying the sys.sql_expression_dependencies catalog view.
There are functions, stored procedures, or triggers created in the database, and these routines use variables
and parameters of the alias or user-defined type. Information about alias or user-defined type parameters
can be obtained by querying the sys.parameters or sys.parameter_type_usages catalog views.
Permissions
Requires either CONTROL permission on type_name or ALTER permission on schema_name.
Examples
The following example assumes a type named ssn is already created in the current database.
DROP TYPE ssn;
See Also
CREATE TYPE (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP USER (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a user from the current database.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
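Reconstructed from the arguments described below, both forms reduce to:

```sql
DROP USER [ IF EXISTS ] user_name
```

(The IF EXISTS clause applies to SQL Server 2016 (13.x) and later and to SQL Database.)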
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version, SQL Database).
Conditionally drops the user only if it already exists.
user_name
Specifies the name by which the user is identified inside this database.
Remarks
Users that own securables cannot be dropped from the database. Before dropping a database user that owns
securables, you must first drop or transfer ownership of those securables.
The guest user cannot be dropped, but guest user can be disabled by revoking its CONNECT permission by
executing REVOKE CONNECT FROM GUEST within any database other than master or tempdb.
Caution
Beginning with SQL Server 2005, the behavior of schemas changed. As a result, code that assumes that schemas
are equivalent to database users may no longer return correct results. Old catalog views, including sysobjects,
should not be used in a database in which any of the following DDL statements have ever been used: CREATE
SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP USER, CREATE ROLE,
ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE, ALTER AUTHORIZATION.
In such databases you must instead use the new catalog views. The new catalog views take into account the
separation of principals and schemas that was introduced in SQL Server 2005. For more information about
catalog views, see Catalog Views (Transact-SQL ).
Permissions
Requires ALTER ANY USER permission on the database.
Examples
The following example removes database user AbolrousHazem from the AdventureWorks2012 database.
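For example:

```sql
USE AdventureWorks2012;
GO
DROP USER AbolrousHazem;
GO
```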
See Also
CREATE USER (Transact-SQL )
ALTER USER (Transact-SQL )
EVENTDATA (Transact-SQL )
DROP VIEW (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes one or more views from the current database. DROP VIEW can be executed against indexed views.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
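Reconstructed from the arguments described below, the SQL Server and Azure SQL Database form is approximately:

```sql
DROP VIEW [ IF EXISTS ] [ schema_name . ] view_name [ ,...n ] [ ; ]
```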
Arguments
IF EXISTS
Applies to: SQL Server ( SQL Server 2016 (13.x) through current version, SQL Database).|
Conditionally drops the view only if it already exists.
schema_name
Is the name of the schema to which the view belongs.
view_name
Is the name of the view to remove.
Remarks
When you drop a view, the definition of the view and other information about the view is deleted from the system
catalog. All permissions for the view are also deleted.
Any view on a table that is dropped by using DROP TABLE must be dropped explicitly by using DROP VIEW.
When executed against an indexed view, DROP VIEW automatically drops all indexes on a view. To display all
indexes on a view, use sp_helpindex.
When querying through a view, the Database Engine checks to make sure that all the database objects referenced
in the statement exist and that they are valid in the context of the statement, and that data modification statements
do not violate any data integrity rules. A check that fails returns an error message. A successful check translates
the action into an action against the underlying table or tables. If the underlying tables or views have changed
since the view was originally created, it may be useful to drop and re-create the view.
For more information about determining dependencies for a specific view, see sys.sql_dependencies (Transact-SQL ).
For more information about viewing the text of the view, see sp_helptext (Transact-SQL ).
Permissions
Requires CONTROL permission on the view, ALTER permission on the schema containing the view, or
membership in the db_ddladmin fixed database role.
Examples
A. Drop a view
The following example removes the view Reorder .
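For example:

```sql
DROP VIEW Reorder;
GO
```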
See Also
ALTER VIEW (Transact-SQL )
CREATE VIEW (Transact-SQL )
EVENTDATA (Transact-SQL )
sys.columns (Transact-SQL )
sys.objects (Transact-SQL )
USE (Transact-SQL )
sys.sql_expression_dependencies (Transact-SQL )
DROP WORKLOAD GROUP (Transact-SQL)
5/4/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Drops an existing user-defined Resource Governor workload group.
Transact-SQL Syntax Conventions
Syntax
DROP WORKLOAD GROUP group_name
[;]
Arguments
group_name
Is the name of an existing user-defined workload group.
Remarks
The DROP WORKLOAD GROUP statement is not allowed on the Resource Governor internal or default groups.
When you are executing DDL statements, we recommend that you be familiar with Resource Governor states.
For more information, see Resource Governor.
If a workload group contains active sessions, dropping or moving the workload group to a different resource
pool will fail when the ALTER RESOURCE GOVERNOR RECONFIGURE statement is called to apply the
change. To avoid this problem, you can take one of the following actions:
Wait until all the sessions from the affected group have disconnected, and then rerun the ALTER
RESOURCE GOVERNOR RECONFIGURE statement.
Explicitly stop sessions in the affected group by using the KILL command, and then rerun the ALTER
RESOURCE GOVERNOR RECONFIGURE statement.
Restart the server. After the restart process is completed, the deleted group will not be created, and a
moved group will use the new resource pool assignment.
In a scenario in which you have issued the DROP WORKLOAD GROUP statement but decide that you do
not want to explicitly stop sessions to apply the change, you can re-create the group by using the same
name that it had before you issued the DROP statement, and then move the group to the original resource
pool. To apply the changes, run the ALTER RESOURCE GOVERNOR RECONFIGURE statement.
Permissions
Requires CONTROL SERVER permission.
Examples
The following example drops the workload group named adhoc .
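The example code did not survive extraction; the statement, followed by the reconfigure step described in the Remarks, would look like this:

```sql
DROP WORKLOAD GROUP adhoc;
GO
-- Apply the change to the Resource Governor in-memory configuration.
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```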
See Also
Resource Governor
CREATE WORKLOAD GROUP (Transact-SQL )
ALTER WORKLOAD GROUP (Transact-SQL )
CREATE RESOURCE POOL (Transact-SQL )
ALTER RESOURCE POOL (Transact-SQL )
DROP RESOURCE POOL (Transact-SQL )
ALTER RESOURCE GOVERNOR (Transact-SQL )
DROP XML SCHEMA COLLECTION (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Deletes the whole XML schema collection and all of its components.
Transact-SQL Syntax Conventions
Syntax
DROP XML SCHEMA COLLECTION [ relational_schema. ]sql_identifier
Arguments
relational_schema
Identifies the relational schema name. If not specified, the default relational schema is assumed.
sql_identifier
Is the name of the XML schema collection to drop.
Remarks
Dropping an XML schema collection is a transactional operation. This means when you drop an XML schema
collection inside a transaction and later roll back the transaction, the XML schema collection is not dropped.
You cannot drop an XML schema collection when it is in use. This means that the collection being dropped cannot
be any of the following:
Associated with any xml type parameter or column.
Specified in any table constraints.
Referenced in a schema-bound function or stored procedure. For example, the following function will lock
the XML schema collection MyCollection because the function specifies WITH SCHEMABINDING. If the
SCHEMABINDING clause is removed, there is no lock on the XML schema collection.
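A sketch of such a schema-bound function (the table dbo.T and its column c, typed as xml(MyCollection), are assumptions):

```sql
-- Because of WITH SCHEMABINDING, this function holds a lock on the
-- XML schema collection MyCollection that types column c.
CREATE FUNCTION dbo.CountTypedXml()
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @count int;
    SELECT @count = COUNT(*) FROM dbo.T WHERE c IS NOT NULL; -- c is xml(MyCollection)
    RETURN @count;
END;
```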
Permissions
To drop an XML SCHEMA COLLECTION requires DROP permission on the collection.
Examples
The following example shows removing an XML schema collection.
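The code block is missing here; a sketch (the collection name is an assumption, matching the AdventureWorks sample collection):

```sql
DROP XML SCHEMA COLLECTION ManuInstructionsSchemaCollection;
GO
```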
See Also
CREATE XML SCHEMA COLLECTION (Transact-SQL )
ALTER XML SCHEMA COLLECTION (Transact-SQL )
EVENTDATA (Transact-SQL )
Compare Typed XML to Untyped XML
Requirements and Limitations for XML Schema Collections on the Server
ENABLE TRIGGER (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Enables a DML, DDL, or logon trigger.
Transact-SQL Syntax Conventions
Syntax
ENABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL }
ON { object_name | DATABASE | ALL SERVER } [ ; ]
Arguments
schema_name
Is the name of the schema to which the trigger belongs. schema_name cannot be specified for DDL or logon
triggers.
trigger_name
Is the name of the trigger to be enabled.
ALL
Indicates that all triggers defined at the scope of the ON clause are enabled.
object_name
Is the name of the table or view on which the DML trigger trigger_name was created to execute.
DATABASE
For a DDL trigger, indicates that trigger_name was created or modified to execute with database scope.
ALL SERVER
Applies to: SQL Server 2008 through SQL Server 2017.
For a DDL trigger, indicates that trigger_name was created or modified to execute with server scope. ALL
SERVER also applies to logon triggers.
NOTE
This option is not available in a contained database.
Remarks
Enabling a trigger does not re-create it. A disabled trigger still exists as an object in the current database, but does
not fire. Enabling a trigger causes it to fire when any Transact-SQL statements on which it was originally
programmed are executed. Triggers are disabled by using DISABLE TRIGGER. DML triggers defined on tables
can also be disabled or enabled by using ALTER TABLE.
Permissions
To enable a DML trigger, at a minimum, a user must have ALTER permission on the table or view on which the
trigger was created.
To enable a DDL trigger with server scope (ON ALL SERVER ) or a logon trigger, a user must have CONTROL
SERVER permission on the server. To enable a DDL trigger with database scope (ON DATABASE ), at a minimum,
a user must have ALTER ANY DATABASE DDL TRIGGER permission in the current database.
Examples
A. Enabling a DML trigger on a table
The following example disables trigger uAddress that was created on table Address in the AdventureWorks
database, and then enables it.
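The example code did not survive extraction; a sketch (the Person schema is an assumption, matching the AdventureWorks table):

```sql
DISABLE TRIGGER Person.uAddress ON Person.Address;
GO
ENABLE TRIGGER Person.uAddress ON Person.Address;
GO
```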
C. Enabling all triggers that were defined with the same scope
The following example enables all DDL triggers that were created at the server scope.
Applies to: SQL Server 2008 through SQL Server 2017.
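The statement for this example, reconstructed from the description:

```sql
ENABLE TRIGGER ALL ON ALL SERVER;
GO
```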
See Also
DISABLE TRIGGER (Transact-SQL )
ALTER TRIGGER (Transact-SQL )
CREATE TRIGGER (Transact-SQL )
DROP TRIGGER (Transact-SQL )
sys.triggers (Transact-SQL )
INSERT (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds one or more rows to a table or a view in SQL Server. For examples, see Examples.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
<object> ::=
{
[ server_name . database_name . schema_name .
| database_name .[ schema_name ] .
| schema_name .
]
table_or_view_name
}
<dml_table_source> ::=
SELECT <select_list>
FROM ( <dml_statement_with_output_clause> )
[AS] table_alias [ ( column_alias [ ,...n ] ) ]
[ WHERE <search_condition> ]
[ OPTION ( <query_hint> [ ,...n ] ) ]
-- External tool only syntax
INSERT
{
[BULK]
[ database_name . [ schema_name ] . | schema_name . ]
[ table_name | view_name ]
( <column_definition> )
[ WITH (
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] KEEP_NULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] TABLOCK ]
) ]
}
[; ]

<column_definition> ::=
column_name <data_type>
[ COLLATE collation_name ]
[ NULL | NOT NULL ]
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
Arguments
WITH <common_table_expression>
Specifies the temporary named result set, also known as common table expression, defined within the scope of
the INSERT statement. The result set is derived from a SELECT statement. For more information, see WITH
common_table_expression (Transact-SQL ).
TOP (expression) [ PERCENT ]
Specifies the number or percent of random rows that will be inserted. expression can be either a number or a
percent of the rows. For more information, see TOP (Transact-SQL ).
INTO
Is an optional keyword that can be used between INSERT and the target table.
server_name
Applies to: SQL Server 2008 through SQL Server 2017.
Is the name of the linked server on which the table or view is located. server_name can be specified as a linked
server name, or by using the OPENDATASOURCE function.
When server_name is specified as a linked server, database_name and schema_name are required. When
server_name is specified with OPENDATASOURCE, database_name and schema_name may not apply to all
data sources and are subject to the capabilities of the OLE DB provider that accesses the remote object.
database_name
Applies to: SQL Server 2008 through SQL Server 2017.
Is the name of the database.
schema_name
Is the name of the schema to which the table or view belongs.
table_or_view_name
Is the name of the table or view that is to receive the data.
A table variable, within its scope, can be used as a table source in an INSERT statement.
The view referenced by table_or_view_name must be updatable and reference exactly one base table in the
FROM clause of the view. For example, an INSERT into a multi-table view must use a column_list that
references only columns from one base table. For more information about updatable views, see CREATE VIEW
(Transact-SQL ).
rowset_function_limited
Applies to: SQL Server 2008 through SQL Server 2017.
Is either the OPENQUERY or OPENROWSET function. Use of these functions is subject to the capabilities of
the OLE DB provider that accesses the remote object.
WITH ( <table_hint_limited> [... n ] )
Specifies one or more table hints that are allowed for a target table. The WITH keyword and the parentheses
are required.
READPAST, NOLOCK, and READUNCOMMITTED are not allowed. For more information about table hints,
see Table Hints (Transact-SQL ).
IMPORTANT
The ability to specify the HOLDLOCK, SERIALIZABLE, READCOMMITTED, REPEATABLEREAD, or UPDLOCK hints on tables
that are targets of INSERT statements will be removed in a future version of SQL Server. These hints do not affect the
performance of INSERT statements. Avoid using them in new development work, and plan to modify applications that
currently use them.
Specifying the TABLOCK hint on a table that is the target of an INSERT statement has the same effect as
specifying the TABLOCKX hint. An exclusive lock is taken on the table.
(column_list)
Is a list of one or more columns in which to insert data. column_list must be enclosed in parentheses and
delimited by commas.
If a column is not in column_list, the Database Engine must be able to provide a value based on the definition of
the column; otherwise, the row cannot be loaded. The Database Engine automatically provides a value for the
column if the column:
Has an IDENTITY property. The next incremental identity value is used.
Has a default. The default value for the column is used.
Has a timestamp data type. The current timestamp value is used.
Is nullable. A null value is used.
Is a computed column. The calculated value is used.
column_list must be used when explicit values are inserted into an identity column, and the SET
IDENTITY_INSERT option must be ON for the table.
OUTPUT Clause
Returns inserted rows as part of the insert operation. The results can be returned to the processing application
or inserted into a table or table variable for further processing.
The OUTPUT clause is not supported in DML statements that reference local partitioned views, distributed
partitioned views, or remote tables, or INSERT statements that contain an execute_statement. The OUTPUT
INTO clause is not supported in INSERT statements that contain a <dml_table_source> clause.
VALUES
Introduces the list or lists of data values to be inserted. There must be one data value for each column in
column_list, if specified, or in the table. The value list must be enclosed in parentheses.
If the values in the Value list are not in the same order as the columns in the table or do not have a value for
each column in the table, column_list must be used to explicitly specify the column that stores each incoming
value.
You can use the Transact-SQL row constructor (also called a table value constructor) to specify multiple rows in
a single INSERT statement. The row constructor consists of a single VALUES clause with multiple value lists
enclosed in parentheses and separated by a comma. For more information, see Table Value Constructor
(Transact-SQL ).
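A minimal sketch of a row constructor (dbo.Scores and its columns are hypothetical):

```sql
-- One VALUES clause supplies three rows in a single INSERT.
INSERT INTO dbo.Scores (PlayerID, Score)
VALUES (1, 90),
       (2, 85),
       (3, 77);
```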
DEFAULT
Forces the Database Engine to load the default value defined for a column. If a default does not exist for the
column and the column allows null values, NULL is inserted. For a column defined with the timestamp data
type, the next timestamp value is inserted. DEFAULT is not valid for an identity column.
expression
Is a constant, a variable, or an expression. The expression cannot contain an EXECUTE statement.
When referencing the Unicode character data types nchar, nvarchar, and ntext, 'expression' should be
prefixed with the capital letter 'N'. If 'N' is not specified, SQL Server converts the string to the code page that
corresponds to the default collation of the database or column. Any characters not found in this code page are
lost.
derived_table
Is any valid SELECT statement that returns rows of data to be loaded into the table. The SELECT statement
cannot contain a common table expression (CTE ).
execute_statement
Is any valid EXECUTE statement that returns data with SELECT or READTEXT statements. For more
information, see EXECUTE (Transact-SQL ).
The RESULT SETS options of the EXECUTE statement cannot be specified in an INSERT…EXEC statement.
If execute_statement is used with INSERT, each result set must be compatible with the columns in the table or
in column_list.
execute_statement can be used to execute stored procedures on the same server or a remote server. The
procedure in the remote server is executed, and the result sets are returned to the local server and loaded into
the table in the local server. In a distributed transaction, execute_statement cannot be issued against a loopback
linked server when the connection has multiple active result sets (MARS ) enabled.
If execute_statement returns data with the READTEXT statement, each READTEXT statement can return a
maximum of 1 MB (1024 KB ) of data. execute_statement can also be used with extended procedures.
execute_statement inserts the data returned by the main thread of the extended procedure; however, output
from threads other than the main thread are not inserted.
You cannot specify a table-valued parameter as the target of an INSERT EXEC statement; however, it can be
specified as a source in the INSERT EXEC string or stored-procedure. For more information, see Use Table-
Valued Parameters (Database Engine).
<dml_table_source>
Specifies that the rows inserted into the target table are those returned by the OUTPUT clause of an INSERT,
UPDATE, DELETE, or MERGE statement, optionally filtered by a WHERE clause. If <dml_table_source> is
specified, the target of the outer INSERT statement must meet the following restrictions:
It must be a base table, not a view.
It cannot be a remote table.
It cannot have any triggers defined on it.
It cannot participate in any primary key-foreign key relationships.
It cannot participate in merge replication or updatable subscriptions for transactional replication.
The compatibility level of the database must be set to 100 or higher. For more information, see OUTPUT
Clause (Transact-SQL ).
<select_list>
Is a comma-separated list specifying which columns returned by the OUTPUT clause to insert. The
columns in <select_list> must be compatible with the columns into which values are being inserted.
<select_list> cannot reference aggregate functions or TEXTPTR.
NOTE
Any variables listed in the SELECT list refer to their original values, regardless of any changes made to them in
<dml_statement_with_output_clause>.
<dml_statement_with_output_clause>
Is a valid INSERT, UPDATE, DELETE, or MERGE statement that returns affected rows in an OUTPUT clause.
The statement cannot contain a WITH clause, and cannot target remote tables or partitioned views. If UPDATE
or DELETE is specified, it cannot be a cursor-based UPDATE or DELETE. Source rows cannot be referenced as
nested DML statements.
WHERE <search_condition>
Is any WHERE clause containing a valid <search_condition> that filters the rows returned by
<dml_statement_with_output_clause>. For more information, see Search Condition (Transact-SQL ). When
used in this context, <search_condition> cannot contain subqueries, scalar user-defined functions that perform
data access, aggregate functions, TEXTPTR, or full-text search predicates.
DEFAULT VALUES
Applies to: SQL Server 2008 through SQL Server 2017.
Forces the new row to contain the default values defined for each column.
BULK
Applies to: SQL Server 2008 through SQL Server 2017.
Used by external tools to upload a binary data stream. This option is not intended for use with tools such as
SQL Server Management Studio, SQLCMD, OSQL, or data access application programming interfaces such as
SQL Server Native Client.
FIRE_TRIGGERS
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies that any insert triggers defined on the destination table execute during the binary data stream upload
operation. For more information, see BULK INSERT (Transact-SQL ).
CHECK_CONSTRAINTS
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies that all constraints on the target table or view must be checked during the binary data stream upload
operation. For more information, see BULK INSERT (Transact-SQL ).
KEEP_NULLS
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies that empty columns should retain a null value during the binary data stream upload operation. For
more information, see Keep Nulls or Use Default Values During Bulk Import (SQL Server).
KILOBYTES_PER_BATCH = kilobytes_per_batch
Specifies the approximate number of kilobytes (KB ) of data per batch as kilobytes_per_batch. For more
information, see BULK INSERT (Transact-SQL ).
ROWS_PER_BATCH = rows_per_batch
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates the approximate number of rows of data in the binary data stream. For more information, see BULK
INSERT (Transact-SQL ).
NOTE
A syntax error is raised if a column list is not provided.
Remarks
For information specific to inserting data into SQL graph tables, see INSERT (SQL Graph).
Best Practices
Use the @@ROWCOUNT function to return the number of inserted rows to the client application. For more
information, see @@ROWCOUNT (Transact-SQL ).
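A brief sketch of this practice (the table is hypothetical):

```sql
INSERT INTO dbo.Scores (PlayerID, Score)
VALUES (4, 95);
-- Capture the count immediately; subsequent statements reset @@ROWCOUNT.
SELECT @@ROWCOUNT AS RowsInserted; -- 1 for a single-row insert
```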
Best Practices for Bulk Importing Data
Using INSERT INTO…SELECT to Bulk Import Data with Minimal Logging
You can use INSERT INTO <target_table> SELECT <columns> FROM <source_table> to efficiently transfer a large
number of rows from one table, such as a staging table, to another table with minimal logging. Minimal logging
can improve the performance of the statement and reduce the possibility of the operation filling the available
transaction log space during the transaction.
Minimal logging for this statement has the following requirements:
The recovery model of the database is set to simple or bulk-logged.
The target table is an empty or nonempty heap.
The target table is not used in replication.
The TABLOCK hint is specified for the target table.
Rows that are inserted into a heap as the result of an insert action in a MERGE statement may also be
minimally logged.
Unlike the BULK INSERT statement, which holds a less restrictive Bulk Update lock, INSERT INTO…SELECT
with the TABLOCK hint holds an exclusive (X) lock on the table. This means that you cannot insert rows using
parallel insert operations.
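A sketch of a minimally logged load meeting the requirements above (table names are assumptions; the database is assumed to use the simple or bulk-logged recovery model and the target is assumed to be a heap not used in replication):

```sql
-- The TABLOCK hint takes an exclusive (X) lock and enables minimal logging.
INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT Col1, Col2
FROM dbo.StagingTable;
```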
Using OPENROWSET and BULK to Bulk Import Data
The OPENROWSET function can accept the following table hints, which provide bulk-load optimizations with
the INSERT statement:
The TABLOCK hint can minimize the number of log records for the insert operation. The recovery model
of the database must be set to simple or bulk-logged and the target table cannot be used in replication.
For more information, see Prerequisites for Minimal Logging in Bulk Import.
The IGNORE_CONSTRAINTS hint can temporarily disable FOREIGN KEY and CHECK constraint
checking.
The IGNORE_TRIGGERS hint can temporarily disable trigger execution.
The KEEPDEFAULTS hint allows the insertion of a table column's default value, if any, instead of NULL
when the data record lacks a value for the column.
The KEEPIDENTITY hint allows the identity values in the imported data file to be used for the identity
column in the target table.
These optimizations are similar to those available with the BULK INSERT command. For more information, see
Table Hints (Transact-SQL ).
Data Types
When you insert rows, consider the following data type behavior:
If a value is being loaded into columns with a char, varchar, or varbinary data type, the padding or
truncation of trailing blanks (spaces for char and varchar, zeros for varbinary) is determined by the
SET ANSI_PADDING setting defined for the column when the table was created. For more information,
see SET ANSI_PADDING (Transact-SQL ).
The following table shows the default operation for SET ANSI_PADDING OFF.
If an empty string (' ') is loaded into a column with a varchar or text data type, the default operation is
to load a zero-length string.
Inserting a null value into a text or image column does not create a valid text pointer, nor does it
preallocate an 8-KB text page.
Columns created with the uniqueidentifier data type store specially formatted 16-byte binary values.
Unlike with identity columns, the Database Engine does not automatically generate values for columns
with the uniqueidentifier data type. During an insert operation, variables with a data type of
uniqueidentifier and string constants in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (36 characters
including hyphens, where x is a hexadecimal digit in the range 0-9 or a-f ) can be used for
uniqueidentifier columns. For example, 6F9619FF-8B86-D011-B42D-00C04FC964FF is a valid value
for a uniqueidentifier variable or column. Use the NEWID () function to obtain a globally unique ID
(GUID ).
Inserting Values into User-Defined Type Columns
You can insert values in user-defined type columns by:
Supplying a value of the user-defined type.
Supplying a value in a SQL Server system data type, as long as the user-defined type supports implicit
or explicit conversion from that type. The following example shows how to insert a value in a column of
user-defined type Point , by explicitly converting from a string.
A binary value can also be supplied without performing explicit conversion, because all user-defined
types are implicitly convertible from binary.
Calling a user-defined function that returns a value of the user-defined type. The following example uses
a user-defined function CreateNewPoint() to create a new value of user-defined type Point and insert
the value into the Cities table.
Error Handling
You can implement error handling for the INSERT statement by specifying the statement in a TRY…CATCH
construct.
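A minimal sketch of this pattern (the table is hypothetical):

```sql
BEGIN TRY
    INSERT INTO dbo.Scores (PlayerID, Score)
    VALUES (1, 100); -- fails if it violates a constraint on dbo.Scores
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
```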
If an INSERT statement violates a constraint or rule, or if it has a value incompatible with the data type of the
column, the statement fails and an error message is returned.
If INSERT is loading multiple rows with SELECT or EXECUTE, any violation of a rule or constraint that occurs
from the values being loaded causes the statement to be stopped, and no rows are loaded.
When an INSERT statement encounters an arithmetic error (overflow, divide by zero, or a domain error)
occurring during expression evaluation, the Database Engine handles these errors as if SET ARITHABORT is
set to ON. The batch is stopped, and an error message is returned. During expression evaluation when SET
ARITHABORT and SET ANSI_WARNINGS are OFF, if an INSERT, DELETE or UPDATE statement encounters
an arithmetic error, overflow, divide-by-zero, or a domain error, SQL Server inserts or updates a NULL value. If
the target column is not nullable, the insert or update action fails and the user receives an error.
Interoperability
When an INSTEAD OF trigger is defined on INSERT actions against a table or view, the trigger executes
instead of the INSERT statement. For more information about INSTEAD OF triggers, see CREATE TRIGGER
(Transact-SQL ).
Limitations and Restrictions
When you insert values into remote tables and not all values for all columns are specified, you must identify the
columns to which the specified values are to be inserted.
When TOP is used with INSERT, the referenced rows are not arranged in any order, and the ORDER BY
clause cannot be directly specified in these statements. If you need to use TOP to insert rows in a meaningful
chronological order, you must use TOP together with an ORDER BY clause that is specified in a subselect
statement. See the Examples section that follows in this topic.
INSERT queries that use SELECT with ORDER BY to populate rows guarantee how identity values are
computed but not the order in which the rows are inserted.
In Parallel Data Warehouse, the ORDER BY clause is invalid in VIEWS, CREATE TABLE AS SELECT, INSERT
SELECT, inline functions, derived tables, subqueries and common table expressions, unless TOP is also
specified.
Logging Behavior
The INSERT statement is always fully logged except when using the OPENROWSET function with the BULK
keyword or when using INSERT INTO <target_table> SELECT <columns> FROM <source_table> . These operations
can be minimally logged. For more information, see the section "Best Practices for Bulk Importing Data" earlier in
this topic.
Security
During a linked server connection, the sending server provides a login name and password to connect to the
receiving server on its behalf. For this connection to work, you must create a login mapping between the linked
servers by using sp_addlinkedsrvlogin.
When you use OPENROWSET(BULK…), it is important to understand how SQL Server handles
impersonation. For more information, see "Security Considerations" in Import Bulk Data by Using BULK
INSERT or OPENROWSET(BULK...) (SQL Server).
Permissions
INSERT permission is required on the target table.
INSERT permissions default to members of the sysadmin fixed server role, the db_owner and db_datawriter
fixed database roles, and the table owner. Members of the sysadmin, db_owner, and the db_securityadmin
roles, and the table owner can transfer permissions to other users.
To execute INSERT with the OPENROWSET function BULK option, you must be a member of the sysadmin
fixed server role or of the bulkadmin fixed server role.
Examples
CATEGORY | FEATURED SYNTAX ELEMENTS
Inserting data from other tables | INSERT…SELECT • INSERT…EXECUTE • WITH common table expression • TOP • OFFSET FETCH
Specifying target objects other than standard tables | Views • table variables
Inserting rows into a remote table | Linked server • OPENQUERY rowset function • OPENDATASOURCE rowset function
Bulk loading data from tables or data files | INSERT…SELECT • OPENROWSET function
Basic Syntax
Examples in this section demonstrate the basic functionality of the INSERT statement using the minimum
required syntax.
A. Inserting a single row of data
The following example inserts one row into the Production.UnitMeasure table in the AdventureWorks2012
database. The columns in this table are UnitMeasureCode , Name , and ModifiedDate . Because values for all
columns are supplied and are listed in the same order as the columns in the table, the column names do not
have to be specified in the column list.
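A sketch consistent with the description (the inserted values are assumptions):

```sql
INSERT INTO Production.UnitMeasure
VALUES (N'FT', N'Feet', '20080414');
```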
C. Inserting data that is not in the same order as the table columns
The following example uses a column list to explicitly specify the values that are inserted into each column. The
column order in the Production.UnitMeasure table in the AdventureWorks2012 database is UnitMeasureCode ,
Name , ModifiedDate ; however, the columns are not listed in that order in column_list.
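A sketch consistent with the description (the inserted values are assumptions); the column list reorders the values:

```sql
INSERT INTO Production.UnitMeasure (Name, UnitMeasureCode, ModifiedDate)
VALUES (N'Square Yards', N'Y2', GETDATE());
```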
J. Using TOP to limit the data inserted from the source table
The following example creates the table EmployeeSales and inserts the name and year-to-date sales data for
the top 5 random employees from the table HumanResources.Employee in the AdventureWorks2012 database.
The INSERT statement chooses any 5 rows returned by the SELECT statement. The OUTPUT clause displays
the rows that are inserted into the EmployeeSales table. Notice that the ORDER BY clause in the SELECT
statement is not used to determine the top 5 employees.
If you have to use TOP to insert rows in a meaningful chronological order, you must use TOP together with
ORDER BY in a subselect statement as shown in the following example. The OUTPUT clause displays the rows
that are inserted into the EmployeeSales table. Notice that the top 5 employees are now inserted based on the
results of the ORDER BY clause instead of random rows.
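A sketch of both variants (table and column names are assumptions; AdventureWorks keeps year-to-date sales in Sales.SalesPerson rather than HumanResources.Employee, so the source tables here differ from the description):

```sql
-- TOP on the INSERT: any 5 rows returned by the SELECT are inserted;
-- the ORDER BY here does not determine which rows TOP picks.
INSERT TOP (5) INTO dbo.EmployeeSales (LastName, YearlySales)
    OUTPUT inserted.LastName, inserted.YearlySales
SELECT p.LastName, sp.SalesYTD
FROM Sales.SalesPerson AS sp
JOIN Person.Person AS p
    ON p.BusinessEntityID = sp.BusinessEntityID
ORDER BY sp.SalesYTD DESC;

-- TOP with ORDER BY in a subselect: the 5 highest SalesYTD rows are inserted.
INSERT INTO dbo.EmployeeSales (LastName, YearlySales)
    OUTPUT inserted.LastName, inserted.YearlySales
SELECT LastName, SalesYTD
FROM (SELECT TOP (5) p.LastName, sp.SalesYTD
      FROM Sales.SalesPerson AS sp
      JOIN Person.Person AS p
          ON p.BusinessEntityID = sp.BusinessEntityID
      ORDER BY sp.SalesYTD DESC) AS Top5;
```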
USE master;
GO
-- Create a link to the remote data source.
-- Specify a valid server name for @datasrc as 'server_name'
-- or 'server_name\instance_name'.
-- Specify the remote data source in the FROM clause using a four-part name
-- in the form linked_server.catalog.schema.object.
R. Using the OPENROWSET function with BULK to bulk load data into a table
The following example inserts rows from a data file into a table by specifying the OPENROWSET function. The
IGNORE_TRIGGERS table hint is specified for performance optimization. For more examples, see Import Bulk
Data by Using BULK INSERT or OPENROWSET(BULK...) (SQL Server).
Applies to: SQL Server 2008 through SQL Server 2017.
Because the SQL Server query optimizer typically selects the best execution plan for a query, we recommend
that hints be used only as a last resort by experienced developers and database administrators.
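A sketch of the bulk-load pattern (the file path, format file, and target table are assumptions):

```sql
INSERT INTO HumanResources.Department WITH (IGNORE_TRIGGERS)
    (Name, GroupName)
SELECT b.Name, b.GroupName
FROM OPENROWSET(
    BULK 'C:\SQLFiles\DepartmentData.txt',
    FORMATFILE = 'C:\SQLFiles\BulkloadFormatFile.xml',
    ROWS_PER_BATCH = 15000) AS b;
```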
S. Using the TABLOCK hint to specify a locking method
The following example specifies that an exclusive (X) lock is taken on the Production.Location table and is held
until the end of the INSERT statement.
Applies to: SQL Server, SQL Database.
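A sketch consistent with the description (the inserted values are assumptions; on an INSERT target, TABLOCK behaves like TABLOCKX):

```sql
INSERT INTO Production.Location WITH (TABLOCK)
    (Name, CostRate, Availability)
VALUES (N'Final Inventory', 15.00, 80.00);
```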
See Also
BULK INSERT (Transact-SQL )
DELETE (Transact-SQL )
EXECUTE (Transact-SQL )
FROM (Transact-SQL )
IDENTITY (Property) (Transact-SQL )
NEWID (Transact-SQL )
SELECT (Transact-SQL )
UPDATE (Transact-SQL )
MERGE (Transact-SQL )
OUTPUT Clause (Transact-SQL )
Use the inserted and deleted Tables
INSERT (SQL Graph)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Adds one or more rows to a node or edge table in SQL Server.
NOTE
For standard Transact-SQL statements, see INSERT TABLE (Transact-SQL).
<object> ::=
{
[ server_name . database_name . schema_name .
| database_name .[ schema_name ] .
| schema_name .
]
node_table_name | edge_table_name
}
<dml_table_source> ::=
SELECT <select_list>
FROM ( <dml_statement_with_output_clause> )
[AS] table_alias [ ( column_alias [ ,...n ] ) ]
[ WHERE <on_or_where_search_condition> ]
[ OPTION ( <query_hint> [ ,...n ] ) ]
<on_or_where_search_condition> ::=
{ <search_condition_with_match> | <search_condition> }
<search_condition_with_match> ::=
{ <graph_predicate> | [ NOT ] <predicate> | ( <search_condition> ) }
[ AND { <graph_predicate> | [ NOT ] <predicate> | ( <search_condition> ) } ]
[ ,...n ]
<search_condition> ::=
{ [ NOT ] <predicate> | ( <search_condition> ) }
[ { AND | OR } [ NOT ] { <predicate> | ( <search_condition> ) } ]
[ ,...n ]
<graph_predicate> ::=
MATCH( <graph_search_pattern> [ AND <graph_search_pattern> ] [ , ...n] )
<graph_search_pattern>::=
<node_alias> { { <-( <edge_alias> )- | -( <edge_alias> )-> } <node_alias> }
<edge_table_column_list> ::=
($from_id, $to_id, [column_list])
Arguments
This document describes arguments pertaining to SQL graph. For a full list and description of supported
arguments in INSERT statement, see INSERT TABLE (Transact-SQL )
INTO
Is an optional keyword that can be used between INSERT and the target table.
search_condition_with_match
MATCH clause can be used in a subquery while inserting into a node or edge table. For MATCH statement syntax,
see GRAPH MATCH (Transact-SQL )
graph_search_pattern
Search pattern provided to MATCH clause as part of the graph predicate.
edge_table_column_list
Users must provide values for $from_id and $to_id while inserting into an edge. An error will be returned if a
value is not provided or NULLs are inserted into these columns.
Remarks
Inserting into a node is the same as inserting into any relational table. Values for the $node_id column are
automatically generated.
While inserting into an edge table, users must provide values for $from_id and $to_id columns.
BULK insert into a node table remains the same as for a relational table.
Before bulk inserting into an edge table, the node tables must be imported. Values for $from_id and $to_id can
then be extracted from the $node_id column of the node table and inserted as edges.
Permissions
INSERT permission is required on the target table.
INSERT permissions default to members of the sysadmin fixed server role, the db_owner and db_datawriter
fixed database roles, and the table owner. Members of the sysadmin, db_owner, and the db_securityadmin
roles, and the table owner can transfer permissions to other users.
To execute INSERT with the OPENROWSET function BULK option, you must be a member of the sysadmin fixed
server role or of the bulkadmin fixed server role.
Examples
A. Insert into node table
The following example creates a Person node table and inserts 2 rows into that table.
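A sketch consistent with the description (column names are assumptions; $node_id is generated automatically, so only the user-defined columns appear in the insert):

```sql
CREATE TABLE dbo.Person (
    ID   INTEGER PRIMARY KEY,
    name VARCHAR(100)
) AS NODE;
GO

INSERT INTO dbo.Person (ID, name)
VALUES (1, 'John'),
       (2, 'Mary');
```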
MERGE (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Performs insert, update, or delete operations on a target table based on the results of a join with a source table. For
example, you can synchronize two tables by inserting, updating, or deleting rows in one table based on differences
found in the other table.
Performance Tip: The conditional behavior described for the MERGE statement works best when the two tables
have a complex mixture of matching characteristics. For example, inserting a row if it does not exist, or updating
the row if it does match. When simply updating one table based on the rows of another table, improved
performance and scalability can be achieved with basic INSERT, UPDATE, and DELETE statements. For example:
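A sketch of the simpler alternative, with illustrative table and column names:

```sql
-- Update matching rows, then insert the rows that are missing; for simple
-- one-way synchronization this often outperforms an equivalent MERGE.
UPDATE t
SET t.Qty = s.Qty
FROM dbo.tbl_Target AS t
INNER JOIN dbo.tbl_Source AS s
    ON t.ProductID = s.ProductID;

INSERT INTO dbo.tbl_Target (ProductID, Qty)
SELECT s.ProductID, s.Qty
FROM dbo.tbl_Source AS s
WHERE NOT EXISTS
    (SELECT 1 FROM dbo.tbl_Target AS t WHERE t.ProductID = s.ProductID);
```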
Syntax
[ WITH <common_table_expression> [,...n] ]
MERGE
[ TOP ( expression ) [ PERCENT ] ]
[ INTO ] <target_table> [ WITH ( <merge_hint> ) ] [ [ AS ] table_alias ]
USING <table_source>
ON <merge_search_condition>
[ WHEN MATCHED [ AND <clause_search_condition> ]
THEN <merge_matched> ] [ ...n ]
[ WHEN NOT MATCHED [ BY TARGET ] [ AND <clause_search_condition> ]
THEN <merge_not_matched> ]
[ WHEN NOT MATCHED BY SOURCE [ AND <clause_search_condition> ]
THEN <merge_matched> ] [ ...n ]
[ <output_clause> ]
[ OPTION ( <query_hint> [ ,...n ] ) ]
;
<target_table> ::=
{
[ database_name . schema_name . | schema_name . ]
target_table
}
<merge_hint>::=
{
{ [ <table_hint_limited> [ ,...n ] ]
[ [ , ] INDEX ( index_val [ ,...n ] ) ] }
}
<table_source> ::=
{
table_or_view_name [ [ AS ] table_alias ] [ <tablesample_clause> ]
[ WITH ( table_hint [ [ , ]...n ] ) ]
| rowset_function [ [ AS ] table_alias ]
[ ( bulk_column_alias [ ,...n ] ) ]
| user_defined_function [ [ AS ] table_alias ]
| OPENXML <openxml_clause>
| derived_table [ AS ] table_alias [ ( column_alias [ ,...n ] ) ]
| <joined_table>
| <pivoted_table>
| <unpivoted_table>
}
<merge_search_condition> ::=
<search_condition>
<merge_matched>::=
{ UPDATE SET <set_clause> | DELETE }
<set_clause>::=
SET
{ column_name = { expression | DEFAULT | NULL }
| { udt_column_name.{ { property_name = expression
| field_name = expression }
| method_name ( argument [ ,...n ] ) }
}
| column_name { .WRITE ( expression , @Offset , @Length ) }
| @variable = expression
| @variable = column = expression
| column_name { += | -= | *= | /= | %= | &= | ^= | |= } expression
| @variable { += | -= | *= | /= | %= | &= | ^= | |= } expression
| @variable = column { += | -= | *= | /= | %= | &= | ^= | |= } expression
} [ ,...n ]
<merge_not_matched>::=
{
INSERT [ ( column_list ) ]
{ VALUES ( values_list )
| DEFAULT VALUES }
}
<clause_search_condition> ::=
<search_condition>
<predicate> ::=
{ expression { = | <> | != | > | >= | !> | < | <= | !< } expression
| string_expression [ NOT ] LIKE string_expression
[ ESCAPE 'escape_character' ]
| expression [ NOT ] BETWEEN expression AND expression
| expression IS [ NOT ] NULL
| CONTAINS
( { column | * } , '<contains_search_condition>' )
| FREETEXT ( { column | * } , 'freetext_string' )
| expression [ NOT ] IN ( subquery | expression [ ,...n ] )
| expression { = | <> | != | > | >= | !> | < | <= | !< }
{ ALL | SOME | ANY} ( subquery )
| EXISTS ( subquery ) }
<output_clause>::=
{
[ OUTPUT <dml_select_list> INTO { @table_variable | output_table }
[ (column_list) ] ]
[ OUTPUT <dml_select_list> ]
}
<dml_select_list>::=
{ <column_name> | scalar_expression }
[ [AS] column_alias_identifier ] [ ,...n ]
<column_name> ::=
{ DELETED | INSERTED | from_table_name } . { * | column_name }
| $action
Arguments
WITH <common_table_expression>
Specifies the temporary named result set or view, also known as common table expression, defined within the
scope of the MERGE statement. The result set is derived from a simple query and is referenced by the MERGE
statement. For more information, see WITH common_table_expression (Transact-SQL ).
TOP ( expression ) [ PERCENT ]
Specifies the number or percentage of rows that are affected. expression can be either a number or a percentage of
the rows. The rows referenced in the TOP expression are not arranged in any order. For more information, see TOP
(Transact-SQL ).
The TOP clause is applied after the entire source table and the entire target table are joined and the joined rows
that do not qualify for an insert, update, or delete action are removed. The TOP clause further reduces the number
of joined rows to the specified value and the insert, update, or delete actions are applied to the remaining joined
rows in an unordered fashion. That is, there is no order in which the rows are distributed among the actions
defined in the WHEN clauses. For example, specifying TOP (10) affects 10 rows; of these rows, 7 may be updated
and 3 inserted, or 1 may be deleted, 5 updated, and 4 inserted and so on.
Because the MERGE statement performs a full table scan of both the source and target tables, I/O performance
can be affected when using the TOP clause to modify a large table by creating multiple batches. In this scenario, it
is important to ensure that all successive batches target new rows.
database_name
Is the name of the database in which target_table is located.
schema_name
Is the name of the schema to which target_table belongs.
target_table
Is the table or view against which the data rows from <table_source> are matched based on
<clause_search_condition>. target_table is the target of any insert, update, or delete operations specified by the
WHEN clauses of the MERGE statement.
If target_table is a view, any actions against it must satisfy the conditions for updating views. For more information,
see Modify Data Through a View.
target_table cannot be a remote table. target_table cannot have any rules defined on it.
[ AS ] table_alias
Is an alternative name used to reference a table.
USING <table_source>
Specifies the data source that is matched with the data rows in target_table based on <merge_search_condition>.
The result of this match dictates the actions to take by the WHEN clauses of the MERGE statement.
<table_source> can be a remote table or a derived table that accesses remote tables.
<table_source> can be a derived table that uses the Transact-SQL table value constructor to construct a table by
specifying multiple rows.
For more information about the syntax and arguments of this clause, see FROM (Transact-SQL ).
ON <merge_search_condition>
Specifies the conditions on which <table_source> is joined with target_table to determine where they match.
Caution
It is important to specify only the columns from the target table that are used for matching purposes. That is,
specify columns from the target table that are compared to the corresponding column of the source table. Do not
attempt to improve query performance by filtering out rows in the target table in the ON clause, such as by
specifying AND NOT target_table.column_x = value . Doing so may return unexpected and incorrect results.
WHEN MATCHED THEN <merge_matched>
Specifies that all rows of target_table that match the rows returned by <table_source> ON
<merge_search_condition>, and satisfy any additional search condition, are either updated or deleted according to
the <merge_matched> clause.
The MERGE statement can have at most two WHEN MATCHED clauses. If two clauses are specified, then the first
clause must be accompanied by an AND <search_condition> clause. For any given row, the second WHEN
MATCHED clause is only applied if the first is not. If there are two WHEN MATCHED clauses, then one must
specify an UPDATE action and one must specify a DELETE action. If UPDATE is specified in the <merge_matched>
clause, and more than one row of <table_source> matches a row in target_table based on
<merge_search_condition>, SQL Server returns an error. The MERGE statement cannot update the same row
more than once, or update and delete the same row.
WHEN NOT MATCHED [ BY TARGET ] THEN <merge_not_matched>
Specifies that a row is inserted into target_table for every row returned by <table_source> ON
<merge_search_condition> that does not match a row in target_table, but does satisfy an additional search
condition, if present. The values to insert are specified by the <merge_not_matched> clause. The MERGE
statement can have only one WHEN NOT MATCHED clause.
WHEN NOT MATCHED BY SOURCE THEN <merge_matched>
Specifies that all rows of target_table that do not match the rows returned by <table_source> ON
<merge_search_condition>, and that satisfy any additional search condition, are either updated or deleted
according to the <merge_matched> clause.
The MERGE statement can have at most two WHEN NOT MATCHED BY SOURCE clauses. If two clauses are
specified, then the first clause must be accompanied by an AND <clause_search_condition> clause. For any given
row, the second WHEN NOT MATCHED BY SOURCE clause is only applied if the first is not. If there are two
WHEN NOT MATCHED BY SOURCE clauses, then one must specify an UPDATE action and one must specify a
DELETE action. Only columns from the target table can be referenced in <clause_search_condition>.
When no rows are returned by <table_source>, columns in the source table cannot be accessed. If the update or
delete action specified in the <merge_matched> clause references columns in the source table, error 207 (Invalid
column name) is returned. For example, the clause
WHEN NOT MATCHED BY SOURCE THEN UPDATE SET TargetTable.Col1 = SourceTable.Col1 may cause the statement to fail
because Col1 in the source table is inaccessible.
AND <clause_search_condition>
Specifies any valid search condition. For more information, see Search Condition (Transact-SQL ).
<table_hint_limited>
Specifies one or more table hints that are applied on the target table for each of the insert, update, or delete actions
that are performed by the MERGE statement. The WITH keyword and the parentheses are required.
NOLOCK and READUNCOMMITTED are not allowed. For more information about table hints, see Table Hints
(Transact-SQL ).
Specifying the TABLOCK hint on a table that is the target of an INSERT statement has the same effect as specifying
the TABLOCKX hint. An exclusive lock is taken on the table. When FORCESEEK is specified, it is applied to the
implicit instance of the target table joined with the source table.
Caution
Specifying READPAST with WHEN NOT MATCHED [ BY TARGET ] THEN INSERT may result in INSERT
operations that violate UNIQUE constraints.
INDEX ( index_val [ ,...n ] )
Specifies the name or ID of one or more indexes on the target table for performing an implicit join with the source
table. For more information, see Table Hints (Transact-SQL ).
<output_clause>
Returns a row for every row in target_table that is updated, inserted, or deleted, in no particular order. $action can
be specified in the output clause. $action is a column of type nvarchar(10) that returns one of three values for
each row: 'INSERT', 'UPDATE', or 'DELETE', according to the action that was performed on that row. For more
information about the arguments of this clause, see OUTPUT Clause (Transact-SQL ).
OPTION ( <query_hint> [ ,...n ] )
Specifies that optimizer hints are used to customize the way the Database Engine processes the statement. For
more information, see Query Hints (Transact-SQL ).
<merge_matched>
Specifies the update or delete action that is applied to all rows of target_table that do not match the rows returned
by <table_source> ON <merge_search_condition>, and that satisfy any additional search condition.
UPDATE SET <set_clause>
Specifies the list of column or variable names to be updated in the target table and the values with which to update
them.
For more information about the arguments of this clause, see UPDATE (Transact-SQL ). Setting a variable to the
same value as a column is not permitted.
DELETE
Specifies that the rows matching rows in target_table are deleted.
<merge_not_matched>
Specifies the values to insert into the target table.
(column_list)
Is a list of one or more columns of the target table in which to insert data. Columns must be specified as a single-
part name or else the MERGE statement will fail. column_list must be enclosed in parentheses and delimited by
commas.
VALUES ( values_list)
Is a comma-separated list of constants, variables, or expressions that return values to insert into the target table.
Expressions cannot contain an EXECUTE statement.
DEFAULT VALUES
Forces the inserted row to contain the default values defined for each column.
For more information about this clause, see INSERT (Transact-SQL ).
<search condition>
Specifies the search conditions used to specify <merge_search_condition> or <clause_search_condition>. For
more information about the arguments for this clause, see Search Condition (Transact-SQL ).
Remarks
At least one of the three MATCHED clauses must be specified, but they can be specified in any order. A variable
cannot be updated more than once in the same MATCHED clause.
Any insert, update, or delete actions specified on the target table by the MERGE statement are limited by any
constraints defined on it, including any cascading referential integrity constraints. If IGNORE_DUP_KEY is set to
ON for any unique indexes on the target table, MERGE ignores this setting.
The MERGE statement requires a semicolon (;) as a statement terminator. Error 10713 is raised when a MERGE
statement is run without the terminator.
When used after MERGE, @@ROWCOUNT (Transact-SQL ) returns the total number of rows inserted, updated,
and deleted to the client.
MERGE is a fully reserved keyword when the database compatibility level is set to 100 or higher. The MERGE
statement is available under both 90 and 100 database compatibility levels; however the keyword is not fully
reserved when the database compatibility level is set to 90.
Do not use the MERGE statement with queued updating replication; MERGE is not compatible with the queued
updating trigger. Replace the MERGE statement with an INSERT or an UPDATE statement.
Trigger Implementation
For every insert, update, or delete action specified in the MERGE statement, SQL Server fires any corresponding
AFTER triggers defined on the target table, but does not guarantee which action fires its triggers first or last.
Triggers defined for the same action honor the order you specify. For more information about setting trigger firing
order, see Specify First and Last Triggers.
If the target table has an enabled INSTEAD OF trigger defined on it for an insert, update, or delete action
performed by a MERGE statement, then it must have an enabled INSTEAD OF trigger for all of the actions
specified in the MERGE statement.
If there are any INSTEAD OF UPDATE or INSTEAD OF DELETE triggers defined on target_table, the update or
delete operations are not performed. Instead, the triggers fire and the inserted and deleted tables are populated
accordingly.
If there are any INSTEAD OF INSERT triggers defined on target_table, the insert operation is not performed.
Instead, the triggers fire and the inserted table is populated accordingly.
Permissions
Requires SELECT permission on the source table and INSERT, UPDATE, or DELETE permissions on the target
table. For additional information, see the Permissions section in the SELECT, INSERT, UPDATE, and DELETE
topics.
Examples
A. Using MERGE to perform INSERT and UPDATE operations on a table in a single statement
A common scenario is updating one or more columns in a table if a matching row exists, or inserting the data as a
new row if a matching row does not exist. This is usually done by passing parameters to a stored procedure that
contains the appropriate UPDATE and INSERT statements. With the MERGE statement, you can perform both
tasks in a single statement. The following example shows a stored procedure in the AdventureWorks2012 database
that contains both an INSERT statement and an UPDATE statement. The procedure is then modified to perform
the equivalent operations by using a single MERGE statement.
CREATE PROCEDURE dbo.InsertUnitMeasure
@UnitMeasureCode nchar(3),
@Name nvarchar(25)
AS
BEGIN
SET NOCOUNT ON;
-- Update the row if it exists.
UPDATE Production.UnitMeasure
SET Name = @Name
WHERE UnitMeasureCode = @UnitMeasureCode
-- Insert the row if the UPDATE statement failed.
IF (@@ROWCOUNT = 0 )
BEGIN
INSERT INTO Production.UnitMeasure (UnitMeasureCode, Name)
VALUES (@UnitMeasureCode, @Name)
END
END;
GO
-- Test the procedure and return the results.
EXEC InsertUnitMeasure @UnitMeasureCode = 'ABC', @Name = 'Test Value';
SELECT UnitMeasureCode, Name FROM Production.UnitMeasure
WHERE UnitMeasureCode = 'ABC';
GO
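The MERGE-based rewrite of the procedure was lost in extraction; a sketch of the equivalent single-statement version:

```sql
DROP PROCEDURE dbo.InsertUnitMeasure;
GO
CREATE PROCEDURE dbo.InsertUnitMeasure
    @UnitMeasureCode nchar(3),
    @Name nvarchar(25)
AS
BEGIN
    SET NOCOUNT ON;
    MERGE Production.UnitMeasure AS tgt
    USING (SELECT @UnitMeasureCode, @Name) AS src (UnitMeasureCode, Name)
        ON tgt.UnitMeasureCode = src.UnitMeasureCode
    -- Update the row if it exists; otherwise insert it.
    WHEN MATCHED THEN
        UPDATE SET Name = src.Name
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (UnitMeasureCode, Name)
        VALUES (src.UnitMeasureCode, src.Name);
END;
GO
```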
C. Using MERGE to perform UPDATE and INSERT operations on a target table by using a derived source table
The following example uses MERGE to modify the SalesReason table in the AdventureWorks2012 database by
either updating or inserting rows. When the value of NewName in the source table matches a value in the Name
column of the target table, ( SalesReason ), the ReasonType column is updated in the target table. When the value of
NewName does not match, the source row is inserted into the target table. The source table is a derived table that
uses the Transact-SQL table value constructor to specify multiple rows for the source table. For more information
about using the table value constructor in a derived table, see Table Value Constructor (Transact-SQL ). The
example also shows how to store the results of the OUTPUT clause in a table variable and then summarize the
results of the MERGE statement by performing a simple select operation that returns the count of inserted and
updated rows.
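The example code is missing from this copy; a reconstruction using the names described above (the VALUES rows are illustrative):

```sql
DECLARE @SummaryOfChanges TABLE (Change VARCHAR(20));

MERGE INTO Sales.SalesReason AS tgt
USING (VALUES ('Recommendation', 'Other'),
              ('Review', 'Marketing'),
              ('Internet', 'Promotion'))
      AS src (NewName, NewReasonType)
    ON tgt.Name = src.NewName
WHEN MATCHED THEN
    UPDATE SET ReasonType = src.NewReasonType
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Name, ReasonType)
    VALUES (src.NewName, src.NewReasonType)
-- Capture the action taken for each row in the table variable.
OUTPUT $action INTO @SummaryOfChanges;

-- Summarize the results of the MERGE statement.
SELECT Change, COUNT(*) AS CountPerChange
FROM @SummaryOfChanges
GROUP BY Change;
```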
See Also
SELECT (Transact-SQL )
INSERT (Transact-SQL )
UPDATE (Transact-SQL )
DELETE (Transact-SQL )
OUTPUT Clause (Transact-SQL )
MERGE in Integration Services Packages
FROM (Transact-SQL )
Table Value Constructor (Transact-SQL )
RENAME (Transact-SQL)
5/4/2018
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Renames a user-created table in SQL Data Warehouse. Renames a user-created table or database in Parallel Data
Warehouse.
NOTE
To rename a database in SQL Data Warehouse, use ALTER DATABASE (Azure SQL Data Warehouse). To rename a database in
Azure SQL Database, use the ALTER DATABASE (Azure SQL Database) statement. To rename a database in SQL Server, use
the stored procedure sp_renamedb (Transact-SQL).
Syntax
-- Syntax for Azure SQL Data Warehouse
-- Rename a table.
RENAME OBJECT [::] [ [ database_name . [ schema_name ] . ] | [ schema_name . ] ] table_name TO new_table_name
[;]
-- Syntax for Parallel Data Warehouse
-- Rename a table
RENAME OBJECT [::] [ [ database_name . [ schema_name ] . ] | [ schema_name . ] ] table_name TO new_table_name
[;]
-- Rename a database
RENAME DATABASE [::] database_name TO new_database_name
[;]
Arguments
RENAME OBJECT [::] [ [database_name . [ schema_name ] . ] | [ schema_name . ] ]table_name TO
new_table_name
APPLIES TO: SQL Data Warehouse, Parallel Data Warehouse
Change the name of a user-defined table. Specify the table to be renamed with a one-, two-, or three-part name.
Specify the new table new_table_name as a one-part name.
RENAME DATABASE [::] database_name TO new_database_name
APPLIES TO: Parallel Data Warehouse
Change the name of a user-defined database from database_name to new_database_name. You can't rename a
database to any of these reserved Parallel Data Warehouse database names:
master
model
msdb
tempdb
pdwtempdb1
pdwtempdb2
DWConfiguration
DWDiagnostics
DWQueue
Permissions
To run this command, you need this permission:
ALTER permission on the table
Locking
Renaming a table takes a shared lock on the DATABASE object, a shared lock on the SCHEMA object, and an
exclusive lock on the table.
Examples
A. Rename a database
APPLIES TO: Parallel Data Warehouse only
This example renames the user-defined database AdWorks to AdWorks2.
-- Rename the user defined database AdWorks
RENAME DATABASE AdWorks to AdWorks2;
B. Rename a table
APPLIES TO: SQL Data Warehouse, Parallel Data Warehouse
This example renames the Customer table to Customer1.
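The statement itself did not survive extraction; based on the syntax above it would be:

```sql
RENAME OBJECT::Customer TO Customer1;
```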
When renaming a table, all objects and properties associated with the table are updated to reference the new table
name. For example, table definitions, indexes, constraints, and permissions are updated. Views are not updated.
C. Move a table to a different schema
APPLIES TO: SQL Data Warehouse, Parallel Data Warehouse
If your intent is to move the object to a different schema, use ALTER SCHEMA (Transact-SQL ). For example, the
following statement moves the table item from the product schema to the dbo schema.
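A sketch of that statement:

```sql
-- Move table "item" from the product schema to the dbo schema.
ALTER SCHEMA dbo TRANSFER OBJECT::product.item;
```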
ADD SIGNATURE (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse
Adds a digital signature to a stored procedure, function, assembly, or trigger. Also adds a countersignature to a stored procedure, function, assembly, or trigger.
Transact-SQL Syntax Conventions
Syntax
ADD [ COUNTER ] SIGNATURE TO module_class::module_name
BY <crypto_list> [ ,...n ]
<crypto_list> ::=
CERTIFICATE cert_name
| CERTIFICATE cert_name [ WITH PASSWORD = 'password' ]
| CERTIFICATE cert_name WITH SIGNATURE = signed_blob
| ASYMMETRIC KEY Asym_Key_Name
| ASYMMETRIC KEY Asym_Key_Name [ WITH PASSWORD = 'password' ]
| ASYMMETRIC KEY Asym_Key_Name WITH SIGNATURE = signed_blob
Arguments
module_class
Is the class of the module to which the signature is added. The default for schema-scoped modules is OBJECT.
module_name
Is the name of a stored procedure, function, assembly, or trigger to be signed or countersigned.
CERTIFICATE cert_name
Is the name of a certificate with which to sign or countersign the stored procedure, function, assembly, or trigger.
WITH PASSWORD ='password'
Is the password that is required to decrypt the private key of the certificate or asymmetric key. This clause is only required if the private key is not protected by the database
master key.
SIGNATURE =signed_blob
Specifies the signed, binary large object (BLOB) of the module. This clause is useful if you want to ship a module without shipping the private key. When you use this clause,
only the module, signature, and public key are required to add the signed binary large object to a database. signed_blob is the blob itself in hexadecimal format.
ASYMMETRIC KEY Asym_Key_Name
Is the name of an asymmetric key with which to sign or counter-sign the stored procedure, function, assembly, or trigger.
Remarks
The module being signed or countersigned and the certificate or asymmetric key used to sign it must already exist. Every character in the module is included in the signature
calculation. This includes leading carriage returns and line feeds.
A module can be signed and countersigned by any number of certificates and asymmetric keys.
The signature of a module is dropped when the module is changed.
If a module contains an EXECUTE AS clause, the security ID (SID) of the principal is also included as a part of the signing process.
Caution
Module signing should only be used to grant permissions, never to deny or revoke permissions.
Inline table-valued functions cannot be signed.
Information about signatures is visible in the sys.crypt_properties catalog view.
WARNING
When recreating a procedure for signature, all of the statements in the original batch must match the recreation batch. If any portion of the batch differs, even in spaces or comments,
the resulting signature will be different.
Countersignatures
When executing a signed module, the signatures will be temporarily added to the SQL token, but the signatures are lost if the module executes another module or if the
module terminates execution. A countersignature is a special form of signature. By itself, a countersignature does not grant any permissions, however, it allows signatures
made by the same certificate or asymmetric key to be kept for the duration of the call made to the countersigned object.
For example, presume that user Alice calls procedure ProcSelectT1ForAlice, which calls procedure procSelectT1, which selects from table T1. Alice has EXECUTE permission
on ProcSelectT1ForAlice and procSelectT1, but she does not have SELECT permission on T1, and no ownership chaining is involved in this entire chain. Alice cannot access
table T1, either directly, or through the use of ProcSelectT1ForAlice and procSelectT1. Since we want Alice to always use ProcSelectT1ForAlice for access, we don't want to
grant her permission to execute procSelectT1. How can we accomplish this?
If we sign procSelectT1, such that procSelectT1 can access T1, then Alice can invoke procSelectT1 directly and she doesn't have to call ProcSelectT1ForAlice.
We could deny EXECUTE permission on procSelectT1 to Alice, but then Alice would not be able to call procSelectT1 through ProcSelectT1ForAlice either.
Signing ProcSelectT1ForAlice would not work by itself, because the signature would be lost in the call to procSelectT1.
However, by countersigning procSelectT1 with the same certificate used to sign ProcSelectT1ForAlice, SQL Server will keep the signature across the call chain and will allow
access to T1. If Alice attempts to call procSelectT1 directly, she cannot access T1, because the countersignature doesn't grant any rights. Example C below shows the
Transact-SQL for this example.
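Example C itself is not present in this copy; its core can be sketched as follows (the certificate name csSelectT and the surrounding setup are assumptions):

```sql
-- Assumes the certificate's private key is protected by the database
-- master key; otherwise add WITH PASSWORD = '...'.
-- Sign the outer procedure so its signature grants access to T1 ...
ADD SIGNATURE TO ProcSelectT1ForAlice BY CERTIFICATE csSelectT;
GO
-- ... and countersign the inner procedure so the signature is kept
-- across the nested call. The countersignature grants nothing by itself.
ADD COUNTER SIGNATURE TO procSelectT1 BY CERTIFICATE csSelectT;
GO
```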
Permissions
Requires ALTER permission on the object and CONTROL permission on the certificate or asymmetric key. If an associated private key is protected by a password, the user
also must have the password.
Examples
A. Signing a stored procedure by using a certificate
The following example signs the stored procedure HumanResources.uspUpdateEmployeeLogin with the certificate HumanResourcesDP.
USE AdventureWorks2012;
ADD SIGNATURE TO HumanResources.uspUpdateEmployeeLogin
BY CERTIFICATE HumanResourcesDP;
GO
The crypt_property signature that is returned by this statement will be different each time you create a procedure. Make a note of the result for use later in this example. For
this example, the result demonstrated is:
0x831F5530C86CC8ED606E5BC2720DA835351E46219A6D5DE9CE546297B88AEF3B6A7051891AF3EE7A68EAB37CD8380988B4C3F7469C8EABDD9579A2A5C507A4482905C2F24024FFB2F9BD7A953DD5E98470C4AA90CE83237739BB5FAE7BAC796E7710BDE
-- Cleanup
USE master;
GO
DROP DATABASE testDB;
DROP LOGIN Alice;
See Also
sys.crypt_properties (Transact-SQL)
DROP SIGNATURE (Transact-SQL)
CLOSE MASTER KEY (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Closes the master key of the current database.
Transact-SQL Syntax Conventions
Syntax
CLOSE MASTER KEY
Arguments
Takes no arguments.
Remarks
This statement reverses the operation performed by OPEN MASTER KEY. CLOSE MASTER KEY only succeeds
when the database master key was opened in the current session by using the OPEN MASTER KEY statement.
Permissions
No permissions are required.
Examples
USE AdventureWorks2012;
CLOSE MASTER KEY;
GO
See Also
CREATE MASTER KEY (Transact-SQL )
OPEN MASTER KEY (Transact-SQL )
Encryption Hierarchy
CLOSE SYMMETRIC KEY (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Closes a symmetric key, or closes all symmetric keys open in the current session.
Transact-SQL Syntax Conventions
Syntax
CLOSE { SYMMETRIC KEY key_name | ALL SYMMETRIC KEYS }
Arguments
Key_name
Is the name of the symmetric key to be closed.
Remarks
Open symmetric keys are bound to the session, not to the security context. An open key will continue to be
available until it is either explicitly closed or the session is terminated. CLOSE ALL SYMMETRIC KEYS will close
any database master key that was opened in the current session by using the OPEN MASTER KEY statement.
Information about open keys is visible in the sys.openkeys (Transact-SQL ) catalog view.
Permissions
No explicit permission is required to close a symmetric key.
Examples
A. Closing a symmetric key
The following example closes the symmetric key ShippingSymKey04 .
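The statement was dropped during extraction; from the description it is simply:

```sql
CLOSE SYMMETRIC KEY ShippingSymKey04;
```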
See Also
CREATE SYMMETRIC KEY (Transact-SQL )
ALTER SYMMETRIC KEY (Transact-SQL )
OPEN SYMMETRIC KEY (Transact-SQL )
DROP SYMMETRIC KEY (Transact-SQL )
DENY (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies a permission to a principal. Prevents that principal from inheriting the permission through its group or
role memberships. DENY takes precedence over all permissions, except that DENY does not apply to object
owners or members of the sysadmin fixed server role.
Security Note: Members of the sysadmin fixed server role and object owners cannot be denied permissions.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
DENY { ALL [ PRIVILEGES ] }
      | <permission> [ ( column [ ,...n ] ) ] [ ,...n ]
    [ ON [ <class> :: ] securable ]
    TO principal [ ,...n ]
    [ CASCADE ] [ AS principal ]
<permission> ::=
{ see the tables below }
<class> ::=
{ see the tables below }
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
DENY
<permission> [ ,...n ]
[ ON [ <class_> :: ] securable ]
TO principal [ ,...n ]
[ CASCADE ]
[;]
<permission> ::=
{ see the tables below }
<class> ::=
{
LOGIN
| DATABASE
| OBJECT
| ROLE
| SCHEMA
| USER
}
Arguments
ALL
This option does not deny all possible permissions. Denying ALL is equivalent to denying the following
permissions.
If the securable is a database, ALL means BACKUP DATABASE, BACKUP LOG, CREATE DATABASE,
CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and
CREATE VIEW.
If the securable is a scalar function, ALL means EXECUTE and REFERENCES.
If the securable is a table-valued function, ALL means DELETE, INSERT, REFERENCES, SELECT, and
UPDATE.
If the securable is a stored procedure, ALL means EXECUTE.
If the securable is a table, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
If the securable is a view, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
NOTE
The DENY ALL syntax is deprecated. This feature will be removed in a future version of Microsoft SQL Server. Avoid using
this feature in new development work, and plan to modify applications that currently use this feature. Deny specific
permissions instead.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the sub-topics
listed below.
column
Specifies the name of a column in a table on which permissions are being denied. The parentheses () are
required.
class
Specifies the class of the securable on which the permission is being denied. The scope qualifier :: is required.
securable
Specifies the securable on which the permission is being denied.
TO principal
Is the name of a principal. The principals to which permissions on a securable can be denied vary, depending on
the securable. See the securable-specific topics listed below for valid combinations.
CASCADE
Indicates that the permission is denied to the specified principal and to all other principals to which the principal
granted the permission. Required when the principal has the permission with GRANT OPTION.
AS principal
Use the AS principal clause to indicate that the principal recorded as the denier of the permission should be a
principal other than the person executing the statement. For example, presume that user Mary is principal_id 12
and user Raul is principal_id 15. Mary executes DENY SELECT ON OBJECT::X TO Steven AS Raul; Now the
sys.database_permissions table will indicate that the grantor_principal_id of the deny statement was 15
(Raul) even though the statement was actually executed by user 12 (Mary).
The use of AS in this statement does not imply the ability to impersonate another user.
Remarks
The full syntax of the DENY statement is complex. The syntax diagram above was simplified to draw attention to
its structure. Complete syntax for denying permissions on specific securables is described in the topics listed
below.
DENY will fail if CASCADE is not specified when denying a permission to a principal that was granted that
permission with GRANT OPTION specified.
The sp_helprotect system stored procedure reports permissions on a database-level securable.
Caution
A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the permissions
hierarchy has been preserved for the sake of backward compatibility. It will be removed in a future release.
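A minimal sketch of this behavior, using a hypothetical table dbo.T1 and user TestUser:

```sql
-- The column-level GRANT takes precedence over the table-level DENY.
DENY SELECT ON dbo.T1 TO TestUser;
GRANT SELECT (col1) ON dbo.T1 TO TestUser;
-- TestUser can still run: SELECT col1 FROM dbo.T1;
```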
Caution
Denying CONTROL permission on a database implicitly denies CONNECT permission on the database. A
principal that is denied CONTROL permission on a database will not be able to connect to that database.
Caution
Denying CONTROL SERVER permission implicitly denies CONNECT SQL permission on the server. A principal
that is denied CONTROL SERVER permission on a server will not be able to connect to that server.
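For example (SomeLogin is a hypothetical login):

```sql
USE master;
-- Denying CONTROL SERVER also implicitly denies CONNECT SQL,
-- so SomeLogin can no longer connect to the server.
DENY CONTROL SERVER TO SomeLogin;
-- To restore connectivity, remove the deny:
-- REVOKE CONTROL SERVER FROM SomeLogin;
```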
Permissions
The caller (or the principal specified with the AS option) must have either CONTROL permission on the
securable, or a higher permission that implies CONTROL permission on the securable. If using the AS option,
the specified principal must own the securable on which a permission is being denied.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can deny any
permission on any securable in the server. Grantees of CONTROL permission on the database, such as members
of the db_owner fixed database role, can deny any permission on any securable in the database. Grantees of
CONTROL permission on a schema can deny any permission on any object in the schema. If the AS clause is
used, the specified principal must own the securable on which permissions are being denied.
Examples
The following table lists the securables and the topics that describe the securable-specific syntax.
See Also
REVOKE (Transact-SQL )
sp_addlogin (Transact-SQL )
sp_adduser (Transact-SQL )
sp_changedbowner (Transact-SQL )
sp_dropuser (Transact-SQL )
sp_helprotect (Transact-SQL )
sp_helpuser (Transact-SQL )
DENY Assembly Permissions (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an assembly.
Transact-SQL Syntax Conventions
Syntax
DENY { permission [ ,...n ] } ON ASSEMBLY :: assembly_name
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]
Arguments
permission
Specifies a permission that can be denied on an assembly. Listed below.
ON ASSEMBLY ::assembly_name
Specifies the assembly on which the permission is being denied. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
An assembly is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be denied on an assembly are listed below, together with the
more general permissions that include them by implication.
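For example, the following sketch denies VIEW DEFINITION on a hypothetical assembly MyClrAssembly to a hypothetical user KhalidR:

```sql
DENY VIEW DEFINITION ON ASSEMBLY::MyClrAssembly TO KhalidR;
GO
```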
Permissions
Requires CONTROL permission on the assembly. If using the AS option, the specified principal must own the
assembly.
See Also
DENY (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE APPLICATION ROLE (Transact-SQL )
CREATE ASSEMBLY (Transact-SQL )
Encryption Hierarchy
DENY Asymmetric Key Permissions (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an asymmetric key.
Transact-SQL Syntax Conventions
Syntax
DENY { permission [ ,...n ] }
ON ASYMMETRIC KEY :: asymmetric_key_name
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]
Arguments
permission
Specifies a permission that can be denied on an asymmetric key. Listed below.
ON ASYMMETRIC KEY ::asymmetric_key_name
Specifies the asymmetric key on which the permission is being denied. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
An asymmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on an asymmetric key are listed below,
together with the more general permissions that include them by implication.
ASYMMETRIC KEY PERMISSION | IMPLIED BY ASYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
Permissions
Requires CONTROL permission on the asymmetric key. If the AS clause is used, the specified principal must own
the asymmetric key.
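For example (PacificSales09 and KhalidR are hypothetical names):

```sql
-- Deny VIEW DEFINITION on the asymmetric key, and propagate the deny to
-- principals to which KhalidR granted it.
DENY VIEW DEFINITION ON ASYMMETRIC KEY::PacificSales09 TO KhalidR CASCADE;
GO
```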
See Also
DENY (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
Encryption Hierarchy
DENY Availability Group Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an Always On availability group in SQL Server.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ] ON AVAILABILITY GROUP :: availability_group_name
TO < server_principal > [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]
<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
Arguments
permission
Specifies a permission that can be denied on an availability group. For a list of the permissions, see the Remarks
section later in this topic.
ON AVAILABILITY GROUP ::availability_group_name
Specifies the availability group on which the permission is being denied. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login to which the permission is being denied.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to deny the
permission.
Remarks
Permissions at the server scope can be denied only when the current database is master.
Information about availability groups is visible in the sys.availability_groups (Transact-SQL ) catalog view.
Information about server permissions is visible in the sys.server_permissions catalog view, and information about
server principals is visible in the sys.server_principals catalog view.
An availability group is a server-level securable. The most specific and limited permissions that can be denied on
an availability group are listed in the following table, together with the more general permissions that include
them by implication.
Permissions
Requires CONTROL permission on the availability group or ALTER ANY AVAILABILITY GROUP permission on
the server.
Examples
A. Denying VIEW DEFINITION permission on an availability group
The following example denies VIEW DEFINITION permission on availability group MyAg to SQL Server login
ZArifin .
USE master;
DENY VIEW DEFINITION ON AVAILABILITY GROUP::MyAg TO ZArifin;
GO
B. Denying TAKE OWNERSHIP permission with the CASCADE option
The following example denies TAKE OWNERSHIP permission on availability group MyAg to SQL Server login
PKomosinski and to the principals to which PKomosinski granted it.
USE master;
DENY TAKE OWNERSHIP ON AVAILABILITY GROUP::MyAg TO PKomosinski
CASCADE;
GO
See Also
REVOKE Availability Group Permissions (Transact-SQL )
GRANT Availability Group Permissions (Transact-SQL )
CREATE AVAILABILITY GROUP (Transact-SQL)
sys.availability_groups (Transact-SQL )
Always On Availability Groups Catalog Views (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
DENY Certificate Permissions (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a certificate.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ]
ON CERTIFICATE :: certificate_name
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]
Arguments
permission
Specifies a permission that can be denied on a certificate. Listed below.
ON CERTIFICATE ::certificate_name
Specifies the certificate on which the permission is being denied. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
A certificate is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be denied on a certificate are listed below, together with the
more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the certificate. If the AS clause is used, the specified principal must own the
certificate.
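For example (Shipping04 and MelanieK are hypothetical names):

```sql
DENY VIEW DEFINITION ON CERTIFICATE::Shipping04 TO MelanieK;
GO
```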
See Also
DENY (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE APPLICATION ROLE (Transact-SQL )
Encryption Hierarchy
DENY Database Permissions (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a database in SQL Server.
Transact-SQL Syntax Conventions
Syntax
DENY <permission> [ ,...n ]
TO <database_principal> [ ,...n ] [ CASCADE ]
[ AS <database_principal> ]
<permission> ::=
permission | ALL [ PRIVILEGES ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be denied on a database. For a list of the permissions, see the Remarks section later
in this topic.
ALL
This option does not deny all possible permissions. Denying ALL is equivalent to denying the following
permissions: BACKUP DATABASE, BACKUP LOG, CREATE DATABASE, CREATE DEFAULT, CREATE FUNCTION,
CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and CREATE VIEW.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
CASCADE
Indicates that the permission will also be denied to principals to which the specified principal granted it.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
A database is a securable contained by the server that is its parent in the permissions hierarchy. The most specific
and limited permissions that can be denied on a database are listed in the following table, together with the more
general permissions that include them by implication.
DATABASE PERMISSION | IMPLIED BY DATABASE PERMISSION | IMPLIED BY SERVER PERMISSION
ALTER ANY DATABASE EVENT SESSION (Applies to: Azure SQL Database.) | ALTER | ALTER ANY EVENT SESSION
CREATE DATABASE DDL EVENT NOTIFICATION | ALTER ANY DATABASE EVENT NOTIFICATION | CREATE DDL EVENT NOTIFICATION
CREATE REMOTE SERVICE BINDING | ALTER ANY REMOTE SERVICE BINDING | CONTROL SERVER
Permissions
The principal that executes this statement (or the principal specified with the AS option) must have CONTROL
permission on the database or a higher permission that implies CONTROL permission on the database.
If you are using the AS option, the specified principal must own the database.
Examples
A. Denying permission to create certificates
The following example denies CREATE CERTIFICATE permission on the AdventureWorks2012 database to user
MelanieK .
USE AdventureWorks2012;
DENY CREATE CERTIFICATE TO MelanieK;
GO
B. Denying REFERENCES permission to an application role
The following example denies REFERENCES permission on the AdventureWorks2012 database to application role
AuditMonitor.
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
USE AdventureWorks2012;
DENY REFERENCES TO AuditMonitor;
GO
C. Denying VIEW DEFINITION permission with the CASCADE option
The following example denies VIEW DEFINITION permission on the AdventureWorks2012 database to user
CarmineEs and to the principals to which CarmineEs granted it.
USE AdventureWorks2012;
DENY VIEW DEFINITION TO CarmineEs CASCADE;
GO
See Also
sys.database_permissions (Transact-SQL )
sys.database_principals (Transact-SQL )
CREATE DATABASE (SQL Server Transact-SQL )
GRANT (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
DENY Database Principal Permissions (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions granted on a database user, database role, or application role in SQL Server.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ]
ON
{ [ USER :: database_user ]
| [ ROLE :: database_role ]
| [ APPLICATION ROLE :: application_role ]
}
TO <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be denied on the database principal. For a list of the permissions, see the Remarks
section later in this topic.
USER ::database_user
Specifies the class and name of the user on which the permission is being denied. The scope qualifier (::) is
required.
ROLE ::database_role
Specifies the class and name of the role on which the permission is being denied. The scope qualifier (::) is
required.
APPLICATION ROLE ::application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies the class and name of the application role on which the permission is being denied. The scope qualifier
(::) is required.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Database User Permissions
A database user is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a database user are listed in the
following table, together with the more general permissions that include them by implication.
DATABASE USER PERMISSION | IMPLIED BY DATABASE USER PERMISSION | IMPLIED BY DATABASE PERMISSION
DATABASE ROLE PERMISSION | IMPLIED BY DATABASE ROLE PERMISSION | IMPLIED BY DATABASE PERMISSION
Permissions
Requires CONTROL permission on the specified principal, or a higher permission that implies CONTROL
permission.
Grantees of CONTROL permission on a database, such as members of the db_owner fixed database role, can
deny any permission on any securable in the database.
Examples
A. Denying CONTROL permission on a user to another user
The following example denies CONTROL permission on the AdventureWorks2012 user Wanida to user RolandX .
USE AdventureWorks2012;
DENY CONTROL ON USER::Wanida TO RolandX;
GO
B. Denying VIEW DEFINITION permission on a role to a user to which it was granted with GRANT OPTION
The following example denies VIEW DEFINITION permission on the AdventureWorks2012 role SammamishParking
to database user JinghaoLiu . The CASCADE option is specified because user JinghaoLiu was granted VIEW
DEFINITION permission WITH GRANT OPTION.
USE AdventureWorks2012;
DENY VIEW DEFINITION ON ROLE::SammamishParking
TO JinghaoLiu CASCADE;
GO
C. Denying IMPERSONATE permission on a user to an application role
The following example denies IMPERSONATE permission on user HamithaL to the AdventureWorks2012
application role AccountsPayable17 .
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
USE AdventureWorks2012;
DENY IMPERSONATE ON USER::HamithaL TO AccountsPayable17;
GO
See Also
GRANT Database Principal Permissions (Transact-SQL )
REVOKE Database Principal Permissions (Transact-SQL )
sys.database_principals (Transact-SQL )
sys.database_permissions (Transact-SQL )
CREATE USER (Transact-SQL )
CREATE APPLICATION ROLE (Transact-SQL )
CREATE ROLE (Transact-SQL )
GRANT (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
DENY Database Scoped Credential (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a database scoped credential.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ]
ON DATABASE SCOPED CREDENTIAL :: credential_name
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]
Arguments
permission
Specifies a permission that can be denied on a database scoped credential. Listed below.
ON DATABASE SCOPED CREDENTIAL ::credential_name
Specifies the database scoped credential on which the permission is being denied. The scope qualifier "::" is
required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
A database scoped credential is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be denied on a database scoped
credential are listed below, together with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the database scoped credential. If the AS clause is used, the specified principal
must own the database scoped credential.
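For example (AppCred and WanidaBenshoof are hypothetical names):

```sql
DENY ALTER ON DATABASE SCOPED CREDENTIAL::AppCred TO WanidaBenshoof;
GO
```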
See Also
DENY (Transact-SQL )
GRANT database scoped credential (Transact-SQL )
REVOKE database scoped credential (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
DENY Endpoint Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an endpoint.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ] ON ENDPOINT :: endpoint_name
TO < server_principal > [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]
<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
Arguments
permission
Specifies a permission that can be denied on an endpoint. For a list of the permissions, see the Remarks section
later in this topic.
ON ENDPOINT ::endpoint_name
Specifies the endpoint on which the permission is being denied. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login to which the permission is being denied.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to deny the
permission.
Remarks
Permissions at the server scope can be denied only when the current database is master.
Information about endpoints is visible in the sys.endpoints catalog view. Information about server permissions is
visible in the sys.server_permissions catalog view, and information about server principals is visible in the
sys.server_principals catalog view.
An endpoint is a server-level securable. The most specific and limited permissions that can be denied on an
endpoint are listed in the following table, together with the more general permissions that include them by
implication.
Permissions
Requires CONTROL permission on the endpoint or ALTER ANY ENDPOINT permission on the server.
Examples
A. Denying VIEW DEFINITION permission on an endpoint
The following example denies VIEW DEFINITION permission on the endpoint Mirror7 to the SQL Server login
ZArifin .
USE master;
DENY VIEW DEFINITION ON ENDPOINT::Mirror7 TO ZArifin;
GO
B. Denying TAKE OWNERSHIP permission with the CASCADE option
The following example denies TAKE OWNERSHIP permission on the endpoint Shipping83 to SQL Server login
PKomosinski and to the principals to which PKomosinski granted it.
USE master;
DENY TAKE OWNERSHIP ON ENDPOINT::Shipping83 TO PKomosinski
CASCADE;
GO
See Also
GRANT Endpoint Permissions (Transact-SQL )
REVOKE Endpoint Permissions (Transact-SQL )
CREATE ENDPOINT (Transact-SQL )
Endpoints Catalog Views (Transact-SQL )
sys.endpoints (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
DENY Full-Text Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a full-text catalog and full-text stoplists.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ] ON
FULLTEXT
{
CATALOG :: full-text_catalog_name
|
STOPLIST :: full-text_stoplist_name
}
TO database_principal [ ,...n ] [ CASCADE ]
[ AS denying_principal ]
Arguments
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON FULLTEXT CATALOG ::full-text_catalog_name
Specifies the full-text catalog on which the permission is being denied. The scope qualifier :: is required.
ON FULLTEXT STOPLIST ::full-text_stoplist_name
Specifies the full-text stoplist on which the permission is being denied. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by
this principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Permissions
Requires CONTROL permission on the full-text catalog. If using the AS option, the specified principal must own
the full-text catalog.
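For example (ftCatalog, ProductStoplist, and MaryT are hypothetical names):

```sql
DENY VIEW DEFINITION ON FULLTEXT CATALOG::ftCatalog TO MaryT;
DENY VIEW DEFINITION ON FULLTEXT STOPLIST::ProductStoplist TO MaryT;
GO
```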
See Also
CREATE APPLICATION ROLE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE CERTIFICATE (Transact-SQL )
CREATE FULLTEXT CATALOG (Transact-SQL )
CREATE FULLTEXT STOPLIST (Transact-SQL )
DENY (Transact-SQL )
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL )
GRANT Full-Text Permissions (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
sys.fn_builtin_permissions (Transact-SQL )
sys.fulltext_catalogs (Transact-SQL )
sys.fulltext_stoplists (Transact-SQL )
DENY Object Permissions (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a member of the OBJECT class of securables. These are the members of the OBJECT
class: tables, views, table-valued functions, stored procedures, extended stored procedures, scalar functions,
aggregate functions, service queues, and synonyms.
Transact-SQL Syntax Conventions
Syntax
DENY <permission> [ ,...n ] ON
[ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]
TO <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<permission> ::=
ALL [ PRIVILEGES ] | permission [ ( column [ ,...n ] ) ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be denied on a schema-contained object. For a list of the permissions, see the
Remarks section later in this topic.
ALL
Denying ALL does not deny all possible permissions. Denying ALL is equivalent to denying all ANSI-92
permissions applicable to the specified object. The meaning of ALL varies as follows:
Scalar function permissions: EXECUTE, REFERENCES.
Table-valued function permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
Stored Procedure permissions: EXECUTE.
Table permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
View permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
PRIVILEGES
Included for ANSI-92 compliance. Does not change the behavior of ALL.
column
Specifies the name of a column in a table, view, or table-valued function on which the permission is being denied.
The parentheses ( ) are required. Only SELECT, REFERENCES, and UPDATE permissions can be denied on a
column. column can be specified in the permissions clause or after the securable name.
Caution
A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the permissions
hierarchy has been preserved for backward compatibility.
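The two column placements are equivalent, as this sketch with a hypothetical table dbo.T1 and user RosaQdM shows:

```sql
-- Column list after the securable name:
DENY SELECT ON dbo.T1 (col1) TO RosaQdM;
-- Column list inside the permission clause:
DENY SELECT (col1) ON dbo.T1 TO RosaQdM;
```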
ON [ OBJECT :: ] [ schema_name ] . object_name
Specifies the object on which the permission is being denied. The OBJECT phrase is optional if schema_name is
specified. If the OBJECT phrase is used, the scope qualifier (::) is required. If schema_name is not specified, the
default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
TO <database_principal>
Specifies the principal to which the permission is being denied.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Information about objects is visible in various catalog views. For more information, see Object Catalog Views
(Transact-SQL ).
An object is a schema-level securable contained by the schema that is its parent in the permissions hierarchy. The
most specific and limited permissions that can be denied on an object are listed in the following table, together
with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the object.
If you use the AS clause, the specified principal must own the object on which permissions are being denied.
Examples
The following examples use the AdventureWorks database.
A. Denying SELECT permission on a table
The following example denies SELECT permission to the user RosaQdM on the table Person.Address .
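A minimal form of that statement, using the object and user names given above:

```sql
DENY SELECT ON OBJECT::Person.Address TO RosaQdM;
GO
```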
See Also
GRANT Object Permissions (Transact-SQL )
REVOKE Object Permissions (Transact-SQL )
Object Catalog Views (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
Securables
sys.fn_builtin_permissions (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
sys.fn_my_permissions (Transact-SQL )
DENY Schema Permissions (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a schema.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ] ON SCHEMA :: schema_name
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]
Arguments
permission
Specifies a permission that can be denied on a schema. For a list of these permissions, see the Remarks section
later in this topic.
ON SCHEMA :: schema_name
Specifies the schema on which the permission is being denied. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being denied. database_principal can be one of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission.
denying_principal can be one of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal
Remarks
A schema is a database-level securable that is contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a schema are listed in the following
table, together with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the schema. If you are using the AS option, the specified principal must own
the schema.
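Examples
The following sketch denies SELECT permission on a schema to a database user. The schema name HumanResources and user name Guy are illustrative, not taken from this topic:

```sql
DENY SELECT ON SCHEMA :: HumanResources TO Guy;
GO
```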
See Also
CREATE SCHEMA (Transact-SQL )
DENY (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
sys.fn_builtin_permissions (Transact-SQL )
sys.fn_my_permissions (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
DENY Search Property List Permissions (Transact-
SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a search property list.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ] ON
SEARCH PROPERTY LIST :: search_property_list_name
TO database_principal [ ,...n ] [ CASCADE ]
[ AS denying_principal ]
Arguments
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON SEARCH PROPERTY LIST ::search_property_list_name
Specifies the search property list on which the permission is being denied. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being denied. The principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission. The
principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
SEARCH PROPERTY LIST Permissions
A search property list is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a search property list are listed in the
following table, together with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the full-text catalog. If using the AS option, the specified principal must own
the full-text catalog.
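Examples
The following sketch denies ALTER permission on a search property list. The list name DocumentTablePropertyList and user name Mary are illustrative, not taken from this topic:

```sql
DENY ALTER ON SEARCH PROPERTY LIST :: DocumentTablePropertyList TO Mary;
GO
```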
See Also
CREATE APPLICATION ROLE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE CERTIFICATE (Transact-SQL )
CREATE SEARCH PROPERTY LIST (Transact-SQL )
DENY (Transact-SQL )
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL )
GRANT Search Property List Permissions (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
Principals (Database Engine)
REVOKE Search Property List Permissions (Transact-SQL )
sys.fn_builtin_permissions (Transact-SQL )
sys.registered_search_property_lists (Transact-SQL )
Search Document Properties with Search Property Lists
DENY Server Permissions (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a server.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ]
TO <grantee_principal> [ ,...n ]
[ CASCADE ]
[ AS <grantor_principal> ]
Arguments
permission
Specifies a permission that can be denied on a server. For a list of the permissions, see the Remarks section later in
this topic.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
TO <server_principal>
Specifies the principal to which the permission is denied.
AS <grantor_principal>
Specifies the principal from which the principal executing this query derives its right to deny the permission.
SQL_Server_login
Specifies a SQL Server login.
SQL_Server_login_mapped_to_Windows_login
Specifies a SQL Server login mapped to a Windows login.
SQL_Server_login_mapped_to_Windows_group
Specifies a SQL Server login mapped to a Windows group.
SQL_Server_login_mapped_to_certificate
Specifies a SQL Server login mapped to a certificate.
SQL_Server_login_mapped_to_asymmetric_key
Specifies a SQL Server login mapped to an asymmetric key.
server_role
Specifies a server role.
Remarks
Permissions at the server scope can be denied only when the current database is master.
Information about server permissions can be viewed in the sys.server_permissions catalog view, and information
about server principals can be viewed in the sys.server_principals catalog view. Information about membership of
server roles can be viewed in the sys.server_role_members catalog view.
A server is the highest level of the permissions hierarchy. The most specific and limited permissions that can be
denied on a server are listed in the following table.
The following three server permissions were added in SQL Server 2014 (12.x).
CONNECT ANY DATABASE Permission
Grant CONNECT ANY DATABASE to a login that must connect to all databases that currently exist and to any
new databases that might be created in the future. It does not grant any permission in any database beyond connect.
Combine with SELECT ALL USER SECURABLES or VIEW SERVER STATE to allow an auditing process to
view all data or all database states on the instance of SQL Server.
IMPERSONATE ANY LOGIN Permission
When granted, allows a middle-tier process to impersonate the account of clients connecting to it, as it connects to
databases. When denied, a highly privileged login can be blocked from impersonating other logins. For example, a
login with CONTROL SERVER permission can be blocked from impersonating other logins.
SELECT ALL USER SECURABLES Permission
When granted, a login such as an auditor can view data in all databases that the user can connect to. When denied,
prevents access to objects unless they are in the sys schema.
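The three permissions above are denied like any other server permission. The login name AuditLogin in this sketch is illustrative, not taken from this topic:

```sql
USE master;
DENY CONNECT ANY DATABASE TO AuditLogin;
DENY IMPERSONATE ANY LOGIN TO AuditLogin;
DENY SELECT ALL USER SECURABLES TO AuditLogin;
GO
```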
Permissions
Requires CONTROL SERVER permission or ownership of the securable. If you use the AS clause, the specified
principal must own the securable on which permissions are being denied.
Examples
A. Denying CONNECT SQL permission to a SQL Server login and principals to which the login has regranted it
The following example denies CONNECT SQL permission to the SQL Server login Annika and to the principals to
which she has granted the permission.
USE master;
DENY CONNECT SQL TO Annika CASCADE;
GO
B. Denying CREATE ENDPOINT permission to a SQL Server login using the AS option
The following example denies CREATE ENDPOINT permission to the user ArifS . The example uses the AS option to
specify MandarP as the principal from which the executing principal derives the authority to do so.
USE master;
DENY CREATE ENDPOINT TO ArifS AS MandarP;
GO
See Also
GRANT (Transact-SQL )
DENY (Transact-SQL )
DENY Server Permissions (Transact-SQL )
REVOKE Server Permissions (Transact-SQL )
Permissions Hierarchy (Database Engine)
sys.fn_builtin_permissions (Transact-SQL )
sys.fn_my_permissions (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
DENY Server Principal Permissions (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions granted on a SQL Server login.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ]
ON
{ [ LOGIN :: SQL_Server_login ]
| [ SERVER ROLE :: server_role ] }
TO <server_principal> [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]
<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
| server_role
Arguments
permission
Specifies a permission that can be denied on a SQL Server login. For a list of the permissions, see the Remarks
section later in this topic.
LOGIN :: SQL_Server_login
Specifies the SQL Server login on which the permission is being denied. The scope qualifier (::) is required.
SERVER ROLE :: server_role
Specifies the server role on which the permission is being denied. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login or server role to which the permission is being denied.
TO SQL_Server_login
Specifies the SQL Server login to which the permission is being denied.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
server_role
Specifies the name of a server role.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to deny the
permission.
Remarks
Permissions at the server scope can be denied only when the current database is master.
Information about server permissions is available in the sys.server_permissions catalog view. Information about
server principals is available in the sys.server_principals catalog view.
The DENY statement fails if CASCADE is not specified when you are denying a permission to a principal that was
granted that permission with GRANT OPTION.
SQL Server logins and server roles are server-level securables. The most specific and limited permissions that can
be denied on a SQL Server login or server role are listed in the following table, together with the more general
permissions that include them by implication.
SQL Server login or server role permission | Implied by SQL Server login or server role permission | Implied by server permission
Permissions
For logins, requires CONTROL permission on the login or ALTER ANY LOGIN permission on the server.
For server roles, requires CONTROL permission on the server role or ALTER ANY SERVER ROLE permission on
the server.
Examples
A. Denying IMPERSONATE permission on a login
The following example denies IMPERSONATE permission on the SQL Server login WanidaBenshoof to a SQL Server
login created from the Windows user AdvWorks\YoonM .
USE master;
DENY IMPERSONATE ON LOGIN::WanidaBenshoof TO [AdvWorks\YoonM];
GO
B. Denying VIEW DEFINITION permission with CASCADE
USE master;
DENY VIEW DEFINITION ON LOGIN::EricKurjan TO RMeyyappan
CASCADE;
GO
C. Denying VIEW DEFINITION permission on a server role
USE master;
DENY VIEW DEFINITION ON SERVER ROLE::Sales TO Auditors;
GO
See Also
sys.server_principals (Transact-SQL )
sys.server_permissions (Transact-SQL )
GRANT Server Principal Permissions (Transact-SQL )
REVOKE Server Principal Permissions (Transact-SQL )
CREATE LOGIN (Transact-SQL )
Principals (Database Engine)
Permissions (Database Engine)
Security Functions (Transact-SQL )
Security Stored Procedures (Transact-SQL )
DENY Service Broker Permissions (Transact-SQL)
5/4/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a Service Broker contract, message type, remote service binding, route, or service.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ] ON
{
[ CONTRACT :: contract_name ]
| [ MESSAGE TYPE :: message_type_name ]
| [ REMOTE SERVICE BINDING :: remote_binding_name ]
| [ ROUTE :: route_name ]
| [ SERVICE :: service_name ]
}
TO database_principal [ ,...n ]
[ CASCADE ]
[ AS denying_principal ]
Arguments
permission
Specifies a permission that can be denied on a Service Broker securable. For a list of the permissions, see the
Remarks section later in this topic.
CONTRACT ::contract_name
Specifies the contract on which the permission is being denied. The scope qualifier :: is required.
MESSAGE TYPE ::message_type_name
Specifies the message type on which the permission is being denied. The scope qualifier :: is required.
REMOTE SERVICE BINDING ::remote_binding_name
Specifies the remote service binding on which the permission is being denied. The scope qualifier :: is required.
ROUTE ::route_name
Specifies the route on which the permission is being denied. The scope qualifier :: is required.
SERVICE ::service_name
Specifies the service on which the permission is being denied. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being denied. One of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
denying_principal
Specifies a principal from which the principal executing this query derives its right to deny the permission. One of
the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal
Remarks
Service Broker Contracts
A Service Broker contract is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be denied on a Service Broker contract
are listed in the following table, together with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the Service Broker contract, message type, remote service binding, route, or
service. If the AS clause is used, the specified principal must own the securable on which permissions are being
denied.
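Examples
The following sketch denies SEND permission on a Service Broker service. The service and user names are illustrative, not taken from this topic:

```sql
DENY SEND ON SERVICE::[//Adventure-Works.com/ExpenseService] TO MikeB;
GO
```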
See Also
Principals (Database Engine)
REVOKE Service Broker Permissions (Transact-SQL )
DENY (Transact-SQL )
Permissions (Database Engine)
DENY Symmetric Key Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a symmetric key.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ]
ON SYMMETRIC KEY :: symmetric_key_name
TO <database_principal> [ ,...n ] [ CASCADE ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be denied on a symmetric key. For a list of the permissions, see the Remarks
section later in this topic.
ON SYMMETRIC KEY ::symmetric_key_name
Specifies the symmetric key on which the permission is being denied. The scope qualifier (::) is required.
TO <database_principal>
Specifies the principal to which the permission is being denied.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Information about symmetric keys is visible in the sys.symmetric_keys catalog view.
A symmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be denied on a symmetric key are listed in the
following table, together with the more general permissions that include them by implication.
Symmetric key permission | Implied by symmetric key permission | Implied by database permission
Permissions
Requires CONTROL permission on the symmetric key or ALTER ANY SYMMETRIC KEY permission on the
database. If you use the AS option, the specified principal must own the symmetric key.
Examples
The following example denies ALTER permission on the symmetric key SamInventory42 to the database user
HamidS .
USE AdventureWorks2012;
DENY ALTER ON SYMMETRIC KEY::SamInventory42 TO HamidS;
GO
See Also
sys.symmetric_keys (Transact-SQL )
GRANT Symmetric Key Permissions (Transact-SQL )
REVOKE Symmetric Key Permissions (Transact-SQL )
CREATE SYMMETRIC KEY (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
DENY System Object Permissions (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on system objects such as stored procedures, extended stored procedures, functions, and
views.
Transact-SQL Syntax Conventions
Syntax
DENY { SELECT | EXECUTE } ON [ sys.]system_object TO principal
Arguments
[ sys.]
The sys qualifier is required only when you are referring to catalog views and dynamic management views.
system_object
Specifies the object on which permission is being denied.
principal
Specifies the principal to which the permission is being denied.
Remarks
This statement can be used to deny permissions on certain stored procedures, extended stored procedures, table-
valued functions, scalar functions, views, catalog views, compatibility views, INFORMATION_SCHEMA views,
dynamic management views, and system tables that are installed by SQL Server. Each of these system objects
exists as a unique record in the resource database (mssqlsystemresource). The resource database is read-only. A
link to the object is exposed as a record in the sys schema of every database.
Default name resolution resolves unqualified procedure names to the resource database. Therefore, the sys
qualifier is only required when you are specifying catalog views and dynamic management views.
Caution
Denying permissions on system objects will cause applications that depend on them to fail. SQL Server
Management Studio uses catalog views and may not function as expected if you change the default permissions
on catalog views.
Denying permissions on triggers and on columns of system objects is not supported.
Permissions on system objects will be preserved during upgrades of SQL Server.
System objects are visible in the sys.system_objects catalog view. The permissions on system objects are visible in
the sys.database_permissions catalog view in the master database.
The following query returns information about permissions of system objects:
SELECT * FROM master.sys.database_permissions AS dp
JOIN sys.system_objects AS so
ON dp.major_id = so.object_id
WHERE dp.class = 1 AND so.parent_object_id = 0 ;
GO
Permissions
Requires CONTROL SERVER permission.
Examples
The following example denies EXECUTE permission on xp_cmdshell to public .
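A minimal form of that statement, using the object and principal named above:

```sql
DENY EXECUTE ON sys.xp_cmdshell TO public;
GO
```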
See Also
Transact-SQL Syntax Conventions (Transact-SQL )
sys.database_permissions (Transact-SQL )
GRANT System Object Permissions (Transact-SQL )
REVOKE System Object Permissions (Transact-SQL )
DENY Type Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on a type in SQL Server.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ] ON TYPE :: [ schema_name . ] type_name
TO <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be denied on a type. For a list of the permissions, see the Remarks section later in
this topic.
ON TYPE :: [ schema_name. ] type_name
Specifies the type on which the permission is being denied. The scope qualifier (::) is required. If schema_name is
not specified, the default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
TO <database_principal>
Specifies the principal to which the permission is being denied.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
A type is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.
IMPORTANT
GRANT, DENY, and REVOKE permissions do not apply to system types. User-defined types can be granted permissions. For
more information about user-defined types, see Working with User-Defined Types in SQL Server.
The most specific and limited permissions that can be denied on a type are listed in the following table, together
with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the type. If you use the AS clause, the specified principal must own the type on
which permissions are being denied.
Examples
The following example denies VIEW DEFINITION permission with CASCADE on the user-defined type PhoneNumber to
the user KhalidR. PhoneNumber is located in the Telemarketing schema.
DENY VIEW DEFINITION ON TYPE::Telemarketing.PhoneNumber
TO KhalidR CASCADE;
GO
See Also
GRANT Type Permissions (Transact-SQL )
REVOKE Type Permissions (Transact-SQL )
CREATE TYPE (Transact-SQL )
Principals (Database Engine)
Permissions (Database Engine)
Securables
DENY XML Schema Collection Permissions (Transact-
SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Denies permissions on an XML schema collection.
Transact-SQL Syntax Conventions
Syntax
DENY permission [ ,...n ] ON
XML SCHEMA COLLECTION :: [ schema_name . ]
XML_schema_collection_name
TO <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be denied on an XML schema collection. For a list of the permissions, see the
Remarks section later in this topic.
ON XML SCHEMA COLLECTION :: [ schema_name. ] XML_schema_collection_name
Specifies the XML schema collection on which the permission is being denied. The scope qualifier (::) is required. If
schema_name is not specified, the default schema is used. If schema_name is specified, the schema scope qualifier
(.) is required.
TO <database_principal>
Specifies the principal to which the permission is being denied.
CASCADE
Indicates that the permission being denied is also denied to other principals to which it has been granted by this
principal.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to deny the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Information about XML schema collections is visible in the sys.xml_schema_collections catalog view.
An XML schema collection is a schema-level securable contained by the schema that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be denied on an XML schema collection
are listed in the following table, together with the more general permissions that include them by implication.
Permissions
Requires CONTROL on the XML schema collection. If you use the AS option, the specified principal must own the
XML schema collection.
Examples
The following example denies EXECUTE permission on the XML schema collection Invoices4 to the user Wanida .
The XML schema collection Invoices4 is located inside the Sales schema of the AdventureWorks2012 database.
USE AdventureWorks2012;
DENY EXECUTE ON XML SCHEMA COLLECTION::Sales.Invoices4 TO Wanida;
GO
See Also
GRANT XML Schema Collection Permissions (Transact-SQL )
REVOKE XML Schema Collection Permissions (Transact-SQL )
sys.xml_schema_collections (Transact-SQL )
CREATE XML SCHEMA COLLECTION (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
EXECUTE AS (Transact-SQL)
5/3/2018 • 7 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets the execution context of a session.
By default, a session starts when a user logs in and ends when the user logs off. All operations during a session
are subject to permission checks against that user. When an EXECUTE AS statement is run, the execution context
of the session is switched to the specified login or user name. After the context switch, permissions are checked
against the login and user security tokens for that account instead of the person calling the EXECUTE AS
statement. In essence, the user or login account is impersonated for the duration of the session or module
execution, or until the context switch is explicitly reverted.
Transact-SQL Syntax Conventions
Syntax
{ EXEC | EXECUTE } AS <context_specification>
[;]
<context_specification>::=
{ LOGIN | USER } = 'name'
[ WITH { NO REVERT | COOKIE INTO @varbinary_variable } ]
| CALLER
Arguments
LOGIN
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the execution context to be impersonated is a login. The scope of impersonation is at the server level.
NOTE
This option is not available in a contained database or in SQL Database.
USER
Specifies the context to be impersonated is a user in the current database. The scope of impersonation is restricted
to the current database. A context switch to a database user does not inherit the server-level permissions of that
user.
IMPORTANT
While the context switch to the database user is active, any attempt to access resources outside of the database will cause
the statement to fail. This includes USE database statements, distributed queries, and queries that reference another
database that uses three- or four-part identifiers.
NOTE
The cookie OUTPUT parameter is currently documented as varbinary(8000), which is the correct maximum length.
However the current implementation returns varbinary(100). Applications should reserve varbinary(8000) so that the
application continues to operate correctly if the cookie return size increases in a future release.
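The cookie form described above can be sketched as follows; the user name user1 is illustrative, not taken from this topic:

```sql
DECLARE @cookie varbinary(8000);
EXECUTE AS USER = 'user1' WITH COOKIE INTO @cookie;
-- Statements here run in the security context of user1.
REVERT WITH COOKIE = @cookie;
```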
CALLER
When used inside a module, specifies the statements inside the module are executed in the context of the caller of
the module.
When used outside a module, the statement has no action.
Remarks
The change in execution context remains in effect until one of the following occurs:
Another EXECUTE AS statement is run.
A REVERT statement is run.
The session is dropped.
The stored procedure or trigger where the command was executed exits.
You can create an execution context stack by calling the EXECUTE AS statement multiple times across multiple
principals. When called, the REVERT statement switches the context to the login or user in the next level up in the
context stack. For a demonstration of this behavior, see Example A.
Best Practice
Specify a login or user that has the least privileges required to perform the operations in the session. For example,
do not specify a login name with server-level permissions, if only database-level permissions are required; or do
not specify a database owner account unless those permissions are required.
Caution
The EXECUTE AS statement can succeed as long as the Database Engine can resolve the name. If a domain user
exists, Windows might be able to resolve the user for the Database Engine, even though the Windows user does
not have access to SQL Server. This can lead to a condition where a login with no access to SQL Server appears
to be logged in, though the impersonated login would only have the permissions granted to public or guest.
Permissions
To specify EXECUTE AS on a login, the caller must have IMPERSONATE permission on the specified login
name and must not be denied the IMPERSONATE ANY LOGIN permission. To specify EXECUTE AS on a
database user, the caller must have IMPERSONATE permissions on the specified user name. When EXECUTE
AS CALLER is specified, IMPERSONATE permissions are not required.
Examples
A. Using EXECUTE AS and REVERT to switch context
The following example creates a context execution stack using multiple principals. The REVERT statement is then
used to reset the execution context to the previous caller. The REVERT statement is executed multiple times moving
up the stack until the execution context is set to the original caller.
USE AdventureWorks2012;
GO
--Create two temporary principals
CREATE LOGIN login1 WITH PASSWORD = 'J345#$)thb';
CREATE LOGIN login2 WITH PASSWORD = 'Uor80$23b';
GO
CREATE USER user1 FOR LOGIN login1;
CREATE USER user2 FOR LOGIN login2;
GO
--Give IMPERSONATE permissions on user2 to user1
--so that user1 can successfully set the execution context to user2.
GRANT IMPERSONATE ON USER:: user2 TO user1;
GO
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
-- Set the execution context to login1.
EXECUTE AS LOGIN = 'login1';
--Verify the execution context is now login1.
SELECT SUSER_NAME(), USER_NAME();
--Login1 sets the execution context to user2.
EXECUTE AS USER = 'user2';
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
-- The execution context stack now has three principals: the originating caller, login1, and user2.
--The following REVERT statements will reset the execution context to the previous context.
REVERT;
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
REVERT;
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
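After the final REVERT, the session is back in the original caller's context and the temporary principals can be removed. A cleanup sketch:

```sql
-- Remove the temporary principals created for this example.
DROP USER user1;
DROP USER user2;
DROP LOGIN login1;
DROP LOGIN login2;
GO
```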
See Also
REVERT (Transact-SQL)
EXECUTE AS Clause (Transact-SQL)
EXECUTE AS Clause (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
In SQL Server you can define the execution context of the following user-defined modules: functions (except
inline table-valued functions), procedures, queues, and triggers.
By specifying the context in which the module is executed, you can control which user account the Database
Engine uses to validate permissions on objects that are referenced by the module. This provides additional
flexibility and control in managing permissions across the object chain that exists between user-defined modules
and the objects referenced by those modules. Permissions must be granted to users only on the module itself,
without having to grant them explicit permissions on the referenced objects. Only the user that the module is
running as must have permissions on the objects accessed by the module.
Transact-SQL Syntax Conventions
Syntax
-- SQL Server Syntax
Functions (except inline table-valued functions), Stored Procedures, and DML Triggers
{ EXEC | EXECUTE } AS { CALLER | SELF | OWNER | 'user_name' }
Queues
{ EXEC | EXECUTE } AS { SELF | OWNER | 'user_name' }
Arguments
CALLER
Specifies the statements inside the module are executed in the context of the caller of the module. The user
executing the module must have appropriate permissions not only on the module itself, but also on any database
objects that are referenced by the module.
CALLER is the default for all modules except queues, and is the same as SQL Server 2005 behavior.
CALLER cannot be specified in a CREATE QUEUE or ALTER QUEUE statement.
SELF
EXECUTE AS SELF is equivalent to EXECUTE AS user_name, where the specified user is the person creating or
altering the module. The actual user ID of the person creating or modifying the modules is stored in the
execute_as_principal_id column in the sys.sql_modules or sys.service_queues catalog view.
SELF is the default for queues.
NOTE
To change the user ID of the execute_as_principal_id in the sys.service_queues catalog view, you must explicitly specify
the EXECUTE AS setting in the ALTER QUEUE statement.
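For example, a sketch of explicitly resetting a queue's execution context (the queue name ExpenseQueue is illustrative, and this assumes the queue already has activation configured):

```sql
-- Explicitly specify EXECUTE AS so that execute_as_principal_id
-- in sys.service_queues is updated for this queue.
ALTER QUEUE ExpenseQueue
WITH ACTIVATION (EXECUTE AS OWNER);
```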
OWNER
Specifies that the statements inside the module execute in the context of the current owner of the module. If the
module does not have a specified owner, the owner of the schema of the module is used. OWNER cannot be
specified for DDL or logon triggers.
IMPORTANT
OWNER must map to a singleton account and cannot be a role or group.
Remarks
How the Database Engine evaluates permissions on the objects that are referenced in the module depends on
the ownership chain that exists between calling objects and referenced objects. In earlier versions of SQL Server,
ownership chaining was the only method available to avoid having to grant the calling user access to all
referenced objects.
Ownership chaining has the following limitations:
Applies only to DML statements: SELECT, INSERT, UPDATE, and DELETE.
The owners of the calling and the called objects must be the same.
Does not apply to dynamic queries inside the module.
Regardless of the execution context that is specified in the module, the following actions always apply:
When the module is executed, the Database Engine first verifies that the user executing the module has
EXECUTE permission on the module.
Ownership chaining rules continue to apply. This means if the owners of the calling and called objects are
the same, no permissions are checked on the underlying objects.
When a user executes a module that has been specified to run in a context other than CALLER, the user's
permission to execute the module is checked, but additional permissions checks on objects that are
accessed by the module are performed against the user account specified in the EXECUTE AS clause. The
user executing the module is, in effect, impersonating the specified user.
The context specified in the EXECUTE AS clause of the module is valid only for the duration of the
module execution. Context reverts to the caller when the module execution is completed.
IMPORTANT
If the SQL Server (MSSQLSERVER) service is running as a local account (local service or local user account), it will not have
privileges to obtain the group memberships of a Windows domain account that is specified in the EXECUTE AS clause. This
will cause the execution of the module to fail.
Best Practice
Specify a login or user that has the least privileges required to perform the operations defined in the module. For
example, do not specify a database owner account unless those permissions are required.
Permissions
To execute a module specified with EXECUTE AS, the caller must have EXECUTE permissions on the module.
To execute a CLR module specified with EXECUTE AS that accesses resources in another database or server, the
target database or server must trust the authenticator of the database from which the module originates (the
source database).
To specify the EXECUTE AS clause when you create or modify a module, you must have IMPERSONATE
permissions on the specified principal and also permissions to create the module. You can always impersonate
yourself. When no execution context is specified or EXECUTE AS CALLER is specified, IMPERSONATE
permissions are not required.
To specify a login_name or user_name that has implicit access to the database through a Windows group
membership, you must have CONTROL permissions on the database.
Examples
The following example creates a stored procedure in the AdventureWorks2012 database and assigns the
execution context to OWNER.
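A sketch of such a procedure (the procedure name and body are illustrative, not the original example):

```sql
USE AdventureWorks2012;
GO
CREATE PROCEDURE dbo.usp_DemoOwnerContext
WITH EXECUTE AS OWNER
AS
    -- Statements here run in the security context of the module's owner,
    -- so USER_NAME() returns the owner's name rather than the caller's.
    SELECT USER_NAME() AS ExecutionContext;
GO
```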
See Also
sys.assembly_modules (Transact-SQL)
sys.sql_modules (Transact-SQL)
sys.service_queues (Transact-SQL)
REVERT (Transact-SQL)
EXECUTE AS (Transact-SQL)
GRANT (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Grants permissions on a securable to a principal. The general concept is to GRANT <some permission> ON
<some object> TO <some user, login, or group>. For a general discussion of permissions, see Permissions
(Database Engine).
Transact-SQL Syntax Conventions
Syntax
-- Simplified syntax for SQL Server, Azure SQL Database, Azure SQL Data Warehouse, and Parallel Data Warehouse
GRANT
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
TO principal [ ,...n ]
[ WITH GRANT OPTION ]
[;]
<permission> ::=
{ see the tables below }
<class_type> ::=
{
LOGIN
| DATABASE
| OBJECT
| ROLE
| SCHEMA
| USER
}
Arguments
ALL
This option is deprecated and maintained only for backward compatibility. It does not grant all possible
permissions. Granting ALL is equivalent to granting the following permissions:
If the securable is a database, ALL means BACKUP DATABASE, BACKUP LOG, CREATE DATABASE,
CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE,
and CREATE VIEW.
If the securable is a scalar function, ALL means EXECUTE and REFERENCES.
If the securable is a table-valued function, ALL means DELETE, INSERT, REFERENCES, SELECT, and
UPDATE.
If the securable is a stored procedure, ALL means EXECUTE.
If the securable is a table, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
If the securable is a view, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the subtopics
listed below.
column
Specifies the name of a column in a table on which permissions are being granted. The parentheses () are
required.
class
Specifies the class of the securable on which the permission is being granted. The scope qualifier :: is
required.
securable
Specifies the securable on which the permission is being granted.
TO principal
Is the name of a principal. The principals to which permissions on a securable can be granted vary, depending
on the securable. See the subtopics listed below for valid combinations.
GRANT OPTION
Indicates that the grantee will also be given the ability to grant the specified permission to other principals.
AS principal
Use the AS principal clause to indicate that the principal recorded as the grantor of the permission should be
a principal other than the person executing the statement. For example, presume that user Mary is
principal_id 12 and user Raul is principal_id 15. Mary executes
GRANT SELECT ON OBJECT::X TO Steven WITH GRANT OPTION AS Raul; Now the sys.database_permissions catalog view
will indicate that the grantor_principal_id is 15 (Raul) even though the statement was actually executed by
user 12 (Mary).
Using the AS clause is typically not recommended unless you need to explicitly define the permission chain.
For more information, see the Summary of the Permission Check Algorithm section of Permissions
(Database Engine).
The use of AS in this statement does not imply the ability to impersonate another user.
Remarks
The full syntax of the GRANT statement is complex. The syntax diagram above was simplified to draw
attention to its structure. Complete syntax for granting permissions on specific securables is described in the
articles listed below.
The REVOKE statement can be used to remove granted permissions, and the DENY statement can be used to
prevent a principal from gaining a specific permission through a GRANT.
Granting a permission removes DENY or REVOKE of that permission on the specified securable. If the same
permission is denied at a higher scope that contains the securable, the DENY takes precedence. But revoking
the granted permission at a higher scope does not take precedence.
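The interplay between the three statements can be sketched as follows (the table dbo.Region and user GuyB are illustrative):

```sql
GRANT SELECT ON OBJECT::dbo.Region TO GuyB;    -- GuyB can now read dbo.Region
DENY SELECT ON OBJECT::dbo.Region TO GuyB;     -- a DENY at the same scope replaces the grant and blocks access
REVOKE SELECT ON OBJECT::dbo.Region FROM GuyB; -- removes the DENY entry, leaving no permission recorded
```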
Database-level permissions are granted within the scope of the specified database. If a user needs
permissions to objects in another database, create the user account in the other database, or grant the user
account access to the other database, as well as the current database.
Caution
A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the
permissions hierarchy has been preserved for the sake of backward compatibility. It will be removed in a
future release.
The sp_helprotect system stored procedure reports permissions on a database-level securable.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with
GRANT OPTION, or a higher permission that implies the permission being granted. If using the AS option,
additional requirements apply. See the securable-specific article for details.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant
any permission on any securable in the server. Grantees of CONTROL permission on a database, such as
members of the db_owner fixed database role, can grant any permission on any securable in the database.
Grantees of CONTROL permission on a schema can grant any permission on any object within the schema.
Examples
The following table lists the securables and the articles that describe the securable-specific syntax.
Application Role | GRANT Database Principal Permissions (Transact-SQL)
See Also
DENY (Transact-SQL)
REVOKE (Transact-SQL)
sp_addlogin (Transact-SQL)
sp_adduser (Transact-SQL)
sp_changedbowner (Transact-SQL)
sp_dropuser (Transact-SQL)
sp_helprotect (Transact-SQL)
sp_helpuser (Transact-SQL)
GRANT Assembly Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Grants permissions on an assembly.
Transact-SQL Syntax Conventions
Syntax
GRANT { permission [ ,...n ] } ON ASSEMBLY :: assembly_name
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]
Arguments
permission
Specifies a permission that can be granted on an assembly. Listed below.
ON ASSEMBLY ::assembly_name
Specifies the assembly on which the permission is being granted. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
An assembly is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be granted on an assembly are listed below, together with the
more general permissions that include them by implication.
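For instance, a minimal grant (the assembly name HelloWorld and user RolandX are illustrative):

```sql
-- Allow RolandX to reference the assembly from other objects.
GRANT REFERENCES ON ASSEMBLY::HelloWorld TO RolandX;
```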
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
See Also
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
Encryption Hierarchy
GRANT Asymmetric Key Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Grants permissions on an asymmetric key.
Transact-SQL Syntax Conventions
Syntax
GRANT { permission [ ,...n ] }
ON ASYMMETRIC KEY :: asymmetric_key_name
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]
Arguments
permission
Specifies a permission that can be granted on an asymmetric key. Listed below.
ON ASYMMETRIC KEY ::asymmetric_key_name
Specifies the asymmetric key on which the permission is being granted. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
An asymmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on an asymmetric key are listed below,
together with the more general permissions that include them by implication.
ASYMMETRIC KEY PERMISSION | IMPLIED BY ASYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
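A minimal example (the key name PacificSales09 and user ChristinaAcosta are illustrative):

```sql
USE AdventureWorks2012;
-- Grant full control of the asymmetric key to ChristinaAcosta.
GRANT CONTROL ON ASYMMETRIC KEY::PacificSales09 TO ChristinaAcosta;
GO
```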
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
See Also
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
GRANT Availability Group Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2012), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Grants permissions on an Always On availability group.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ] ON AVAILABILITY GROUP :: availability_group_name
TO < server_principal > [ ,...n ]
[ WITH GRANT OPTION ]
[ AS SQL_Server_login ]
<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
Arguments
permission
Specifies a permission that can be granted on an availability group. For a list of the permissions, see the Remarks
section later in this topic.
ON AVAILABILITY GROUP ::availability_group_name
Specifies the availability group on which the permission is being granted. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login to which the permission is being granted.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to grant the
permission.
Remarks
Permissions at the server scope can be granted only when the current database is master.
Information about availability groups is visible in the sys.availability_groups (Transact-SQL) catalog view.
Information about server permissions is visible in the sys.server_permissions catalog view, and information about
server principals is visible in the sys.server_principals catalog view.
An availability group is a server-level securable. The most specific and limited permissions that can be granted on
an availability group are listed in the following table, together with the more general permissions that include
them by implication.
For a chart of all Database Engine permissions, see Database Engine Permission Poster.
Permissions
Requires CONTROL permission on the availability group or ALTER ANY AVAILABILITY GROUP permission on
the server.
Examples
A. Granting VIEW DEFINITION permission on an availability group
The following example grants VIEW DEFINITION permission on availability group MyAg to SQL Server login
ZArifin .
USE master;
GRANT VIEW DEFINITION ON AVAILABILITY GROUP::MyAg TO ZArifin;
GO
B. Granting TAKE OWNERSHIP permission with the GRANT OPTION
The following example grants TAKE OWNERSHIP permission on availability group MyAg to SQL Server login
PKomosinski, together with the right to regrant that permission.
USE master;
GRANT TAKE OWNERSHIP ON AVAILABILITY GROUP::MyAg TO PKomosinski
WITH GRANT OPTION;
GO
C. Granting CONTROL permission on an availability group
The following example grants CONTROL permission on availability group MyAg to SQL Server login PKomosinski.
USE master;
GRANT CONTROL ON AVAILABILITY GROUP::MyAg TO PKomosinski;
GO
See Also
REVOKE Availability Group Permissions (Transact-SQL)
DENY Availability Group Permissions (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
sys.availability_groups (Transact-SQL)
AlwaysOn Availability Groups Catalog Views (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
GRANT Certificate Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Grants permissions on a certificate in SQL Server.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ]
ON CERTIFICATE :: certificate_name
TO principal [ ,...n ] [ WITH GRANT OPTION ]
[ AS granting_principal ]
Arguments
permission
Specifies a permission that can be granted on a certificate. Listed below.
ON CERTIFICATE ::certificate_name
Specifies the certificate on which the permission is being granted. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
A certificate is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be granted on a certificate are listed below, together with the
more general permissions that include them by implication.
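A minimal example (the certificate name Shipping04 and user BillyG are illustrative):

```sql
-- Allow BillyG to see the certificate's metadata.
GRANT VIEW DEFINITION ON CERTIFICATE::Shipping04 TO BillyG;
```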
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
See Also
GRANT (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE APPLICATION ROLE (Transact-SQL )
Encryption Hierarchy
GRANT Database Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Grants permissions on a database in SQL Server.
Transact-SQL Syntax Conventions
Syntax
GRANT <permission> [ ,...n ]
TO <database_principal> [ ,...n ] [ WITH GRANT OPTION ]
[ AS <database_principal> ]
<permission>::=
permission | ALL [ PRIVILEGES ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be granted on a database. For a list of the permissions, see the Remarks section
later in this topic.
ALL
This option does not grant all possible permissions. Granting ALL is equivalent to granting the following
permissions: BACKUP DATABASE, BACKUP LOG, CREATE DATABASE, CREATE DEFAULT, CREATE
FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and CREATE VIEW.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
IMPORTANT
A combination of ALTER and REFERENCE permissions in some cases could allow the grantee to view data or execute
unauthorized functions. For example: A user with ALTER permission on a table and REFERENCE permission on a function can
create a computed column over a function and have it be executed. In this case, the user must also have SELECT permission
on the computed column.
A database is a securable contained by the server that is its parent in the permissions hierarchy. The most specific
and limited permissions that can be granted on a database are listed in the following table, together with the
more general permissions that include them by implication.
DATABASE PERMISSION | IMPLIED BY DATABASE PERMISSION | IMPLIED BY SERVER PERMISSION
ALTER ANY DATABASE EVENT SESSION (Applies to: SQL Database) | ALTER | ALTER ANY EVENT SESSION
CREATE DATABASE DDL EVENT NOTIFICATION | ALTER ANY DATABASE EVENT NOTIFICATION | CREATE DDL EVENT NOTIFICATION
CREATE REMOTE SERVICE BINDING | ALTER ANY REMOTE SERVICE BINDING | CONTROL SERVER
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows Group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Object owners can grant permissions on the objects they own. Principals that have CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server.
Examples
A. Granting permission to create tables
The following example grants CREATE TABLE permission on the AdventureWorks database to user MelanieK.
USE AdventureWorks;
GRANT CREATE TABLE TO MelanieK;
GO
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
B. Granting SHOWPLAN permission to an application role
The following example grants SHOWPLAN permission on the AdventureWorks2012 database to application role AuditMonitor.
USE AdventureWorks2012;
GRANT SHOWPLAN TO AuditMonitor;
GO
C. Granting CREATE VIEW with GRANT OPTION
The following example grants CREATE VIEW permission on the AdventureWorks2012 database to user CarmineEs, with the right to grant CREATE VIEW to other principals.
USE AdventureWorks2012;
GRANT CREATE VIEW TO CarmineEs WITH GRANT OPTION;
GO
See Also
sys.database_permissions (Transact-SQL )
sys.database_principals (Transact-SQL )
CREATE DATABASE (SQL Server Transact-SQL )
GRANT (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
GRANT Database Principal Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a database user, database role, or application role in SQL Server.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ]
ON
{ [ USER :: database_user ]
| [ ROLE :: database_role ]
| [ APPLICATION ROLE :: application_role ]
}
TO <database_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be granted on the database principal. For a list of the permissions, see the
Remarks section later in this topic.
USER ::database_user
Specifies the class and name of the user on which the permission is being granted. The scope qualifier (::) is
required.
ROLE ::database_role
Specifies the class and name of the role on which the permission is being granted. The scope qualifier (::) is
required.
APPLICATION ROLE ::application_role
Specifies the class and name of the application role on which the permission is being granted. The scope qualifier
(::) is required.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal>
Specifies a principal from which the principal executing this query derives its right to grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Information about database principals is visible in the sys.database_principals catalog view. Information about
database-level permissions is visible in the sys.database_permissions catalog view.
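For example, a query joining the two catalog views to list grantees and their database-level permissions (a minimal sketch):

SELECT pr.name AS grantee, pe.permission_name, pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
    ON pe.grantee_principal_id = pr.principal_id;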
DATABASE USER PERMISSION | IMPLIED BY DATABASE USER PERMISSION | IMPLIED BY DATABASE PERMISSION
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows User | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows Group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Principals that have CONTROL permission on a securable can grant permission on that securable.
Grantees of CONTROL permission on a database, such as members of the db_owner fixed database role, can
grant any permission on any securable in the database.
Examples
A. Granting CONTROL permission on a user to another user
The following example grants CONTROL permission on AdventureWorks2012 user Wanida to user RolandX.
USE AdventureWorks2012;
GRANT CONTROL ON USER::Wanida TO RolandX;
GO
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database.
See Also
DENY Database Principal Permissions (Transact-SQL )
REVOKE Database Principal Permissions (Transact-SQL )
sys.database_principals (Transact-SQL )
sys.database_permissions (Transact-SQL )
CREATE USER (Transact-SQL )
CREATE APPLICATION ROLE (Transact-SQL )
CREATE ROLE (Transact-SQL )
GRANT (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
GRANT Database Scoped Credential Permissions
(Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a database scoped credential.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ]
ON DATABASE SCOPED CREDENTIAL :: credential_name
TO database_principal [ ,...n ] [ WITH GRANT OPTION ]
[ AS granting_principal ]
Arguments
permission
Specifies a permission that can be granted on a database scoped credential. For a list of the permissions, see the
Remarks section later in this topic.
ON DATABASE SCOPED CREDENTIAL ::credential_name
Specifies the database scoped credential on which the permission is being granted. The scope qualifier "::" is
required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
A database scoped credential is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be granted on a database scoped
credential are listed below, together with the more general permissions that include them by implication.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
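As an illustration of the syntax above (the credential name AppBackupCred and user MariaL are hypothetical):

GRANT CONTROL
ON DATABASE SCOPED CREDENTIAL :: AppBackupCred
TO MariaL;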
See Also
GRANT (Transact-SQL )
REVOKE Database Scoped Credential (Transact-SQL )
DENY Database Scoped Credential (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
GRANT Endpoint Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on an endpoint.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ] ON ENDPOINT :: endpoint_name
TO < server_principal > [ ,...n ]
[ WITH GRANT OPTION ]
[ AS SQL_Server_login ]
<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
Arguments
permission
Specifies a permission that can be granted on an endpoint. For a list of the permissions, see the Remarks section
later in this topic.
ON ENDPOINT ::endpoint_name
Specifies the endpoint on which the permission is being granted. The scope qualifier (::) is required.
TO <server_principal>
Specifies the SQL Server login to which the permission is being granted.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to grant the
permission.
Remarks
Permissions at the server scope can be granted only when the current database is master.
Information about endpoints is visible in the sys.endpoints catalog view. Information about server permissions is
visible in the sys.server_permissions catalog view, and information about server principals is visible in the
sys.server_principals catalog view.
An endpoint is a server-level securable. The most specific and limited permissions that can be granted on an
endpoint are listed in the following table, together with the more general permissions that include them by
implication.
Permissions
Requires CONTROL permission on the endpoint or ALTER ANY ENDPOINT permission on the server.
Examples
A. Granting VIEW DEFINITION permission on an endpoint
The following example grants VIEW DEFINITION permission on endpoint Mirror7 to SQL Server login ZArifin.
USE master;
GRANT VIEW DEFINITION ON ENDPOINT::Mirror7 TO ZArifin;
GO
B. Granting TAKE OWNERSHIP permission with the GRANT OPTION
The following example grants TAKE OWNERSHIP permission on endpoint Shipping83 to SQL Server login PKomosinski, with the right to grant it to other principals.
USE master;
GRANT TAKE OWNERSHIP ON ENDPOINT::Shipping83 TO PKomosinski
WITH GRANT OPTION;
GO
See Also
DENY Endpoint Permissions (Transact-SQL )
REVOKE Endpoint Permissions (Transact-SQL )
CREATE ENDPOINT (Transact-SQL )
Endpoints Catalog Views (Transact-SQL )
sys.endpoints (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
GRANT Full-Text Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a full-text catalog or full-text stoplist.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ] ON
FULLTEXT
{
CATALOG :: full-text_catalog_name
|
STOPLIST :: full-text_stoplist_name
}
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]
Arguments
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON FULLTEXT CATALOG ::full-text_catalog_name
Specifies the full-text catalog on which the permission is being granted. The scope qualifier :: is required.
ON FULLTEXT STOPLIST ::full-text_stoplist_name
Specifies the full-text stoplist on which the permission is being granted. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
FULLTEXT CATALOG Permissions
A full-text catalog is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a full-text catalog are listed in the
following table, together with the more general permissions that include them by implication.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
Examples
A. Granting permissions to a full-text catalog
The following example grants Ted the CONTROL permission on the full-text catalog ProductCatalog.
GRANT CONTROL
ON FULLTEXT CATALOG :: ProductCatalog
TO Ted;
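B. Granting permissions on a full-text stoplist
The following example grants Mary the VIEW DEFINITION permission on the full-text stoplist ProductStoplist (names follow the pattern of the preceding example).
GRANT VIEW DEFINITION
ON FULLTEXT STOPLIST :: ProductStoplist
TO Mary;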
See Also
CREATE APPLICATION ROLE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE CERTIFICATE (Transact-SQL )
CREATE FULLTEXT CATALOG (Transact-SQL )
CREATE FULLTEXT STOPLIST (Transact-SQL )
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL )
GRANT (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
sys.fn_builtin_permissions (Transact-SQL )
sys.fulltext_catalogs (Transact-SQL )
sys.fulltext_stoplists (Transact-SQL )
GRANT Object Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a table, view, table-valued function, stored procedure, extended stored procedure, scalar
function, aggregate function, service queue, or synonym.
Transact-SQL Syntax Conventions
Syntax
GRANT <permission> [ ,...n ] ON
[ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]
TO <database_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS <database_principal> ]
<permission> ::=
ALL [ PRIVILEGES ] | permission [ ( column [ ,...n ] ) ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be granted on a schema-contained object. For a list of the permissions, see the
Remarks section later in this topic.
ALL
Granting ALL does not grant all possible permissions. Granting ALL is equivalent to granting all ANSI-92
permissions applicable to the specified object. The meaning of ALL varies as follows:
Scalar function permissions: EXECUTE, REFERENCES.
Table-valued function permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
Stored procedure permissions: EXECUTE.
Table permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
View permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
PRIVILEGES
Included for ANSI-92 compliance. Does not change the behavior of ALL.
column
Specifies the name of a column in a table, view, or table-valued function on which the permission is being
granted. The parentheses ( ) are required. Only SELECT, REFERENCES, and UPDATE permissions can be
granted on a column. column can be specified in the permissions clause or after the securable name.
Caution
A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the permissions
hierarchy has been preserved for backward compatibility.
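For example, a column-level grant of SELECT on two columns (the table and user names here are illustrative):

GRANT SELECT (AddressLine1, City) ON Person.Address TO RosaQdM;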
ON [ OBJECT :: ] [ schema_name ] . object_name
Specifies the object on which the permission is being granted. The OBJECT phrase is optional if schema_name is
specified. If the OBJECT phrase is used, the scope qualifier (::) is required. If schema_name is not specified, the
default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
TO <database_principal>
Specifies the principal to which the permission is being granted.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
IMPORTANT
A combination of ALTER and REFERENCE permissions in some cases could allow the grantee to view data or execute
unauthorized functions. For example: A user with ALTER permission on a table and REFERENCE permission on a function
can create a computed column over a function and have it be executed. In this case the user would also need SELECT
permission on the computed column.
Information about objects is visible in various catalog views. For more information, see Object Catalog Views
(Transact-SQL ).
An object is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be granted on an object are listed in the following table,
together with the more general permissions that include them by implication.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows Group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Examples
A. Granting SELECT permission on a table
The following example grants SELECT permission to user RosaQdM on table Person.Address in the
AdventureWorks2012 database.
USE AdventureWorks2012;
GRANT SELECT ON OBJECT::Person.Address TO RosaQdM;
GO
B. Granting EXECUTE permission on a stored procedure
The following example grants EXECUTE permission on stored procedure HumanResources.uspUpdateEmployeeHireInfo to an application role called Recruiting11.
USE AdventureWorks2012;
GRANT EXECUTE ON OBJECT::HumanResources.uspUpdateEmployeeHireInfo
TO Recruiting11;
GO
See Also
DENY Object Permissions (Transact-SQL )
REVOKE Object Permissions (Transact-SQL )
Object Catalog Views (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
Securables
sys.fn_builtin_permissions (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
sys.fn_my_permissions (Transact-SQL )
GRANT Schema Permissions (Transact-SQL)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a schema.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ] ON SCHEMA :: schema_name
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]
Arguments
permission
Specifies a permission that can be granted on a schema. For a list of the permissions, see the Remarks section later
in this topic.
ON SCHEMA :: schema_name
Specifies the schema on which the permission is being granted. The scope qualifier :: is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission. One of
the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
IMPORTANT
A combination of ALTER and REFERENCE permissions in some cases could allow the grantee to view data or execute
unauthorized functions. For example: A user with ALTER permission on a table and REFERENCE permission on a function can
create a computed column over a function and have it be executed. In this case, the user must also have SELECT permission
on the computed column.
A schema is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be granted on a schema are listed below, together with the
more general permissions that include them by implication.
Caution
A user with ALTER permission on a schema can use ownership chaining to access securables in other schemas,
including securables to which that user is explicitly denied access. This is because ownership chaining bypasses
permissions checks on referenced objects when they are owned by the principal that owns the objects that refer to
them. A user with ALTER permission on a schema can create procedures, synonyms, and views that are owned by
the schema's owner. Those objects will have access (via ownership chaining) to information in other schemas
owned by the schema's owner. When possible, you should avoid granting ALTER permission on a schema if the
schema's owner also owns other schemas.
For example, this issue may occur in the following scenarios. These scenarios assume that a user, referred to as U1,
has ALTER permission on the S1 schema. The U1 user is denied access to a table object, referred to as T1, in the
schema S2. The S1 schema and the S2 schema are owned by the same owner.
The U1 user has the CREATE PROCEDURE permission on the database and the EXECUTE permission on the S1
schema. Therefore, the U1 user can create a stored procedure, and then access the denied object T1 in the stored
procedure.
The U1 user has the CREATE SYNONYM permission on the database and the SELECT permission on the S1
schema. Therefore, the U1 user can create a synonym in the S1 schema for the denied object T1, and then access
the denied object T1 by using the synonym.
The U1 user has the CREATE VIEW permission on the database and the SELECT permission on the S1 schema.
Therefore, the U1 user can create a view in the S1 schema to query data from the denied object T1, and then
access the denied object T1 by using the view.
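The synonym scenario above can be sketched as follows (the synonym name is illustrative; the statements run as U1, who is denied access to S2.T1 but has the required permissions on S1):

-- The synonym is created in S1, so it is owned by S1's owner,
-- who also owns S2.T1; ownership chaining then skips the permission
-- check on the underlying table.
CREATE SYNONYM S1.T1_Alias FOR S2.T1;
SELECT * FROM S1.T1_Alias;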
For more information, see Microsoft Knowledge Base article 914847.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group | Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key | Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal | IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
Examples
A. Granting INSERT permission on schema HumanResources to guest
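A statement matching this heading, granting INSERT permission on the HumanResources schema to the guest user, would be:

```sql
-- Grant INSERT on every current and future object in the HumanResources schema
GRANT INSERT ON SCHEMA :: HumanResources TO guest;
```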
See Also
DENY Schema Permissions (Transact-SQL )
REVOKE Schema Permissions (Transact-SQL )
GRANT (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE APPLICATION ROLE (Transact-SQL )
Encryption Hierarchy
sys.fn_builtin_permissions (Transact-SQL )
sys.fn_my_permissions (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
GRANT Search Property List Permissions (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a search property list.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ] ON
SEARCH PROPERTY LIST :: search_property_list_name
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]
Arguments
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON SEARCH PROPERTY LIST ::search_property_list_name
Specifies the search property list on which the permission is being granted. The scope qualifier :: is required.
To view the existing search property lists
sys.registered_search_property_lists (Transact-SQL )
database_principal
Specifies the principal to which the permission is being granted. The principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission.
The principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
SEARCH PROPERTY LIST Permissions
A search property list is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a search property list are listed in the
following table, together with the more general permissions that include them by implication.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, the following additional requirements apply.
Database user mapped to a Windows login: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group: Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
Examples
Granting permissions to a search property list
The following example grants Mary the VIEW DEFINITION permission on the search property list
DocumentTablePropertyList .
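A statement matching this description, assuming the search property list exists in the current database, would be:

```sql
-- Grant VIEW DEFINITION on the search property list to the database user Mary
GRANT VIEW DEFINITION ON SEARCH PROPERTY LIST :: DocumentTablePropertyList TO Mary;
GO
```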
See Also
CREATE APPLICATION ROLE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE CERTIFICATE (Transact-SQL )
CREATE SEARCH PROPERTY LIST (Transact-SQL )
DENY Search Property List Permissions (Transact-SQL )
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL )
GRANT (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
Principals (Database Engine)
REVOKE Search Property List Permissions (Transact-SQL )
sys.fn_builtin_permissions (Transact-SQL )
sys.registered_search_property_lists (Transact-SQL )
Search Document Properties with Search Property Lists
GRANT Server Permissions (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a server.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ]
TO <grantee_principal> [ ,...n ] [ WITH GRANT OPTION ]
[ AS <grantor_principal> ]
Arguments
permission
Specifies a permission that can be granted on a server. For a list of the permissions, see the Remarks section later
in this topic.
TO <grantee_principal>
Specifies the principal to which the permission is being granted.
AS <grantor_principal>
Specifies the principal from which the principal executing this query derives its right to grant the permission.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
SQL_Server_login
Specifies a SQL Server login.
SQL_Server_login_mapped_to_Windows_login
Specifies a SQL Server login mapped to a Windows login.
SQL_Server_login_mapped_to_Windows_group
Specifies a SQL Server login mapped to a Windows group.
SQL_Server_login_mapped_to_certificate
Specifies a SQL Server login mapped to a certificate.
SQL_Server_login_mapped_to_asymmetric_key
Specifies a SQL Server login mapped to an asymmetric key.
server_role
Specifies a user-defined server role.
Remarks
Permissions at the server scope can be granted only when the current database is master.
Information about server permissions can be viewed in the sys.server_permissions catalog view, and information
about server principals can be viewed in the sys.server_principals catalog view. Information about membership of
server roles can be viewed in the sys.server_role_members catalog view.
A server is the highest level of the permissions hierarchy. The most specific and limited permissions that can be
granted on a server are listed in the following table.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION or a higher permission that implies the permission being granted. Members of the sysadmin fixed server
role can grant any permission.
Examples
A. Granting a permission to a login
The following example grants CONTROL SERVER permission to the SQL Server login TerryEminhizer .
USE master;
GRANT CONTROL SERVER TO TerryEminhizer;
GO
B. Granting a permission with GRANT OPTION
The following example grants ALTER ANY EVENT NOTIFICATION permission to the SQL Server login JanethEsteves, with the right to grant that permission to other principals.
USE master;
GRANT ALTER ANY EVENT NOTIFICATION TO JanethEsteves WITH GRANT OPTION;
GO
See Also
GRANT (Transact-SQL )
DENY (Transact-SQL )
DENY Server Permissions (Transact-SQL )
REVOKE Server Permissions (Transact-SQL )
Permissions Hierarchy (Database Engine)
Principals (Database Engine)
Permissions (Database Engine)
sys.fn_builtin_permissions (Transact-SQL )
sys.fn_my_permissions (Transact-SQL )
HAS_PERMS_BY_NAME (Transact-SQL )
GRANT Server Principal Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a SQL Server login.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ]
ON
{ [ LOGIN :: SQL_Server_login ]
| [ SERVER ROLE :: server_role ] }
TO <server_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS SQL_Server_login ]
<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
| server_role
Arguments
permission
Specifies a permission that can be granted on a SQL Server login. For a list of the permissions, see the Remarks
section later in this topic.
LOGIN :: SQL_Server_login
Specifies the SQL Server login on which the permission is being granted. The scope qualifier (::) is required.
SERVER ROLE :: server_role
Specifies the user-defined server role on which the permission is being granted. The scope qualifier (::) is required.
TO <server_principal> Specifies the SQL Server login or server role to which the permission is being granted.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
server_role
Specifies the name of a user-defined server role.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to grant the
permission.
Remarks
Permissions at the server scope can be granted only when the current database is master.
Information about server permissions is visible in the sys.server_permissions catalog view. Information about
server principals is visible in the sys.server_principals catalog view.
SQL Server logins and server roles are server-level securables. The most specific and limited permissions that can
be granted on a SQL Server login or server role are listed in the following table, together with the more general
permissions that include them by implication.
SQL SERVER LOGIN OR SERVER ROLE PERMISSION | IMPLIED BY SQL SERVER LOGIN OR SERVER ROLE PERMISSION | IMPLIED BY SERVER PERMISSION
Permissions
For logins, requires CONTROL permission on the login or ALTER ANY LOGIN permission on the server.
For server roles, requires CONTROL permission on the server role or ALTER ANY SERVER ROLE permission on
the server.
Examples
A. Granting IMPERSONATE permission on a login
The following example grants IMPERSONATE permission on the SQL Server login WanidaBenshoof to a SQL Server
login created from the Windows user AdvWorks\YoonM .
USE master;
GRANT IMPERSONATE ON LOGIN::WanidaBenshoof TO [AdvWorks\YoonM];
GO
B. Granting VIEW DEFINITION permission on a server role
The following example grants VIEW DEFINITION permission on the user-defined server role Sales to Auditors.
USE master;
GRANT VIEW DEFINITION ON SERVER ROLE::Sales TO Auditors;
GO
See Also
sys.server_principals (Transact-SQL )
sys.server_permissions (Transact-SQL )
CREATE LOGIN (Transact-SQL )
Principals (Database Engine)
Permissions (Database Engine)
Security Functions (Transact-SQL )
Security Stored Procedures (Transact-SQL )
GRANT Service Broker Permissions (Transact-SQL)
5/4/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a Service Broker contract, message type, remote binding, route, or service.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ] ON
{
[ CONTRACT :: contract_name ]
| [ MESSAGE TYPE :: message_type_name ]
| [ REMOTE SERVICE BINDING :: remote_binding_name ]
| [ ROUTE :: route_name ]
| [ SERVICE :: service_name ]
}
TO database_principal [ ,...n ]
[ WITH GRANT OPTION ]
[ AS granting_principal ]
Arguments
permission
Specifies a permission that can be granted on a Service Broker securable. The valid permissions are listed in the Remarks section later in this topic.
CONTRACT ::contract_name
Specifies the contract on which the permission is being granted. The scope qualifier "::" is required.
MESSAGE TYPE ::message_type_name
Specifies the message type on which the permission is being granted. The scope qualifier "::" is required.
REMOTE SERVICE BINDING ::remote_binding_name
Specifies the remote service binding on which the permission is being granted. The scope qualifier "::" is required.
ROUTE ::route_name
Specifies the route on which the permission is being granted. The scope qualifier "::" is required.
SERVICE ::service_name
Specifies the service on which the permission is being granted. The scope qualifier "::" is required.
database_principal
Specifies the principal to which the permission is being granted. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other
principals.
granting_principal
Specifies a principal from which the principal executing this query derives its right to grant the permission.
One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
Service Broker Contracts
A Service Broker contract is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be granted on a Service Broker
contract are listed below, together with the more general permissions that include them by implication.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If using the AS option, these additional requirements apply.
Database user mapped to a Windows login: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group: Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Object owners can grant permissions on the objects they own. Principals with CONTROL permission on a
securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database. Grantees of
CONTROL permission on a schema can grant any permission on any object within the schema.
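Examples
As a sketch, the following statements grant SEND permission on a service and REFERENCES permission on a contract. The names ExpenseService, ExpenseContract, and OlaH are placeholders for illustration, not objects from an actual sample database.

```sql
-- Placeholder names: ExpenseService, ExpenseContract, and OlaH are illustrative.
-- Allow the user OlaH to send messages to the service ExpenseService:
GRANT SEND ON SERVICE::ExpenseService TO OlaH;
-- Allow OlaH to reference the contract ExpenseContract, for example in CREATE SERVICE:
GRANT REFERENCES ON CONTRACT::ExpenseContract TO OlaH;
GO
```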
See Also
SQL Server Service Broker
GRANT (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
GRANT Symmetric Key Permissions (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a symmetric key.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ]
ON SYMMETRIC KEY :: symmetric_key_name
TO <database_principal> [ ,...n ] [ WITH GRANT OPTION ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be granted on a symmetric key. For a list of the permissions, see the Remarks
section later in this topic.
ON SYMMETRIC KEY ::symmetric_key_name
Specifies the symmetric key on which the permission is being granted. The scope qualifier (::) is required.
TO <database_principal>
Specifies the principal to which the permission is being granted.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Information about symmetric keys is visible in the sys.symmetric_keys catalog view.
A symmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be granted on a symmetric key are listed in the
following table, together with the more general permissions that include them by implication.
SYMMETRIC KEY PERMISSION | IMPLIED BY SYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group: Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Principals with CONTROL permission on a securable can grant permission on that securable.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can grant any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members of
the db_owner fixed database role, can grant any permission on any securable in the database.
Examples
The following example grants ALTER permission on the symmetric key SamInventory42 to the database user
HamidS .
USE AdventureWorks2012;
GRANT ALTER ON SYMMETRIC KEY::SamInventory42 TO HamidS;
GO
See Also
sys.symmetric_keys (Transact-SQL )
DENY Symmetric Key Permissions (Transact-SQL )
REVOKE Symmetric Key Permissions (Transact-SQL )
CREATE SYMMETRIC KEY (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
GRANT System Object Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on system objects such as system stored procedures, extended stored procedures, functions,
and views.
Transact-SQL Syntax Conventions
Syntax
GRANT { SELECT | EXECUTE } ON [ sys.]system_object TO principal
Arguments
[ sys. ]
The sys qualifier is required only when you are referring to catalog views and dynamic management views.
system_object
Specifies the object on which permission is being granted.
principal
Specifies the principal to which the permission is being granted.
Remarks
This statement can be used to grant permissions on certain stored procedures, extended stored procedures, table-
valued functions, scalar functions, views, catalog views, compatibility views, INFORMATION_SCHEMA views,
dynamic management views, and system tables that are installed by SQL Server. Each of these system objects
exists as a unique record in the resource database of the server (mssqlsystemresource). The resource database is
read-only. A link to the object is exposed as a record in the sys schema of every database. Permission to execute or
select a system object can be granted, denied, and revoked.
Granting permission to execute or select an object does not necessarily convey all the permissions required to use
the object. Most objects perform operations for which additional permissions are required. For example, a user
that is granted EXECUTE permission on sp_addlinkedserver cannot create a linked server unless the user is also a
member of the sysadmin fixed server role.
Default name resolution resolves unqualified procedure names to the resource database. Therefore, the sys
qualifier is only required when you are specifying catalog views and dynamic management views.
Granting permissions on triggers and on columns of system objects is not supported.
Permissions on system objects will be preserved during upgrades of SQL Server.
System objects are visible in the sys.system_objects catalog view. The permissions on system objects are visible in
the sys.database_permissions catalog view in the master database.
The following query returns information about permissions of system objects:
SELECT * FROM master.sys.database_permissions AS dp
JOIN sys.system_objects AS so
ON dp.major_id = so.object_id
WHERE dp.class = 1 AND so.parent_object_id = 0;
GO
Permissions
Requires CONTROL SERVER permission.
Examples
A. Granting SELECT permission on a view
The following example grants the SQL Server login Sylvester1 permission to select a view that lists SQL Server
logins. The example then grants the additional permission that is required to view metadata on SQL Server logins
that are not owned by the user.
USE AdventureWorks2012;
GRANT SELECT ON sys.sql_logins TO Sylvester1;
GRANT VIEW SERVER STATE to Sylvester1;
GO
See Also
sys.system_objects (Transact-SQL )
sys.database_permissions (Transact-SQL )
REVOKE System Object Permissions (Transact-SQL )
DENY System Object Permissions (Transact-SQL )
GRANT Type Permissions (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on a type.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ] ON TYPE :: [ schema_name . ] type_name
TO <database_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be granted on a type. For a list of the permissions, see the Remarks section later in
this topic.
ON TYPE :: [ schema_name. ] type_name
Specifies the type on which the permission is being granted. The scope qualifier (::) is required. If schema_name is
not specified, the default schema will be used. If schema_name is specified, the schema scope qualifier (.) is
required.
TO <database_principal> Specifies the principal to which the permission is being granted.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
A type is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.
IMPORTANT
GRANT, DENY, and REVOKE permissions do not apply to system types. User-defined types can be granted permissions. For
more information about user-defined types, see Working with User-Defined Types in SQL Server.
The most specific and limited permissions that can be granted on a type are listed in the following table, together
with the more general permissions that include them by implication.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.
AS GRANTING_PRINCIPAL | ADDITIONAL PERMISSION REQUIRED
Database user mapped to a Windows login: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group: Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Examples
The following example grants VIEW DEFINITION permission with GRANT OPTION on the user-defined type
PhoneNumber to user KhalidR . PhoneNumber is located in the schema Telemarketing .
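A statement matching this description would be:

```sql
-- Grant VIEW DEFINITION on the user-defined type, with the right to regrant it
GRANT VIEW DEFINITION ON TYPE::Telemarketing.PhoneNumber TO KhalidR WITH GRANT OPTION;
GO
```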
See Also
DENY Type Permissions (Transact-SQL )
REVOKE Type Permissions (Transact-SQL )
CREATE TYPE (Transact-SQL )
Permissions (Database Engine)
Securables
Principals (Database Engine)
GRANT XML Schema Collection Permissions
(Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Grants permissions on an XML schema collection.
Transact-SQL Syntax Conventions
Syntax
GRANT permission [ ,...n ] ON
XML SCHEMA COLLECTION :: [ schema_name . ]
XML_schema_collection_name
TO <database_principal> [ ,...n ]
[ WITH GRANT OPTION ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be granted on an XML schema collection. For a list of the permissions, see the
Remarks section later in this topic.
ON XML SCHEMA COLLECTION :: [ schema_name. ] XML_schema_collection_name
Specifies the XML schema collection on which the permission is being granted. The scope qualifier (::) is required.
If schema_name is not specified, the default schema will be used. If schema_name is specified, the schema scope
qualifier (.) is required.
<database_principal> Specifies the principal to which the permission is being granted.
WITH GRANT OPTION
Indicates that the principal will also be given the ability to grant the specified permission to other principals.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
grant the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Information about XML schema collections is visible in the sys.xml_schema_collections catalog view.
An XML schema collection is a schema-level securable contained by the schema that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be granted on an XML schema
collection are listed in the following table, together with the more general permissions that include them by
implication.
Permissions
The grantor (or the principal specified with the AS option) must have either the permission itself with GRANT
OPTION, or a higher permission that implies the permission being granted.
If you are using the AS option, the following additional requirements apply.
Database user mapped to a Windows login: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a Windows group: Membership in the Windows group, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to a certificate: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user mapped to an asymmetric key: Membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Database user not mapped to any server principal: IMPERSONATE permission on the user, membership in the db_securityadmin fixed database role, membership in the db_owner fixed database role, or membership in the sysadmin fixed server role.
Examples
The following example grants EXECUTE permission on the XML schema collection Invoices4 to the user Wanida.
The XML schema collection Invoices4 is located inside the Sales schema of the AdventureWorks2012 database.
USE AdventureWorks2012;
GRANT EXECUTE ON XML SCHEMA COLLECTION::Sales.Invoices4 TO Wanida;
GO
See Also
DENY XML Schema Collection Permissions (Transact-SQL )
REVOKE XML Schema Collection Permissions (Transact-SQL )
sys.xml_schema_collections (Transact-SQL )
CREATE XML SCHEMA COLLECTION (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
OPEN MASTER KEY (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Opens the Database Master Key of the current database.
Transact-SQL Syntax Conventions
Syntax
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'password'
Arguments
'password'
The password with which the Database Master Key was encrypted.
Remarks
If the database master key was encrypted with the service master key, it will be automatically opened when it is
needed for decryption or encryption. In this case, it is not necessary to use the OPEN MASTER KEY statement.
When a database is first attached or restored to a new instance of SQL Server, a copy of the database master key
(encrypted by the service master key) is not yet stored in the server. You must use the OPEN MASTER KEY
statement to decrypt the database master key (DMK). Once the DMK has been decrypted, you have the option
of enabling automatic decryption in the future by using the ALTER MASTER KEY REGENERATE statement to
provision the server with a copy of the DMK, encrypted with the service master key (SMK). When a database
has been upgraded from an earlier version, the DMK should be regenerated to use the newer AES algorithm.
For more information about regenerating the DMK, see ALTER MASTER KEY (Transact-SQL ). The time required
to regenerate the DMK key to upgrade to AES depends upon the number of objects protected by the DMK.
Regenerating the DMK key to upgrade to AES is only necessary once, and has no impact on future
regenerations as part of a key rotation strategy.
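Once the DMK is open, re-enabling automatic decryption amounts to re-encrypting it with the SMK. A minimal sketch of that provisioning step follows; the database name and password are placeholders, and ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY is one common way to accomplish what the paragraph describes:

```sql
-- Sketch: after restoring a database to a new instance, open the DMK with its
-- password, then store a copy encrypted by the new instance's SMK so the key
-- opens automatically from then on. Names and password are hypothetical.
USE AdventureWorks2012;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<DMK password>';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;
GO
```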
You can exclude the Database Master Key of a specific database from automatic key management by using the
ALTER MASTER KEY statement with the DROP ENCRYPTION BY SERVICE MASTER KEY option. Afterward,
you must explicitly open the Database Master Key with a password.
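Excluding the key from automatic key management can be sketched as follows; the database name and password are hypothetical placeholders:

```sql
-- Sketch: remove the SMK-encrypted copy of the DMK so that automatic opening
-- no longer works; afterward, every use requires an explicit OPEN.
USE AdventureWorks2012;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<DMK password>';
ALTER MASTER KEY DROP ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;
GO
-- From now on the key must be opened explicitly before use:
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<DMK password>';
```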
If a transaction in which the Database Master Key was explicitly opened is rolled back, the key will remain open.
Permissions
Requires CONTROL permission on the database.
Examples
The following example opens the Database Master Key of the AdventureWorks2012 database, which has been
encrypted with a password.
USE AdventureWorks2012;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '43987hkhj4325tsku7';
GO
USE master;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '43987hkhj4325tsku7';
GO
CLOSE MASTER KEY;
GO
See Also
CREATE MASTER KEY (Transact-SQL )
CLOSE MASTER KEY (Transact-SQL )
BACKUP MASTER KEY (Transact-SQL )
RESTORE MASTER KEY (Transact-SQL )
ALTER MASTER KEY (Transact-SQL )
DROP MASTER KEY (Transact-SQL )
Encryption Hierarchy
OPEN SYMMETRIC KEY (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Decrypts a symmetric key and makes it available for use.
Transact-SQL Syntax Conventions
Syntax
OPEN SYMMETRIC KEY Key_name DECRYPTION BY <decryption_mechanism>
<decryption_mechanism> ::=
CERTIFICATE certificate_name [ WITH PASSWORD = 'password' ]
|
ASYMMETRIC KEY asym_key_name [ WITH PASSWORD = 'password' ]
|
SYMMETRIC KEY decrypting_Key_name
|
PASSWORD = 'decryption_password'
Arguments
Key_name
Is the name of the symmetric key to be opened.
CERTIFICATE certificate_name
Is the name of a certificate whose private key will be used to decrypt the symmetric key.
ASYMMETRIC KEY asym_key_name
Is the name of an asymmetric key whose private key will be used to decrypt the symmetric key.
WITH PASSWORD ='password'
Is the password that was used to encrypt the private key of the certificate or asymmetric key.
SYMMETRIC KEY decrypting_key_name
Is the name of a symmetric key that will be used to decrypt the symmetric key that is being opened.
PASSWORD ='password'
Is the password that was used to protect the symmetric key.
Remarks
Open symmetric keys are bound to the session, not to the security context. An open key will continue to be
available until it is either explicitly closed or the session is terminated. If you open a symmetric key and then switch
context, the key will remain open and be available in the impersonated context. Information about open symmetric
keys is visible in the sys.openkeys (Transact-SQL ) catalog view.
If the symmetric key was encrypted with another key, that key must be opened first.
If the symmetric key is already open, the query is a NO_OP.
If the password, certificate, or key supplied to decrypt the symmetric key is incorrect, the query will fail.
Symmetric keys created from encryption providers cannot be opened. Encryption and decryption operations using
this kind of symmetric key succeed without the OPEN statement because the Encryption Provider is opening and
closing the key.
Permissions
The caller must have some permission on the key and must not have been denied VIEW DEFINITION permission
on the key. Additional requirements vary, depending on the decryption mechanism:
DECRYPTION BY CERTIFICATE: CONTROL permission on the certificate and knowledge of the password
that encrypts its private key.
DECRYPTION BY ASYMMETRIC KEY: CONTROL permission on the asymmetric key and knowledge of
the password that encrypts its private key.
DECRYPTION BY PASSWORD: knowledge of one of the passwords that is used to encrypt the symmetric
key.
Examples
A. Opening a symmetric key by using a certificate
The following example opens the symmetric key SymKeyMarketing3 and decrypts it by using the private key of certificate MarketingCert9.
USE AdventureWorks2012;
OPEN SYMMETRIC KEY SymKeyMarketing3
DECRYPTION BY CERTIFICATE MarketingCert9;
GO
B. Decrypting one symmetric key with another symmetric key
The following example opens the symmetric key MarketingKey11 and decrypts it by using the symmetric key HarnpadoungsatayaSE3, which is itself decrypted by the certificate sariyaCert01.
USE AdventureWorks2012;
-- First open the symmetric key that you want for decryption.
OPEN SYMMETRIC KEY HarnpadoungsatayaSE3
DECRYPTION BY CERTIFICATE sariyaCert01;
-- Use the key that is already open to decrypt MarketingKey11.
OPEN SYMMETRIC KEY MarketingKey11
DECRYPTION BY SYMMETRIC KEY HarnpadoungsatayaSE3;
GO
See Also
CREATE SYMMETRIC KEY (Transact-SQL )
ALTER SYMMETRIC KEY (Transact-SQL )
CLOSE SYMMETRIC KEY (Transact-SQL )
DROP SYMMETRIC KEY (Transact-SQL )
Encryption Hierarchy
Extensible Key Management (EKM )
Permissions: GRANT, DENY, REVOKE (Azure SQL
Data Warehouse, Parallel Data Warehouse)
5/4/2018 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
In SQL Data Warehouse and Parallel Data Warehouse, use the GRANT and DENY statements to grant or deny a
permission (such as UPDATE) on a securable (such as a database, table, view, etc.) to a security principal (a login, a
database user, or a database role). Use REVOKE to remove the grant or deny of a permission.
Server level permissions are applied to logins. Database level permissions are applied to database users and
database roles.
To see what permissions have been granted and denied, query the sys.server_permissions and
sys.database_permissions views. Permissions that are not explicitly granted or denied to a security principal can be
inherited by having membership in a role that has permissions. The permissions of the fixed database roles cannot
be changed and do not appear in the sys.server_permissions and sys.database_permissions views.
GRANT explicitly grants one or more permissions.
DENY explicitly denies the principal from having one or more permissions.
REVOKE removes existing GRANT or DENY permissions.
Transact-SQL Syntax Conventions (Transact-SQL )
Syntax
-- Azure SQL Data Warehouse and Parallel Data Warehouse
GRANT
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
TO principal [ ,...n ]
[ WITH GRANT OPTION ]
[;]
DENY
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
TO principal [ ,...n ]
[ CASCADE ]
[;]
REVOKE
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
[ FROM | TO ] principal [ ,...n ]
[ CASCADE ]
[;]
<permission> ::=
{ see the tables below }
<class_type> ::=
{
LOGIN
| DATABASE
| OBJECT
| ROLE
| SCHEMA
| USER
}
Arguments
<permission>[ ,...n ]
One or more permissions to grant, deny, or revoke.
ON [ <class_type> :: ] securable The ON clause describes the securable parameter on which to grant, deny, or
revoke permissions.
<class_type> The class type of the securable. This can be LOGIN, DATABASE, OBJECT, SCHEMA, ROLE, or
USER. Permissions can also be granted at the SERVER class_type, but SERVER is not specified for those
permissions. DATABASE is not specified when the permission includes the word DATABASE (for example, ALTER
ANY DATABASE). When no class_type is specified and the permission type is not restricted to the server or
database class, the class is assumed to be OBJECT.
securable
The name of the login, database, table, view, schema, procedure, role, or user on which to grant, deny, or revoke
permissions. The object name can be specified with the three-part naming rules that are described in Transact-SQL
Syntax Conventions (Transact-SQL ).
TO principal [ ,...n ]
One or more principals being granted, denied, or revoked permissions. Principal is the name of a login, database
user, or database role.
FROM principal [ ,...n ]
One or more principals to revoke permissions from. Principal is the name of a login, database user, or database
role. FROM can only be used with a REVOKE statement. TO can be used with GRANT, DENY, or REVOKE.
WITH GRANT OPTION
Indicates that the grantee will also be given the ability to grant the specified permission to other principals.
CASCADE
Indicates that the permission is denied to or revoked from the specified principal and from all other principals to
which the principal granted the permission. Required when the principal has the permission with GRANT OPTION.
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked. This is required when you are using the
CASCADE argument.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
Permissions
To grant a permission, the grantor must have either the permission itself with the WITH GRANT OPTION, or
must have a higher permission that implies the permission being granted. Object owners can grant permissions on
the objects they own. Principals with CONTROL permission on a securable can grant permission on that securable.
Members of the db_owner and db_securityadmin fixed database roles can grant any permission in the database.
General Remarks
Denying or revoking permissions to a principal will not affect requests that have passed authorization and are
currently running. To restrict access immediately, you must cancel active requests or kill current sessions.
NOTE
Most fixed server roles are not available in this release. Use user-defined database roles instead. Logins cannot be added to
the sysadmin fixed server role. Granting the CONTROL SERVER permission approximates membership in the sysadmin
fixed server role.
Some statements require multiple permissions. For example, creating a table requires the CREATE TABLE
permission in the database, and the ALTER SCHEMA permission for the schema that will contain the table.
PDW sometimes executes stored procedures to distribute user actions to the compute nodes. Therefore, the
execute permission for an entire database cannot be denied. (For example,
DENY EXECUTE ON DATABASE::<name> TO <user>; will fail.) As a workaround, deny the execute permission on user
schemas or specific objects (procedures).
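The workaround can be sketched as follows; the schema, procedure, and user names are hypothetical:

```sql
-- Instead of DENY EXECUTE ON DATABASE (which fails in PDW), deny execute at
-- the schema level or on individual procedures.
DENY EXECUTE ON SCHEMA::dbo TO SalesAnalyst;
DENY EXECUTE ON OBJECT::dbo.usp_LoadFacts TO SalesAnalyst;
```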
Implicit and Explicit Permissions
An explicit permission is a GRANT or DENY permission given to a principal by a GRANT or DENY statement.
An implicit permission is a GRANT or DENY permission that a principal (login, user, or database role) has
inherited from another database role.
An implicit permission can also be inherited from a covering or parent permission. For example, UPDATE
permission on a table can be inherited by having UPDATE permission on the schema that contains the table, or
CONTROL permission on the table.
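The schema-level case can be sketched as follows; the schema, table, and user names are hypothetical:

```sql
-- Granting UPDATE on a schema implicitly covers every table in that schema,
-- so no table-level grant is needed.
GRANT UPDATE ON SCHEMA::Sales TO SalesAnalyst;
-- SalesAnalyst can now run: UPDATE Sales.Orders SET ... ;
```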
Ownership Chaining
When multiple database objects access each other sequentially, the sequence is known as a chain. Although such
chains do not independently exist, when SQL Server traverses the links in a chain, SQL Server evaluates
permissions on the constituent objects differently than it would if it were accessing the objects separately.
Ownership chaining has important implications for managing security. For more information about ownership
chains, see Ownership Chains and Tutorial: Ownership Chains and Context Switching.
Permission List
Server Level Permissions
Server level permissions can be granted, denied, and revoked from logins.
Permissions that apply to servers
CONTROL SERVER
ADMINISTER BULK OPERATIONS
ALTER ANY CONNECTION
ALTER ANY DATABASE
CREATE ANY DATABASE
ALTER ANY EXTERNAL DATA SOURCE
ALTER ANY EXTERNAL FILE FORMAT
ALTER ANY LOGIN
ALTER SERVER STATE
CONNECT SQL
VIEW ANY DEFINITION
VIEW ANY DATABASE
VIEW SERVER STATE
Permissions that apply to logins
CONTROL ON LOGIN
ALTER ON LOGIN
IMPERSONATE ON LOGIN
VIEW DEFINITION
Database Level Permissions
Database level permissions can be granted, denied, and revoked from database users and user-defined database
roles.
Permissions that apply to all database classes
CONTROL
ALTER
VIEW DEFINITION
Permissions that apply to all database classes except users
TAKE OWNERSHIP
Permissions that apply only to databases
ALTER ANY DATABASE
ALTER ON DATABASE
ALTER ANY DATASPACE
ALTER ANY ROLE
ALTER ANY SCHEMA
ALTER ANY USER
BACKUP DATABASE
CONNECT ON DATABASE
CREATE PROCEDURE
CREATE ROLE
CREATE SCHEMA
CREATE TABLE
CREATE VIEW
SHOWPLAN
Permissions that apply only to users
IMPERSONATE
Permissions that apply to databases, schemas, and objects
ALTER
DELETE
EXECUTE
INSERT
SELECT
UPDATE
REFERENCES
For a definition of each type of permission, see Permissions (Database Engine).
Chart of Permissions
All permissions are graphically represented on this poster. This is the easiest way to see the nested hierarchy of
permissions. For example, the ALTER ON LOGIN permission can be granted by itself, but it is also included if a
login is granted the CONTROL permission on that login, or if a login is granted the ALTER ANY LOGIN
permission.
To download a full-size version of this poster, see SQL Server PDW Permissions in the files section of the APS
Yammer site (or request it by e-mail from apsdoc@microsoft.com).
Default Permissions
The following list describes the default permissions:
When a login is created by using the CREATE LOGIN statement the new login receives the CONNECT
SQL permission.
All logins are members of the public server role and cannot be removed from public.
When a database user is created by using the CREATE USER statement, the database user receives the
CONNECT permission in the database.
All principals, including the public role, have no explicit or implicit permissions by default.
When a login or user becomes the owner of a database or object, the login or user always has all
permissions on the database or object. The ownership permissions cannot be changed and are not visible as
explicit permissions. The GRANT, DENY, and REVOKE statements have no effect on owners.
The sa login has all permissions on the appliance. Similar to ownership permissions, the sa permissions
cannot be changed and are not visible as explicit permissions. The GRANT, DENY, and REVOKE
statements have no effect on the sa login. The sa login cannot be renamed.
The USE statement does not require permissions. All principals can run the USE statement on any
database.
The following DENY statement prevents Yuen from selecting data from any table or view in the dbo schema. Yuen
cannot read the data even if he has permission in some other way, such as through a role membership.
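The statement itself did not survive extraction; a sketch of what it would look like, using the dbo schema and the user Yuen named in the text:

```sql
-- Deny SELECT on every table and view in the dbo schema to Yuen.
DENY SELECT ON SCHEMA::dbo TO Yuen;
```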
The following REVOKE statement removes the DENY permission. Now Yuen's explicit permissions are neutral.
Yuen might be able to select data from any table through some other implicit permission such as a role
membership.
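The corresponding REVOKE statement, again as a sketch against the dbo schema named in the text:

```sql
-- Remove the earlier DENY; Yuen's explicit permissions on dbo are now neutral.
REVOKE SELECT ON SCHEMA::dbo FROM Yuen;
```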
REVERT (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Switches the execution context back to the caller of the last EXECUTE AS statement.
Transact-SQL Syntax Conventions
Syntax
REVERT
[ WITH COOKIE = @varbinary_variable ]
Arguments
WITH COOKIE = @varbinary_variable
Specifies the cookie that was created in a corresponding EXECUTE AS stand-alone statement.
@varbinary_variable is varbinary(100).
Remarks
REVERT can be specified within a module such as a stored procedure or user-defined function, or as a stand-alone
statement. When specified inside a module, REVERT is applicable only to EXECUTE AS statements defined in the
module. For example, the following stored procedure issues an EXECUTE AS statement followed by a REVERT
statement.
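The module referenced here was lost from the page; a sketch consistent with the surrounding description follows (the impersonated user guest is an assumption):

```sql
CREATE PROCEDURE usp_myproc
WITH EXECUTE AS CALLER
AS
    SELECT SUSER_NAME(), USER_NAME();  -- caller's context
    EXECUTE AS USER = 'guest';
    SELECT SUSER_NAME(), USER_NAME();  -- impersonated context
    REVERT;                            -- applies only to the EXECUTE AS above
    SELECT SUSER_NAME(), USER_NAME();  -- back to the caller's context
GO
```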
Assume that in the session in which the stored procedure is run, the execution context of the session is explicitly
changed to login1, as shown in the following example.
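A sketch of that session-level context switch (the procedure is assumed to live in the dbo schema):

```sql
-- Change the session's execution context, then run the procedure.
EXECUTE AS LOGIN = 'login1';
EXECUTE dbo.usp_myproc;
```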
The REVERT statement that is defined inside usp_myproc switches the execution context set inside the module, but
does not affect the execution context set outside the module. That is, the execution context for the session remains
set to login1 .
When specified as a standalone statement, REVERT applies to EXECUTE AS statements defined within a batch or
session. REVERT has no effect if the corresponding EXECUTE AS statement contains the WITH NO REVERT
clause. In this case, the execution context remains in effect until the session is dropped.
Permissions
No permissions are required.
Examples
A. Using EXECUTE AS and REVERT to switch context
The following example creates a context execution stack by using multiple principals. The REVERT statement is
then used to reset the execution context to the previous caller. The REVERT statement is executed multiple times
moving up the stack until the execution context is set to the original caller.
USE AdventureWorks2012;
GO
-- Create two temporary principals.
CREATE LOGIN login1 WITH PASSWORD = 'J345#$)thb';
CREATE LOGIN login2 WITH PASSWORD = 'Uor80$23b';
GO
CREATE USER user1 FOR LOGIN login1;
CREATE USER user2 FOR LOGIN login2;
GO
-- Give IMPERSONATE permissions on user2 to user1
-- so that user1 can successfully set the execution context to user2.
GRANT IMPERSONATE ON USER::user2 TO user1;
GO
-- Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
-- Set the execution context to login1.
EXECUTE AS LOGIN = 'login1';
-- Verify that the execution context is now login1.
SELECT SUSER_NAME(), USER_NAME();
-- Login1 sets the execution context to login2.
EXECUTE AS USER = 'user2';
-- Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
-- The execution context stack now has three principals: the originating caller, login1, and login2.
-- The following REVERT statements will reset the execution context to the previous context.
REVERT;
-- Display the current execution context.
SELECT SUSER_NAME(), USER_NAME();
REVERT;
-- Display the current execution context.
SELECT SUSER_NAME(), USER_NAME();
REVOKE (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes a previously granted or denied permission.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
REVOKE
<permission> [ ,...n ]
[ ON [ <class_type> :: ] securable ]
[ FROM | TO ] principal [ ,...n ]
[ CASCADE ]
[;]
<permission> ::=
{ see the tables below }
<class_type> ::=
{
LOGIN
| DATABASE
| OBJECT
| ROLE
| SCHEMA
| USER
}
Arguments
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked. This is required when you are using
the CASCADE argument.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
ALL
Applies to: SQL Server 2008 through SQL Server 2017
This option does not revoke all possible permissions. Revoking ALL is equivalent to revoking the following
permissions.
If the securable is a database, ALL means BACKUP DATABASE, BACKUP LOG, CREATE DATABASE,
CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and
CREATE VIEW.
If the securable is a scalar function, ALL means EXECUTE and REFERENCES.
If the securable is a table-valued function, ALL means DELETE, INSERT, REFERENCES, SELECT, and
UPDATE.
If the securable is a stored procedure, ALL means EXECUTE.
If the securable is a table, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
If the securable is a view, ALL means DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
NOTE
The REVOKE ALL syntax is deprecated. This feature will be removed in a future version of Microsoft SQL Server. Avoid using
this feature in new development work, and plan to modify applications that currently use this feature. Revoke specific
permissions instead.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the topics listed in
Securable-specific Syntax later in this topic.
column
Specifies the name of a column in a table on which permissions are being revoked. The parentheses are required.
class
Specifies the class of the securable on which the permission is being revoked. The scope qualifier :: is required.
securable
Specifies the securable on which the permission is being revoked.
TO | FROM principal
Is the name of a principal. The principals from which permissions on a securable can be revoked vary, depending
on the securable. For more information about valid combinations, see the topics listed in Securable-specific
Syntax later in this topic.
CASCADE
Indicates that the permission that is being revoked is also revoked from other principals to which it has been
granted by this principal. When you are using the CASCADE argument, you must also include the GRANT
OPTION FOR argument.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS principal
Use the AS principal clause to indicate that you are revoking a permission that was granted by a principal other
than you. For example, presume that user Mary is principal_id 12 and user Raul is principal_id 15. Both Mary and
Raul grant a user named Steven the same permission. The sys.database_permissions table will indicate the
permission twice, but each row will have a different grantor_principal_id value. Mary could revoke the
permission using the AS Raul clause to remove Raul's grant of the permission.
The use of AS in this statement does not imply the ability to impersonate another user.
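The Mary and Raul scenario can be sketched as follows; the table name is hypothetical:

```sql
-- Mary removes only the grant that Raul made to Steven; her own grant of the
-- same permission remains in sys.database_permissions.
REVOKE SELECT ON OBJECT::dbo.SomeTable FROM Steven AS Raul;
```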
Remarks
The full syntax of the REVOKE statement is complex. The syntax diagram above was simplified to draw attention
to its structure. Complete syntax for revoking permissions on specific securables is described in the topics listed
in Securable-specific Syntax later in this topic.
The REVOKE statement can be used to remove granted permissions, and the DENY statement can be used to
prevent a principal from gaining a specific permission through a GRANT.
Granting a permission removes DENY or REVOKE of that permission on the specified securable. If the same
permission is denied at a higher scope that contains the securable, the DENY takes precedence. However,
revoking the granted permission at a higher scope does not take precedence.
Caution
A table-level DENY does not take precedence over a column-level GRANT. This inconsistency in the permissions
hierarchy has been preserved for backward compatibility. It will be removed in a future release.
The sp_helprotect system stored procedure reports permissions on a database-level securable.
The REVOKE statement will fail if CASCADE is not specified when you are revoking a permission from a
principal that was granted that permission with GRANT OPTION specified.
Permissions
Principals with CONTROL permission on a securable can revoke permission on that securable. Object owners
can revoke permissions on the objects they own.
Grantees of CONTROL SERVER permission, such as members of the sysadmin fixed server role, can revoke any
permission on any securable in the server. Grantees of CONTROL permission on a database, such as members
of the db_owner fixed database role, can revoke any permission on any securable in the database. Grantees of
CONTROL permission on a schema can revoke any permission on any object within the schema.
Securable-specific Syntax
The following table lists the securables and the topics that describe the securable-specific syntax.
See Also
Permissions Hierarchy (Database Engine)
DENY (Transact-SQL )
GRANT (Transact-SQL )
sp_addlogin (Transact-SQL )
sp_adduser (Transact-SQL )
sp_changedbowner (Transact-SQL )
sp_dropuser (Transact-SQL )
sp_helprotect (Transact-SQL )
sp_helpuser (Transact-SQL )
REVOKE Assembly Permissions (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on an assembly.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON ASSEMBLY :: assembly_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]
Arguments
GRANT OPTION FOR
Indicates that the ability to grant or deny the specified permission will be revoked. The permission itself will not be
revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
permission
Specifies a permission that can be revoked on an assembly. Listed below.
ON ASSEMBLY ::assembly_name
Specifies the assembly on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted or denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
An assembly is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be revoked on an assembly are listed below, together with the
more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the assembly
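A minimal sketch of the statement in use; the assembly and user names are hypothetical:

```sql
-- Revoke VIEW DEFINITION on an assembly from a database user.
REVOKE VIEW DEFINITION ON ASSEMBLY::HelloWorld FROM SqlUser1;
```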
See Also
DENY (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
CREATE ASSEMBLY (Transact-SQL )
CREATE CERTIFICATE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE APPLICATION ROLE (Transact-SQL )
Encryption Hierarchy
REVOKE Asymmetric Key Permissions (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on an asymmetric key.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] { permission [ ,...n ] }
ON ASYMMETRIC KEY :: asymmetric_key_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]
Arguments
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
permission
Specifies a permission that can be revoked on an asymmetric key. Listed below.
ON ASYMMETRIC KEY ::asymmetric_key_name
Specifies the asymmetric key on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted or denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal.
Remarks
An asymmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on an asymmetric key are listed below,
together with the more general permissions that include them by implication.
ASYMMETRIC KEY PERMISSION | IMPLIED BY ASYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
Permissions
Requires CONTROL permission on the asymmetric key.
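A minimal sketch of the statement in use; the key and user names are hypothetical:

```sql
-- Revoke VIEW DEFINITION on an asymmetric key from a database user.
REVOKE VIEW DEFINITION ON ASYMMETRIC KEY::PacificSales09 FROM Tuan;
```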
See Also
REVOKE (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL )
CREATE ASYMMETRIC KEY (Transact-SQL )
CREATE APPLICATION ROLE (Transact-SQL )
Encryption Hierarchy
REVOKE Availability Group Permissions (Transact-
SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on an Always On availability group.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON AVAILABILITY GROUP :: availability_group_name
{ FROM | TO } < server_principal > [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]
<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
Arguments
permission
Specifies a permission that can be revoked on an availability group. For a list of the permissions, see the Remarks
section later in this topic.
ON AVAILABILITY GROUP ::availability_group_name
Specifies the availability group on which the permission is being revoked. The scope qualifier (::) is required.
{ FROM | TO } <server_principal> Specifies the SQL Server login from which the permission is being revoked.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
IMPORTANT
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of that permission.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to revoke the
permission.
Remarks
Permissions at the server scope can be revoked only when the current database is master.
Information about availability groups is visible in the sys.availability_groups (Transact-SQL) catalog view.
Information about server permissions is visible in the sys.server_permissions catalog view, and information about
server principals is visible in the sys.server_principals catalog view.
An availability group is a server-level securable. The most specific and limited permissions that can be revoked on
an availability group are listed in the following table, together with the more general permissions that include
them by implication.
Permissions
Requires CONTROL permission on the availability group or ALTER ANY AVAILABILITY GROUP permission on
the server.
Examples
A. Revoking VIEW DEFINITION permission on an availability group
The following example revokes VIEW DEFINITION permission on availability group MyAg from SQL Server login
ZArifin.
USE master;
REVOKE VIEW DEFINITION ON AVAILABILITY GROUP::MyAg TO ZArifin;
GO
B. Revoking TAKE OWNERSHIP permission with the CASCADE option
The following example revokes TAKE OWNERSHIP permission on availability group MyAg from SQL Server login
PKomosinski and from all principals to which PKomosinski granted it.
USE master;
REVOKE TAKE OWNERSHIP ON AVAILABILITY GROUP::MyAg TO PKomosinski
CASCADE;
GO
C. Revoking the GRANT OPTION for a permission granted WITH GRANT OPTION
The following example grants CONTROL permission on availability group MyAg to SQL Server login PKomosinski
WITH GRANT OPTION, and then revokes only the right to grant CONTROL to other principals.
USE master;
GRANT CONTROL ON AVAILABILITY GROUP::MyAg TO PKomosinski
WITH GRANT OPTION;
GO
REVOKE GRANT OPTION FOR CONTROL ON AVAILABILITY GROUP::MyAg TO PKomosinski
CASCADE;
GO
See Also
GRANT Availability Group Permissions (Transact-SQL)
DENY Availability Group Permissions (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
sys.availability_groups (Transact-SQL)
Always On Availability Groups Catalog Views (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Certificate Permissions (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a certificate.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON CERTIFICATE :: certificate_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]
Arguments
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked. The permission itself will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
permission
Specifies a permission that can be revoked on a certificate. The permissions are listed in the Remarks section later in this topic.
ON CERTIFICATE ::certificate_name
Specifies the certificate on which the permission is being revoked. The scope qualifier "::" is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
A certificate is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be revoked on a certificate are listed below, together with the
more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the certificate.
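A minimal sketch of the syntax above, using the hypothetical certificate Shipping04 and the hypothetical database user BonnieL:

```sql
-- Hypothetical objects: certificate Shipping04 and database user BonnieL.
REVOKE VIEW DEFINITION ON CERTIFICATE::Shipping04 FROM BonnieL;
GO
```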
See Also
REVOKE (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
Encryption Hierarchy
REVOKE Database Permissions (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted and denied on a database.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] <permission> [ ,...n ]
{ TO | FROM } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<permission> ::=
permission | ALL [ PRIVILEGES ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be revoked on a database. For a list of the permissions, see the Remarks section later
in this topic.
ALL
This option does not revoke all possible permissions. Revoking ALL is equivalent to revoking the following
permissions: BACKUP DATABASE, BACKUP LOG, CREATE DATABASE, CREATE DEFAULT, CREATE FUNCTION,
CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and CREATE VIEW.
PRIVILEGES
Included for ISO compliance. Does not change the behavior of ALL.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
The statement will fail if CASCADE is not specified when you are revoking a permission from a principal that was
granted that permission with the GRANT OPTION specified.
A database is a securable contained by the server that is its parent in the permissions hierarchy. The most specific
and limited permissions that can be revoked on a database are listed in the following table, together with the more
general permissions that include them by implication.
DATABASE PERMISSION | IMPLIED BY DATABASE PERMISSION | IMPLIED BY SERVER PERMISSION
ALTER ANY DATABASE EVENT SESSION (Applies to: Azure SQL Database.) | ALTER | ALTER ANY EVENT SESSION
CREATE DATABASE DDL EVENT NOTIFICATION | ALTER ANY DATABASE EVENT NOTIFICATION | CREATE DDL EVENT NOTIFICATION
CREATE REMOTE SERVICE BINDING | ALTER ANY REMOTE SERVICE BINDING | CONTROL SERVER
Permissions
The principal that executes this statement (or the principal specified with the AS option) must have CONTROL
permission on the database or a higher permission that implies CONTROL permission on the database.
If you are using the AS option, the specified principal must own the database.
Examples
A. Revoking permission to create certificates
The following example revokes CREATE CERTIFICATE permission on the AdventureWorks2012 database from user
MelanieK .
USE AdventureWorks2012;
REVOKE CREATE CERTIFICATE FROM MelanieK;
GO
B. Revoking REFERENCES permission from an application role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
The following example revokes REFERENCES permission on the AdventureWorks2012 database from application role
AuditMonitor .
USE AdventureWorks2012;
REVOKE REFERENCES FROM AuditMonitor;
GO
C. Revoking VIEW DEFINITION permission with CASCADE
The following example revokes VIEW DEFINITION permission on the AdventureWorks2012 database from user
CarmineEs and from all principals to which CarmineEs granted it.
USE AdventureWorks2012;
REVOKE VIEW DEFINITION FROM CarmineEs CASCADE;
GO
See Also
sys.database_permissions (Transact-SQL)
sys.database_principals (Transact-SQL)
GRANT Database Permissions (Transact-SQL)
DENY Database Permissions (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Database Principal Permissions (Transact-
SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted or denied on a database user, database role, or application role.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON
{ [ USER :: database_user ]
| [ ROLE :: database_role ]
| [ APPLICATION ROLE :: application_role ]
}
{ FROM | TO } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be revoked on the database principal. For a list of the permissions, see the
Remarks section later in this topic.
USER ::database_user
Specifies the class and name of the user on which the permission is being revoked. The scope qualifier (::) is
required.
ROLE ::database_role
Specifies the class and name of the role on which the permission is being revoked. The scope qualifier (::) is
required.
APPLICATION ROLE ::application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies the class and name of the application role on which the permission is being revoked. The scope qualifier
(::) is required.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Database User Permissions
A database user is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on a database user are listed in the
following table, together with the more general permissions that include them by implication.
DATABASE USER PERMISSION | IMPLIED BY DATABASE USER PERMISSION | IMPLIED BY DATABASE PERMISSION
DATABASE ROLE PERMISSION | IMPLIED BY DATABASE ROLE PERMISSION | IMPLIED BY DATABASE PERMISSION
Permissions
Requires CONTROL permission on the specified principal, or a higher permission that implies CONTROL
permission.
Grantees of CONTROL permission on a database, such as members of the db_owner fixed database role, can
grant any permission on any securable in the database.
Examples
A. Revoking CONTROL permission on a user from another user
The following example revokes CONTROL permission on AdventureWorks2012 user Wanida from user RolandX .
USE AdventureWorks2012;
REVOKE CONTROL ON USER::Wanida FROM RolandX;
GO
B. Revoking VIEW DEFINITION permission on a role from a user to which it was granted WITH GRANT
OPTION
The following example revokes VIEW DEFINITION permission on AdventureWorks2012 role SammamishParking
from database user JinghaoLiu . The CASCADE option is specified because the user JinghaoLiu was granted
VIEW DEFINITION permission WITH GRANT OPTION .
USE AdventureWorks2012;
REVOKE VIEW DEFINITION ON ROLE::SammamishParking
FROM JinghaoLiu CASCADE;
GO
C. Revoking IMPERSONATE permission on a user
The following example revokes IMPERSONATE permission on AdventureWorks2012 user HamithaL from
AccountsPayable17 .
USE AdventureWorks2012;
REVOKE IMPERSONATE ON USER::HamithaL FROM AccountsPayable17;
GO
See Also
GRANT Database Principal Permissions (Transact-SQL)
DENY Database Principal Permissions (Transact-SQL)
sys.database_principals (Transact-SQL)
sys.database_permissions (Transact-SQL)
CREATE USER (Transact-SQL)
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ROLE (Transact-SQL)
GRANT (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Database Scoped Credential (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a database scoped credential.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON DATABASE SCOPED CREDENTIAL :: credential_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]
Arguments
GRANT OPTION FOR
Indicates that the ability to grant the specified permission will be revoked. The permission itself will not be
revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
permission
Specifies a permission that can be revoked on a database scoped credential. The permissions are listed in the Remarks section later in this topic.
ON DATABASE SCOPED CREDENTIAL ::credential_name
Specifies the database scoped credential on which the permission is being revoked. The scope qualifier "::" is
required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
A database scoped credential is a database-level securable contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be revoked on a database scoped
credential are listed below, together with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the database scoped credential.
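As a sketch of the syntax above, assuming a hypothetical database scoped credential AppBlobCredential and a hypothetical database user Wanida:

```sql
-- Hypothetical objects: database scoped credential AppBlobCredential and user Wanida.
REVOKE VIEW DEFINITION ON DATABASE SCOPED CREDENTIAL::AppBlobCredential FROM Wanida;
GO
```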
See Also
REVOKE (Transact-SQL)
GRANT Database Scoped Credential (Transact-SQL)
DENY Database Scoped Credential (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
REVOKE Endpoint Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted or denied on an endpoint.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON ENDPOINT :: endpoint_name
{ FROM | TO } <server_principal> [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]
<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
Arguments
permission
Specifies a permission that can be revoked on an endpoint. For a list of the permissions, see the Remarks section
later in this topic.
ON ENDPOINT ::endpoint_name
Specifies the endpoint on which the permission is being revoked. The scope qualifier (::) is required.
{ FROM | TO } <server_principal> Specifies the SQL Server login from which the permission is being revoked.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to revoke the
permission.
Remarks
Permissions at the server scope can be revoked only when the current database is master.
Information about endpoints is visible in the sys.endpoints catalog view. Information about server permissions is
visible in the sys.server_permissions catalog view, and information about server principals is visible in the
sys.server_principals catalog view.
An endpoint is a server-level securable. The most specific and limited permissions that can be revoked on an
endpoint are listed in the following table, together with the more general permissions that include them by
implication.
Permissions
Requires CONTROL permission on the endpoint or ALTER ANY ENDPOINT permission on the server.
Examples
A. Revoking VIEW DEFINITION permission on an endpoint
The following example revokes VIEW DEFINITION permission on the endpoint Mirror7 from the SQL Server login
ZArifin .
USE master;
REVOKE VIEW DEFINITION ON ENDPOINT::Mirror7 FROM ZArifin;
GO
B. Revoking TAKE OWNERSHIP permission with the CASCADE option
The following example revokes TAKE OWNERSHIP permission on the endpoint Shipping83 from the SQL Server
login PKomosinski and from all principals to which PKomosinski granted TAKE OWNERSHIP on Shipping83.
USE master;
REVOKE TAKE OWNERSHIP ON ENDPOINT::Shipping83 FROM PKomosinski
CASCADE;
GO
See Also
GRANT Endpoint Permissions (Transact-SQL)
DENY Endpoint Permissions (Transact-SQL)
CREATE ENDPOINT (Transact-SQL)
Endpoints Catalog Views (Transact-SQL)
sys.endpoints (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Full-Text Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a full-text catalog or full-text stoplist.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ] ON
FULLTEXT
{
CATALOG :: full-text_catalog_name
|
STOPLIST :: full-text_stoplist_name
}
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]
Arguments
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON FULLTEXT CATALOG ::full-text_catalog_name
Specifies the full-text catalog on which the permission is being revoked. The scope qualifier :: is required.
ON FULLTEXT STOPLIST ::full-text_stoplist_name
Specifies the full-text stoplist on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
FULLTEXT CATALOG Permissions
A full-text catalog is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on a full-text catalog are listed in the
following table, together with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the full-text catalog.
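For illustration, the following sketch revokes VIEW DEFINITION on a full-text catalog and on a full-text stoplist; the names ProductCatalog, ProductStoplist, and Mary5 are hypothetical.

```sql
-- Hypothetical objects: full-text catalog ProductCatalog,
-- full-text stoplist ProductStoplist, and database user Mary5.
REVOKE VIEW DEFINITION ON FULLTEXT CATALOG::ProductCatalog FROM Mary5;
REVOKE VIEW DEFINITION ON FULLTEXT STOPLIST::ProductStoplist FROM Mary5;
GO
```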
See Also
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE FULLTEXT CATALOG (Transact-SQL)
CREATE FULLTEXT STOPLIST (Transact-SQL)
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL)
GRANT Full-Text Permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
REVOKE (Transact-SQL)
sys.fn_builtin_permissions (Transact-SQL)
sys.fulltext_catalogs (Transact-SQL)
sys.fulltext_stoplists (Transact-SQL)
REVOKE Object Permissions (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a table, view, table-valued function, stored procedure, extended stored procedure, scalar
function, aggregate function, service queue, or synonym.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] <permission> [ ,...n ] ON
[ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]
{ FROM | TO } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<permission> ::=
ALL [ PRIVILEGES ] | permission [ ( column [ ,...n ] ) ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be revoked on a schema-contained object. For a list of the permissions, see the
Remarks section later in this topic.
ALL
Revoking ALL does not revoke all possible permissions. Revoking ALL is equivalent to revoking all ANSI-92
permissions applicable to the specified object. The meaning of ALL varies as follows:
Scalar function permissions: EXECUTE, REFERENCES.
Table-valued function permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
Stored Procedure permissions: EXECUTE.
Table permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
View permissions: DELETE, INSERT, REFERENCES, SELECT, UPDATE.
PRIVILEGES
Included for ANSI-92 compliance. Does not change the behavior of ALL.
column
Specifies the name of a column in a table, view, or table-valued function on which the permission is being
revoked. The parentheses ( ) are required. Only SELECT, REFERENCES, and UPDATE permissions can be revoked
on a column. column can be specified in the permissions clause or after the securable name.
ON [ OBJECT :: ] [ schema_name ] . object_name
Specifies the object on which the permission is being revoked. The OBJECT phrase is optional if schema_name
is specified. If the OBJECT phrase is used, the scope qualifier (::) is required. If schema_name is not specified, the
default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
{ FROM | TO } <database_principal> Specifies the principal from which the permission is being revoked.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Information about objects is visible in various catalog views. For more information, see Object Catalog Views
(Transact-SQL).
An object is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be revoked on an object are listed in the following table,
together with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the object.
If you use the AS clause, the specified principal must own the object on which permissions are being revoked.
Examples
A. Revoking SELECT permission on a table
The following example revokes SELECT permission from the user RosaQdM on the table Person.Address in the
AdventureWorks2012 database.
USE AdventureWorks2012;
REVOKE SELECT ON OBJECT::Person.Address FROM RosaQdM;
GO
B. Revoking REFERENCES permission on a column in a view with CASCADE
The following example revokes REFERENCES permission on the column BusinessEntityID of the view
HumanResources.vEmployee in the AdventureWorks2012 database from the user Wanida , with the CASCADE option.
USE AdventureWorks2012;
REVOKE REFERENCES (BusinessEntityID) ON OBJECT::HumanResources.vEmployee
FROM Wanida CASCADE;
GO
See Also
GRANT Object Permissions (Transact-SQL)
DENY Object Permissions (Transact-SQL)
Object Catalog Views (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Securables
sys.fn_builtin_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
REVOKE Schema Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a schema.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON SCHEMA :: schema_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]
Arguments
permission
Specifies a permission that can be revoked on a schema. The permissions that can be revoked on a schema are
listed in the "Remarks" section, later in this topic.
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
ON SCHEMA :: schema_name
Specifies the schema on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. One of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. One
of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
A schema is a database-level securable contained by the database that is its parent in the permissions hierarchy.
The most specific and limited permissions that can be revoked on a schema are listed in the following table,
together with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the schema.
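For example, the following sketch revokes SELECT permission on a schema from a database user. The schema name HumanResources and the user guest are illustrative, not taken from this topic:

```sql
USE AdventureWorks2012;
-- Revoke SELECT on every object contained in the HumanResources schema
-- from the guest user (schema and user names are examples).
REVOKE SELECT ON SCHEMA::HumanResources FROM guest;
GO
```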
See Also
CREATE SCHEMA (Transact-SQL)
REVOKE (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
sys.fn_builtin_permissions (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
REVOKE Search Property List Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a search property list.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ] ON
SEARCH PROPERTY LIST :: search_property_list_name
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]
Arguments
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
permission
Is the name of a permission. The valid mappings of permissions to securables are described in the "Remarks"
section, later in this topic.
ON SEARCH PROPERTY LIST ::search_property_list_name
Specifies the search property list on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. The principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission. The
principal can be one of the following:
database user
database role
application role
database user mapped to a Windows login
database user mapped to a Windows group
database user mapped to a certificate
database user mapped to an asymmetric key
database user not mapped to a server principal.
Remarks
SEARCH PROPERTY LIST Permissions
A search property list is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on a search property list are listed in the
following, together with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the full-text catalog.
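As an illustration, the following sketch revokes VIEW DEFINITION permission on a search property list from a database user. The list name DocumentTablePropertyList and the user Mary are assumed names for the example:

```sql
USE AdventureWorks2012;
-- Revoke the ability to see the property list's metadata
-- (list and user names are examples).
REVOKE VIEW DEFINITION
    ON SEARCH PROPERTY LIST::DocumentTablePropertyList
    FROM Mary;
GO
```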
See Also
CREATE APPLICATION ROLE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE SEARCH PROPERTY LIST (Transact-SQL)
DENY Search Property List Permissions (Transact-SQL)
Encryption Hierarchy
sys.fn_my_permissions (Transact-SQL)
GRANT Search Property List Permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
Principals (Database Engine)
REVOKE (Transact-SQL)
sys.fn_builtin_permissions (Transact-SQL)
sys.registered_search_property_lists (Transact-SQL)
Search Document Properties with Search Property Lists
REVOKE Server Permissions (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes server-level GRANT and DENY permissions.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
{ TO | FROM } <grantee_principal> [ ,...n ]
[ CASCADE ]
[ AS <grantor_principal> ]
Arguments
permission
Specifies a permission that can be revoked on a server. For a list of the permissions, see the Remarks section later
in this topic.
{ TO | FROM } <grantee_principal> Specifies the principal from which the permission is being revoked.
AS <grantor_principal> Specifies the principal from which the principal executing this query derives its right to
revoke the permission.
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
SQL_Server_login
Specifies a SQL Server login.
SQL_Server_login_mapped_to_Windows_login
Specifies a SQL Server login mapped to a Windows login.
SQL_Server_login_mapped_to_Windows_group
Specifies a SQL Server login mapped to a Windows group.
SQL_Server_login_mapped_to_certificate
Specifies a SQL Server login mapped to a certificate.
SQL_Server_login_mapped_to_asymmetric_key
Specifies a SQL Server login mapped to an asymmetric key.
server_role
Specifies a user-defined server role.
Remarks
Permissions at the server scope can be revoked only when the current database is master.
REVOKE removes both GRANT and DENY permissions.
Use REVOKE GRANT OPTION FOR to revoke the right to regrant the specified permission. If the principal has
the permission with the right to grant it, the right to grant the permission will be revoked, and the permission itself
will not be revoked. But if the principal has the specified permission without the GRANT option, the permission
itself will be revoked.
Information about server permissions can be viewed in the sys.server_permissions catalog view, and information
about server principals can be viewed in the sys.server_principals catalog view. Information about membership of
server roles can be viewed in the sys.server_role_members catalog view.
A server is the highest level of the permissions hierarchy. The most specific and limited permissions that can be
revoked on a server are listed in the following table.
Permissions
Requires CONTROL SERVER permission or membership in the sysadmin fixed server role.
Examples
A. Revoking a permission from a login
The following example revokes VIEW SERVER STATE permission from the SQL Server login WanidaBenshoof.
USE master;
REVOKE VIEW SERVER STATE FROM WanidaBenshoof;
GO
B. Revoking GRANT OPTION for a permission from a login
The following example revokes the right to grant CONNECT SQL permission from the SQL Server login JanethEsteves.
USE master;
REVOKE GRANT OPTION FOR CONNECT SQL FROM JanethEsteves;
GO
The login still has CONNECT SQL permission, but it can no longer grant that permission to other principals.
See Also
GRANT (Transact-SQL)
DENY (Transact-SQL)
DENY Server Permissions (Transact-SQL)
REVOKE Server Permissions (Transact-SQL)
Permissions Hierarchy (Database Engine)
sys.fn_builtin_permissions (Transact-SQL)
sys.fn_my_permissions (Transact-SQL)
HAS_PERMS_BY_NAME (Transact-SQL)
REVOKE Server Principal Permissions (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted or denied on a SQL Server login.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON
{ [ LOGIN :: SQL_Server_login ]
| [ SERVER ROLE :: server_role ] }
{ FROM | TO } <server_principal> [ ,...n ]
[ CASCADE ]
[ AS SQL_Server_login ]
<server_principal> ::=
SQL_Server_login
| SQL_Server_login_from_Windows_login
| SQL_Server_login_from_certificate
| SQL_Server_login_from_AsymKey
| server_role
Arguments
permission
Specifies a permission that can be revoked on a SQL Server login. For a list of the permissions, see the Remarks
section later in this topic.
LOGIN :: SQL_Server_login
Specifies the SQL Server login on which the permission is being revoked. The scope qualifier (::) is required.
SERVER ROLE :: server_role
Specifies the server role on which the permission is being revoked. The scope qualifier (::) is required.
{ FROM | TO } <server_principal> Specifies the SQL Server login or server role from which the permission is
being revoked.
SQL_Server_login
Specifies the name of a SQL Server login.
SQL_Server_login_from_Windows_login
Specifies the name of a SQL Server login created from a Windows login.
SQL_Server_login_from_certificate
Specifies the name of a SQL Server login mapped to a certificate.
SQL_Server_login_from_AsymKey
Specifies the name of a SQL Server login mapped to an asymmetric key.
server_role
Specifies the name of a user-defined server role.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS SQL_Server_login
Specifies the SQL Server login from which the principal executing this query derives its right to revoke the
permission.
Remarks
SQL Server logins and server roles are server-level securables. The most specific and limited permissions that can
be revoked on a SQL Server login or server role are listed in the following table, together with the more general
permissions that include them by implication.
SQL SERVER LOGIN OR SERVER ROLE PERMISSION | IMPLIED BY SQL SERVER LOGIN OR SERVER ROLE PERMISSION | IMPLIED BY SERVER PERMISSION
Permissions
For logins, requires CONTROL permission on the login or ALTER ANY LOGIN permission on the server.
For server roles, requires CONTROL permission on the server role or ALTER ANY SERVER ROLE permission on
the server.
Examples
A. Revoking IMPERSONATE permission on a login
The following example revokes IMPERSONATE permission on the SQL Server login WanidaBenshoof from a SQL
Server login created from the Windows user AdvWorks\YoonM.
USE master;
REVOKE IMPERSONATE ON LOGIN::WanidaBenshoof FROM [AdvWorks\YoonM];
GO
B. Revoking a permission with CASCADE
The following example revokes VIEW DEFINITION permission on the login EricKurjan from the login RMeyyappan and from the principals to which RMeyyappan granted it.
USE master;
REVOKE VIEW DEFINITION ON LOGIN::EricKurjan FROM RMeyyappan
CASCADE;
GO
C. Revoking a permission on a server role
The following example revokes VIEW DEFINITION permission on the Sales server role from the Auditors server role.
USE master;
REVOKE VIEW DEFINITION ON SERVER ROLE::Sales TO Auditors;
GO
See Also
sys.server_principals (Transact-SQL)
sys.server_permissions (Transact-SQL)
GRANT Server Principal Permissions (Transact-SQL)
DENY Server Principal Permissions (Transact-SQL)
CREATE LOGIN (Transact-SQL)
Principals (Database Engine)
Permissions (Database Engine)
Security Functions (Transact-SQL)
Security Stored Procedures (Transact-SQL)
REVOKE Service Broker Permissions (Transact-SQL)
5/4/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a Service Broker contract, message type, remote service binding, route, or service.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ] ON
{
[ CONTRACT :: contract_name ]
| [ MESSAGE TYPE :: message_type_name ]
| [ REMOTE SERVICE BINDING :: remote_binding_name ]
| [ ROUTE :: route_name ]
| [ SERVICE :: service_name ]
}
{ TO | FROM } database_principal [ ,...n ]
[ CASCADE ]
[ AS revoking_principal ]
Arguments
GRANT OPTION FOR
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
permission
Specifies a permission that can be revoked on a Service Broker securable. For a list of these permissions, see the
Remarks section later in this topic.
CONTRACT ::contract_name
Specifies the contract on which the permission is being revoked. The scope qualifier :: is required.
MESSAGE TYPE ::message_type_name
Specifies the message type on which the permission is being revoked. The scope qualifier :: is required.
REMOTE SERVICE BINDING ::remote_binding_name
Specifies the remote service binding on which the permission is being revoked. The scope qualifier :: is required.
ROUTE ::route_name
Specifies the route on which the permission is being revoked. The scope qualifier :: is required.
SERVICE ::service_name
Specifies the service on which the permission is being revoked. The scope qualifier :: is required.
database_principal
Specifies the principal from which the permission is being revoked. database_principal can be one of the
following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been
granted or denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS revoking_principal
Specifies a principal from which the principal executing this query derives its right to revoke the permission.
revoking_principal can be one of the following:
Database user
Database role
Application role
Database user mapped to a Windows login
Database user mapped to a Windows group
Database user mapped to a certificate
Database user mapped to an asymmetric key
Database user not mapped to a server principal
Remarks
Service Broker Contracts
A Service Broker contract is a database-level securable that is contained by the database that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be revoked on a Service Broker
contract are listed in the following table, together with the more general permissions that include them by
implication.
SERVICE BROKER CONTRACT PERMISSION | IMPLIED BY SERVICE BROKER CONTRACT PERMISSION | IMPLIED BY DATABASE PERMISSION
Permissions
Requires CONTROL permission on the Service Broker contract, message type, remote service binding, route, or
service.
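For example, the following sketch revokes SEND permission on a service from a database user. The service name //Adventure-Works.com/Expenses and the user Joe are assumed names for the example:

```sql
USE AdventureWorks2012;
-- Revoke the right to send messages to the service
-- (service and user names are examples).
REVOKE SEND ON SERVICE::[//Adventure-Works.com/Expenses] FROM Joe;
GO
```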
See Also
GRANT Service Broker Permissions (Transact-SQL )
DENY Service Broker Permissions (Transact-SQL )
GRANT (Transact-SQL )
Permissions (Database Engine)
Principals (Database Engine)
REVOKE Symmetric Key Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted and denied on a symmetric key.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON SYMMETRIC KEY :: symmetric_key_name
{ TO | FROM } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be revoked on a symmetric key. For a list of the permissions, see the Remarks
section later in this topic.
ON SYMMETRIC KEY :: symmetric_key_name
Specifies the symmetric key on which the permission is being revoked. The scope qualifier (::) is required.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
{ TO | FROM } <database_principal>
Specifies the principal from which the permission is being revoked.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Information about symmetric keys is visible in the sys.symmetric_keys catalog view.
The statement will fail if CASCADE is not specified when revoking a permission from a principal that was granted
that permission with GRANT OPTION specified.
A symmetric key is a database-level securable contained by the database that is its parent in the permissions
hierarchy. The most specific and limited permissions that can be revoked on a symmetric key are listed in the
following table, together with the more general permissions that include them by implication.
SYMMETRIC KEY PERMISSION | IMPLIED BY SYMMETRIC KEY PERMISSION | IMPLIED BY DATABASE PERMISSION
Permissions
Requires CONTROL permission on the symmetric key or ALTER ANY SYMMETRIC KEY permission on the
database. If you use the AS option, the specified principal must own the symmetric key.
Examples
The following example revokes ALTER permission on the symmetric key SamInventory42 from the user HamidS
and from other principals to which HamidS has granted ALTER permission.
USE AdventureWorks2012;
REVOKE ALTER ON SYMMETRIC KEY::SamInventory42 TO HamidS CASCADE;
GO
See Also
sys.symmetric_keys (Transact-SQL)
GRANT Symmetric Key Permissions (Transact-SQL)
DENY Symmetric Key Permissions (Transact-SQL)
CREATE SYMMETRIC KEY (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Encryption Hierarchy
REVOKE System Object Permissions (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on system objects such as stored procedures, extended stored procedures, functions, and
views from a principal.
Transact-SQL Syntax Conventions
Syntax
REVOKE { SELECT | EXECUTE } ON [sys.]system_object FROM principal
Arguments
[sys.]
The sys qualifier is required only when you are referring to catalog views and dynamic management views.
system_object
Specifies the object on which permission is being revoked.
principal
Specifies the principal from which the permission is being revoked.
Remarks
This statement can be used to revoke permissions on certain stored procedures, extended stored procedures,
table-valued functions, scalar functions, views, catalog views, compatibility views, INFORMATION_SCHEMA
views, dynamic management views, and system tables that are installed by SQL Server. Each of these system
objects exists as a unique record in the resource database (mssqlsystemresource). The resource database is read-
only. A link to the object is exposed as a record in the sys schema of every database.
Default name resolution resolves unqualified procedure names to the resource database. Therefore, the sys.
qualifier is required only when you are specifying catalog views and dynamic management views.
Caution
Revoking permissions on system objects will cause applications that depend on them to fail. SQL Server
Management Studio uses catalog views and may not function as expected if you change the default permissions
on catalog views.
Revoking permissions on triggers and on columns of system objects is not supported.
Permissions on system objects will be preserved during upgrades of SQL Server.
System objects are visible in the sys.system_objects catalog view.
Permissions
Requires CONTROL SERVER permission.
Examples
The following example revokes EXECUTE permission on sp_addlinkedserver from public.
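The code for that example is missing here; a minimal sketch of the statement described above:

```sql
-- Revoke EXECUTE on the system stored procedure from the public role
-- (reconstructed from the surrounding description).
REVOKE EXECUTE ON sys.sp_addlinkedserver FROM public;
GO
```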
See Also
sys.system_objects (Transact-SQL)
sys.database_permissions (Transact-SQL)
GRANT System Object Permissions (Transact-SQL)
DENY System Object Permissions (Transact-SQL)
REVOKE Type Permissions (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions on a type.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ]
ON TYPE :: [ schema_name ]. type_name
{ FROM | TO } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be revoked on a type. For a list of the permissions, see the Remarks section later in
this topic.
ON TYPE :: [ schema_name ] . type_name
Specifies the type on which the permission is being revoked. The scope qualifier (::) is required. If schema_name is
not specified, the default schema is used. If schema_name is specified, the schema scope qualifier (.) is required.
{ FROM | TO } <database_principal> Specifies the principal from which the permission is being revoked.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Applies to: SQL Server 2008 through SQL Server 2017, SQL Database
Specifies an application role.
Database_user_mapped_to_Windows_User
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Applies to: SQL Server 2008 through SQL Server 2017
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
A type is a schema-level securable contained by the schema that is its parent in the permissions hierarchy.
IMPORTANT
GRANT, DENY, and REVOKE permissions do not apply to system types. User-defined types can be granted permissions. For
more information about user-defined types, see Working with User-Defined Types in SQL Server.
The most specific and limited permissions that can be revoked on a type are listed in the following table, together
with the more general permissions that include them by implication.
Permissions
Requires CONTROL permission on the type. If you use the AS clause, the specified principal must own the type.
Examples
The following example revokes VIEW DEFINITION permission on the user-defined type PhoneNumber from the user
KhalidR. The CASCADE option indicates that VIEW DEFINITION permission will also be revoked from principals to
which KhalidR granted it. PhoneNumber is located in schema Telemarketing.
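The code for that example is missing here; a sketch reconstructed from the surrounding description:

```sql
-- Revoke VIEW DEFINITION on the type, cascading to principals
-- that KhalidR granted it to (reconstructed from the text above).
REVOKE VIEW DEFINITION ON TYPE::Telemarketing.PhoneNumber
    FROM KhalidR CASCADE;
GO
```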
See Also
GRANT Type Permissions (Transact-SQL)
DENY Type Permissions (Transact-SQL)
CREATE TYPE (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
Securables
REVOKE XML Schema Collection Permissions
(Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Revokes permissions granted or denied on an XML schema collection.
Transact-SQL Syntax Conventions
Syntax
REVOKE [ GRANT OPTION FOR ] permission [ ,...n ] ON
XML SCHEMA COLLECTION :: [ schema_name . ]
XML_schema_collection_name
{ TO | FROM } <database_principal> [ ,...n ]
[ CASCADE ]
[ AS <database_principal> ]
<database_principal> ::=
Database_user
| Database_role
| Application_role
| Database_user_mapped_to_Windows_User
| Database_user_mapped_to_Windows_Group
| Database_user_mapped_to_certificate
| Database_user_mapped_to_asymmetric_key
| Database_user_with_no_login
Arguments
permission
Specifies a permission that can be revoked on an XML schema collection. For a list of the permissions, see the
Remarks section later in this topic.
ON XML SCHEMA COLLECTION :: [ schema_name. ] XML_schema_collection_name
Specifies the XML schema collection on which the permission is being revoked. The scope qualifier (::) is required.
If schema_name is not specified, the default schema will be used. If schema_name is specified, the schema scope
qualifier (.) is required.
GRANT OPTION
Indicates that the right to grant the specified permission to other principals will be revoked. The permission itself
will not be revoked.
IMPORTANT
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE
Indicates that the permission being revoked is also revoked from other principals to which it has been granted or
denied by this principal.
Caution
A cascaded revocation of a permission granted WITH GRANT OPTION will revoke both GRANT and DENY of
that permission.
{ TO | FROM } <database_principal>
Specifies the principal from which the permission is being revoked.
AS <database_principal> Specifies a principal from which the principal executing this query derives its right to
revoke the permission.
Database_user
Specifies a database user.
Database_role
Specifies a database role.
Application_role
Specifies an application role.
Database_user_mapped_to_Windows_User
Specifies a database user mapped to a Windows user.
Database_user_mapped_to_Windows_Group
Specifies a database user mapped to a Windows group.
Database_user_mapped_to_certificate
Specifies a database user mapped to a certificate.
Database_user_mapped_to_asymmetric_key
Specifies a database user mapped to an asymmetric key.
Database_user_with_no_login
Specifies a database user with no corresponding server-level principal.
Remarks
Information about XML schema collections is visible in the sys.xml_schema_collections catalog view.
The statement will fail if CASCADE is not specified when you are revoking a permission from a principal that was
granted that permission with GRANT OPTION specified.
An XML schema collection is a schema-level securable contained by the schema that is its parent in the
permissions hierarchy. The most specific and limited permissions that can be revoked on an XML schema
collection are listed in the following table, together with the more general permissions that include them by
implication.
Permissions
Requires CONTROL permission on the XML schema collection. If you use the AS option, the specified principal
must own the XML schema collection.
Examples
The following example revokes EXECUTE permission on the XML schema collection Invoices4 from the user
Wanida. The XML schema collection Invoices4 is located inside the Sales schema of the AdventureWorks2012
database.
USE AdventureWorks2012;
REVOKE EXECUTE ON XML SCHEMA COLLECTION::Sales.Invoices4 FROM Wanida;
GO
See Also
GRANT XML Schema Collection Permissions (Transact-SQL)
DENY XML Schema Collection Permissions (Transact-SQL)
sys.xml_schema_collections (Transact-SQL)
CREATE XML SCHEMA COLLECTION (Transact-SQL)
Permissions (Database Engine)
Principals (Database Engine)
SETUSER (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Allows a member of the sysadmin fixed server role or the owner of a database to impersonate another user.
IMPORTANT
SETUSER is included for backward compatibility only. SETUSER may not be supported in a future release of SQL Server. We
recommend that you use EXECUTE AS instead.
Syntax
SETUSER [ 'username' [ WITH NORESET ] ]
Arguments
'username'
Is the name of a SQL Server or Windows user in the current database that is impersonated. When username is not
specified, the original identity of the system administrator or database owner impersonating the user is reset.
WITH NORESET
Specifies that subsequent SETUSER statements (with no specified username) should not reset the user identity to
system administrator or database owner.
Remarks
SETUSER can be used by a member of the sysadmin fixed server role or the owner of a database to adopt the
identity of another user to test the permissions of the other user. Membership in the db_owner fixed database role
is not sufficient.
Only use SETUSER with SQL Server users. SETUSER is not supported with Windows users. When SETUSER has
been used to assume the identity of another user, any objects that the impersonating user creates are owned by the
user being impersonated. For example, if the database owner assumes the identity of user Margaret and creates a
table called orders, the orders table is owned by Margaret, not the system administrator.
SETUSER remains in effect until another SETUSER statement is issued or until the current database is changed
with the USE statement.
NOTE
If SETUSER WITH NORESET is used, the database owner or system administrator must log off and then log on again to
reestablish his or her own rights.
Permissions
Requires membership in the sysadmin fixed server role or must be the owner of the database. Membership in the
db_owner fixed database role is not sufficient.
Examples
The following example shows how the database owner can adopt the identity of another user. User mary has
created a table called computer_types . By using SETUSER, the database owner impersonates mary to grant user
joe access to the computer_types table, and then resets his or her own identity.
SETUSER 'mary';
GO
GRANT SELECT ON computer_types TO joe;
GO
--To revert to the original user
SETUSER;
See Also
DENY (Transact-SQL)
GRANT (Transact-SQL)
REVOKE (Transact-SQL)
USE (Transact-SQL)
BEGIN CONVERSATION TIMER (Transact-SQL)
5/4/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Starts a timer. When the time-out expires, Service Broker puts a message of type
http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer on the local queue for the conversation.
Syntax
BEGIN CONVERSATION TIMER ( conversation_handle )
TIMEOUT = timeout
[ ; ]
Arguments
BEGIN CONVERSATION TIMER (conversation_handle)
Specifies the conversation to time. The conversation_handle must be of type uniqueidentifier.
TIMEOUT
Specifies, in seconds, the amount of time to wait before putting the message on the queue.
Remarks
A conversation timer provides a way for an application to receive a message on a conversation after a specific
amount of time. Calling BEGIN CONVERSATION TIMER on a conversation before the timer has expired sets the
timeout to the new value. Unlike the conversation lifetime, each side of the conversation has an independent
conversation timer. The DialogTimer message arrives on the local queue without affecting the remote side of the
conversation. Therefore, an application can use a timer message for any purpose.
For example, you can use the conversation timer to keep an application from waiting too long for an overdue
response. If you expect the application to complete a dialog in 30 seconds, you might set the conversation timer for
that dialog to 60 seconds (30 seconds plus a 30-second grace period). If the dialog is still open after 60 seconds,
the application receives a time-out message on the queue for that dialog.
Alternatively, an application can use a conversation timer to request activation at a particular time. For example,
you might create a service that reports the number of active connections every few minutes, or a service that
reports the number of open purchase orders every evening. The service sets a conversation timer to expire at the
desired time; when the timer expires, Service Broker sends a DialogTimer message. The DialogTimer message
causes Service Broker to start the activation stored procedure for the queue. The stored procedure sends a
message to the remote service and restarts the conversation timer.
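As a sketch of this pattern, an activation stored procedure might look like the following. The procedure name, queue name, and timer interval are hypothetical; a real procedure would also send its report to the remote service at the marked point.

```sql
-- Hypothetical activation procedure for a reporting queue.
-- Each time the DialogTimer message fires, report and rearm the timer.
CREATE PROCEDURE ReportActiveConnections
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER,
            @message_type NVARCHAR(256);

    -- Receive the next message from the (assumed) queue for this service.
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @message_type = message_type_name
    FROM ReportQueue;

    IF @message_type = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
    BEGIN
        -- ... build and SEND the report to the remote service here ...

        -- Restart the timer so the procedure is activated again in 5 minutes.
        BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 300;
    END
END;
```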
BEGIN CONVERSATION TIMER is not valid in a user-defined function.
Permissions
Permission for setting a conversation timer defaults to users that have SEND permissions on the service for the
conversation, members of the sysadmin fixed server role, and members of the db_owner fixed database role.
Examples
The following example sets a two-minute time-out on the dialog identified by @dialog_handle .
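A statement matching this description, assuming @dialog_handle was previously set by BEGIN DIALOG CONVERSATION:

```sql
-- Assumes @dialog_handle holds the handle of an open conversation.
BEGIN CONVERSATION TIMER (@dialog_handle)
TIMEOUT = 120;
```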
See Also
BEGIN DIALOG CONVERSATION (Transact-SQL)
END CONVERSATION (Transact-SQL)
RECEIVE (Transact-SQL)
BEGIN DIALOG CONVERSATION (Transact-SQL)
5/4/2018 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Begins a dialog from one service to another service. A dialog is a conversation that provides exactly-once-in-order
messaging between two services.
Transact-SQL Syntax Conventions
Syntax
BEGIN DIALOG [ CONVERSATION ] @dialog_handle
FROM SERVICE initiator_service_name
TO SERVICE 'target_service_name'
[ , { 'service_broker_guid' | 'CURRENT DATABASE' }]
[ ON CONTRACT contract_name ]
[ WITH
[ { RELATED_CONVERSATION = related_conversation_handle
| RELATED_CONVERSATION_GROUP = related_conversation_group_id } ]
[ [ , ] LIFETIME = dialog_lifetime ]
[ [ , ] ENCRYPTION = { ON | OFF } ] ]
[ ; ]
Arguments
@dialog_handle
Is a variable used to store the system-generated dialog handle for the new dialog that is returned by the BEGIN
DIALOG CONVERSATION statement. The variable must be of type uniqueidentifier.
FROM SERVICE initiator_service_name
Specifies the service that initiates the dialog. The name specified must be the name of a service in the current
database. The queue specified for the initiator service receives messages returned by the target service and
messages created by Service Broker for this conversation.
TO SERVICE 'target_service_name'
Specifies the target service with which to initiate the dialog. The target_service_name is of type nvarchar(256).
Service Broker uses a byte-by-byte comparison to match the target_service_name string. In other words, the
comparison is case-sensitive and does not take into account the current collation.
service_broker_guid
Specifies the database that hosts the target service. When more than one database hosts an instance of the target
service, you can communicate with a specific database by providing a service_broker_guid.
The service_broker_guid is of type nvarchar(128). To find the service_broker_guid for a database, run the
following query in the database:
SELECT service_broker_guid
FROM sys.databases
WHERE database_id = DB_ID() ;
NOTE
This option is not available in a contained database.
'CURRENT DATABASE'
Specifies that the conversation use the service_broker_guid for the current database.
ON CONTRACT contract_name
Specifies the contract that this conversation follows. The contract must exist in the current database. If the target
service does not accept new conversations on the contract specified, Service Broker returns an error message on
the conversation. When this clause is omitted, the conversation follows the contract named DEFAULT.
RELATED_CONVERSATION = related_conversation_handle
Specifies the existing conversation group that the new dialog is added to. When this clause is present, the new
dialog belongs to the same conversation group as the dialog specified by related_conversation_handle. The
related_conversation_handle must be of a type implicitly convertible to type uniqueidentifier. The statement fails
if the related_conversation_handle does not reference an existing dialog.
RELATED_CONVERSATION_GROUP = related_conversation_group_id
Specifies the existing conversation group that the new dialog is added to. When this clause is present, the new
dialog will be added to the conversation group specified by related_conversation_group_id. The
related_conversation_group_id must be of a type implicitly convertible to type uniqueidentifier. If
related_conversation_group_id does not reference an existing conversation group, the service broker creates a
new conversation group with the specified related_conversation_group_id and relates the new dialog to that
conversation group.
LIFETIME = dialog_lifetime
Specifies the maximum amount of time the dialog will remain open. For the dialog to complete successfully, both
endpoints must explicitly end the dialog before the lifetime expires. The dialog_lifetime value must be expressed
in seconds. Lifetime is of type int. When no LIFETIME clause is specified, the dialog lifetime is the maximum value
of the int data type.
ENCRYPTION
Specifies whether or not messages sent and received on this dialog must be encrypted when they are sent outside
of an instance of Microsoft SQL Server. A dialog that must be encrypted is a secured dialog. When ENCRYPTION
= ON and the certificates required to support encryption are not configured, Service Broker returns an error
message on the conversation. If ENCRYPTION = OFF, encryption is used if a remote service binding is
configured for the target_service_name; otherwise messages are sent unencrypted. If this clause is not present,
the default value is ON.
NOTE
Messages exchanged with services in the same instance of SQL Server are never encrypted. However, a database master key
and the certificates for encryption are still required for conversations that use encryption if the services for the conversation
are in different databases. This allows conversations to continue in the event that one of the databases is moved to a
different instance while the conversation is in progress.
Remarks
All messages are part of a conversation. Therefore, an initiating service must begin a conversation with the target
service before sending a message to the target service. The information specified in the BEGIN DIALOG
CONVERSATION statement is similar to the address on a letter; Service Broker uses the information to deliver
messages to the correct service. The service specified in the TO SERVICE clause is the address that messages are
sent to. The service specified in the FROM SERVICE clause is the return address used for reply messages.
The target of a conversation does not need to call BEGIN DIALOG CONVERSATION. Service Broker creates a
conversation in the target database when the first message in the conversation arrives from the initiator.
Beginning a dialog creates a conversation endpoint in the database for the initiating service, but does not create a
network connection to the instance that hosts the target service. Service Broker does not establish communication
with the target of the dialog until the first message is sent.
When the BEGIN DIALOG CONVERSATION statement does not specify a related conversation or a related
conversation group, Service Broker creates a new conversation group for the new conversation.
Service Broker does not allow arbitrary groupings of conversations. All conversations in a conversation group
must have the service specified in the FROM clause as either the initiator or the target of the conversation.
The BEGIN DIALOG CONVERSATION command locks the conversation group that contains the dialog_handle
returned. When the command includes a RELATED_CONVERSATION_GROUP clause, the conversation group
for dialog_handle is the conversation group specified in the related_conversation_group_id parameter. When the
command includes a RELATED_CONVERSATION clause, the conversation group for dialog_handle is the
conversation group associated with the related_conversation_handle specified.
BEGIN DIALOG CONVERSATION is not valid in a user-defined function.
Permissions
To begin a dialog, the current user must have RECEIVE permission on the queue for the service specified in the
FROM clause of the command and REFERENCES permission for the contract specified.
Examples
A. Beginning a dialog
The following example begins a dialog conversation and stores an identifier for the dialog in @dialog_handle. The
//Adventure-Works.com/ExpenseClient service is the initiator for the dialog, and the
//Adventure-Works.com/Expenses service is the target of the dialog. The dialog follows the contract
//Adventure-Works.com/Expenses/ExpenseSubmission .
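A statement matching this description:

```sql
DECLARE @dialog_handle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @dialog_handle
   FROM SERVICE [//Adventure-Works.com/ExpenseClient]
   TO SERVICE '//Adventure-Works.com/Expenses'
   ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission];
```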
E. Beginning a dialog with an explicit lifetime, and relating the dialog to an existing conversation
The following example begins a dialog conversation and stores an identifier for the dialog in @dialog_handle . The
//Adventure-Works.com/ExpenseClient service is the initiator for the dialog, and the
//Adventure-Works.com/Expenses service is the target of the dialog. The dialog follows the contract
//Adventure-Works.com/Expenses/ExpenseSubmission . The new dialog belongs to the same conversation group that
@existing_conversation_handle belongs to. If the dialog has not been closed by the END CONVERSATION
command within 600 seconds, Service Broker ends the dialog with an error.
DECLARE @dialog_handle UNIQUEIDENTIFIER
DECLARE @existing_conversation_handle UNIQUEIDENTIFIER
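Following the description above, the statement itself would be (assuming @existing_conversation_handle has been set to the handle of an open conversation):

```sql
-- @existing_conversation_handle is assumed to hold the handle of an
-- open conversation in the desired conversation group.
BEGIN DIALOG CONVERSATION @dialog_handle
   FROM SERVICE [//Adventure-Works.com/ExpenseClient]
   TO SERVICE '//Adventure-Works.com/Expenses'
   ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseSubmission]
   WITH RELATED_CONVERSATION = @existing_conversation_handle,
        LIFETIME = 600;
```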
See Also
BEGIN CONVERSATION TIMER (Transact-SQL)
END CONVERSATION (Transact-SQL)
MOVE CONVERSATION (Transact-SQL)
sys.conversation_endpoints (Transact-SQL)
END CONVERSATION (Transact-SQL)
5/4/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Ends one side of an existing conversation.
Transact-SQL Syntax Conventions
Syntax
END CONVERSATION conversation_handle
[ [ WITH ERROR = failure_code DESCRIPTION = 'failure_text' ]
| [ WITH CLEANUP ]
]
[ ; ]
Arguments
conversation_handle
Is the conversation handle for the conversation to end.
WITH ERROR = failure_code
Is the error code. The failure_code is of type int. The failure code is a user-defined code that is included in the
error message sent to the other side of the conversation. The failure code must be greater than 0.
DESCRIPTION = failure_text
Is the error message. The failure_text is of type nvarchar(3000). The failure text is user-defined text that is
included in the error message sent to the other side of the conversation.
WITH CLEANUP
Removes all messages and catalog view entries for one side of a conversation that cannot complete normally. The
other side of the conversation is not notified of the cleanup. Microsoft SQL Server drops the conversation
endpoint, all messages for the conversation in the transmission queue, and all messages for the conversation in
the service queue. Administrators can use this option to remove conversations which cannot complete normally.
For example, if the remote service has been permanently removed, an administrator can use WITH CLEANUP to
remove conversations to that service. Do not use WITH CLEANUP in the code of a Service Broker application. If
END CONVERSATION WITH CLEANUP is run before the receiving endpoint acknowledges receiving a message,
the sending endpoint will send the message again. This could potentially re-run the dialog.
Remarks
Ending a conversation locks the conversation group that the provided conversation_handle belongs to. When a
conversation ends, Service Broker removes all messages for the conversation from the service queue.
After a conversation ends, an application can no longer send or receive messages for that conversation. Both
participants in a conversation must call END CONVERSATION for the conversation to complete. If Service
Broker has not received an end dialog message or an Error message from the other participant in the
conversation, Service Broker notifies the other participant in the conversation that the conversation has ended. In
this case, although the conversation handle for the conversation is no longer valid, the endpoint for the
conversation remains active until the instance that hosts the remote service acknowledges the message.
If Service Broker has not already processed an end dialog or error message for the conversation, Service Broker
notifies the remote side of the conversation that the conversation has ended. The messages that Service Broker
sends to the remote service depend on the options specified:
If the conversation ends without errors, and the conversation to the remote service is still active, Service
Broker sends a message of type http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog to the remote
service. Service Broker adds this message to the transmission queue in conversation order. Service Broker
sends all messages for this conversation that are currently in the transmission queue before sending this
message.
If the conversation ends with an error and the conversation to the remote service is still active, Service
Broker sends a message of type http://schemas.microsoft.com/SQL/ServiceBroker/Error to the remote
service. Service Broker drops any other messages for this conversation currently in the transmission queue.
The WITH CLEANUP clause allows a database administrator to remove conversations that cannot
complete normally. This option removes all messages and catalog view entries for the conversation. Notice
that, in this case, the remote side of the conversation receives no indication that the conversation has ended,
and may not receive messages that have been sent by an application but not yet transmitted over the
network. Avoid this option unless the conversation cannot complete normally.
After a conversation ends, a Transact-SQL SEND statement that specifies the conversation handle causes a
Transact-SQL error. If messages for this conversation arrive from the other side of the conversation, Service
Broker discards those messages.
If a conversation ends while the remote service still has unsent messages for the conversation, the remote
service drops the unsent messages. This is not considered an error, and the remote service receives no
notification that messages have been dropped.
Failure codes specified in the WITH ERROR clause must be positive numbers. Negative numbers are
reserved for Service Broker error messages.
END CONVERSATION is not valid in a user-defined function.
Permissions
To end an active conversation, the current user must be the owner of the conversation, a member of the sysadmin
fixed server role or a member of the db_owner fixed database role.
A member of the sysadmin fixed server role or a member of the db_owner fixed database role may use the WITH
CLEANUP to remove the metadata for a conversation that has already completed.
Examples
A. Ending a conversation
The following example ends the dialog specified by @dialog_handle , reporting a user-defined error to the other side of the conversation when @ErrorSave indicates that an error occurred.
IF (@ErrorSave <> 0)
BEGIN
ROLLBACK TRANSACTION ;
SET @ErrorDesc = N'An error has occurred.' ;
END CONVERSATION @dialog_handle
WITH ERROR = @ErrorSave DESCRIPTION = @ErrorDesc ;
END
ELSE
COMMIT TRANSACTION ;
See Also
BEGIN CONVERSATION TIMER (Transact-SQL)
BEGIN DIALOG CONVERSATION (Transact-SQL)
sys.conversation_endpoints (Transact-SQL)
GET CONVERSATION GROUP (Transact-SQL)
5/4/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Returns the conversation group identifier for the next message to be received, and locks the conversation group
for the conversation that contains the message. The conversation group identifier can be used to retrieve
conversation state information before retrieving the message itself.
Transact-SQL Syntax Conventions
Syntax
[ WAITFOR ( ]
GET CONVERSATION GROUP @conversation_group_id
FROM <queue>
[ ) ] [ , TIMEOUT timeout ]
[ ; ]
<queue> ::=
{
[ database_name . [ schema_name ] . | schema_name . ] queue_name
}
Arguments
WAITFOR
Specifies that the GET CONVERSATION GROUP statement waits for a message to arrive on the queue if no
messages are currently present.
@conversation_group_id
Is a variable used to store the conversation group ID returned by the GET CONVERSATION GROUP statement.
The variable must be of type uniqueidentifier. If there are no conversation groups available, the variable is set to
NULL.
FROM
Specifies the queue to get the conversation group from.
database_name
Is the name of the database that contains the queue to get the conversation group from. When no database_name
is provided, defaults to the current database.
schema_name
Is the name of the schema that owns the queue to get the conversation group from. When no schema_name is
provided, defaults to the default schema for the current user.
queue_name
Is the name of the queue to get the conversation group from.
TIMEOUT timeout
Specifies the length of time, in milliseconds, that Service Broker waits for a message to arrive on the queue. This
clause may only be used with the WAITFOR clause. If a statement that uses WAITFOR does not include this clause
or the timeout is -1, the wait time is unlimited. If the timeout expires, GET CONVERSATION GROUP sets the
@conversation_group_id variable to NULL.
Remarks
IMPORTANT
If the GET CONVERSATION GROUP statement is not the first statement in a batch or stored procedure, the preceding
statement must be terminated with a semicolon (;), the Transact-SQL statement terminator.
If the queue specified in the GET CONVERSATION GROUP statement is unavailable, the statement fails with a
Transact-SQL error.
This statement returns the next conversation group where all of the following is true:
The conversation group can be successfully locked.
The conversation group has messages available in the queue.
The conversation group has the highest priority level of all the conversation groups that meet the
previously-listed criteria. The priority level of a conversation group is the highest priority level assigned to
any conversation that is a member of the group and has messages in the queue.
Successive calls to GET CONVERSATION GROUP within the same transaction may lock more than one
conversation group. If no conversation group is available, the statement returns NULL as the conversation
group identifier.
When the WAITFOR clause is specified, the statement waits for the timeout specified, or until a
conversation group is available. If the queue is dropped while the statement is waiting, the statement
immediately returns an error.
GET CONVERSATION GROUP is not valid in a user-defined function.
Permissions
To get a conversation group identifier from a queue, the current user must have RECEIVE permission on the
queue.
Examples
A. Getting a conversation group, waiting indefinitely
The following example sets @conversation_group_id to the conversation group identifier for the next available
message on ExpenseQueue . The command waits until a message becomes available.
WAITFOR (
GET CONVERSATION GROUP @conversation_group_id
FROM ExpenseQueue
) ;
B. Getting a conversation group, waiting up to one minute
The following example sets @conversation_group_id to the conversation group identifier for the next available
message on ExpenseQueue . If no message becomes available within one minute, @conversation_group_id is set to NULL.
WAITFOR (
GET CONVERSATION GROUP @conversation_group_id
FROM ExpenseQueue ),
TIMEOUT 60000 ;
See Also
BEGIN DIALOG CONVERSATION (Transact-SQL)
MOVE CONVERSATION (Transact-SQL)
GET_TRANSMISSION_STATUS (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Returns the status for the last transmission for one side of a conversation.
Transact-SQL Syntax Conventions
Syntax
GET_TRANSMISSION_STATUS ( conversation_handle )
Arguments
conversation_handle
Is the conversation handle for the conversation. This parameter is of type uniqueidentifier.
Return Types
nchar
Remarks
Returns a string describing the status of the last transmission attempt for the specified conversation. Returns an
empty string if the last transmission attempt succeeded, if no transmission attempt has yet been made, or if the
conversation_handle does not exist.
The information returned by this function is the same information displayed in the last_transmission_error column
of the management view sys.transmission_queue. However, this function can be used to find the transmission
status for conversations that do not currently have messages in the transmission queue.
NOTE
GET_TRANSMISSION_STATUS does not provide information for messages that do not have a conversation endpoint in the
current instance. That is, no information is available for messages to be forwarded.
Examples
The following example reports the transmission status for the conversation with the conversation handle
58ef1d2d-c405-42eb-a762-23ff320bddf0 .
SELECT Status =
GET_TRANSMISSION_STATUS('58ef1d2d-c405-42eb-a762-23ff320bddf0') ;
The status returned in this case indicates that SQL Server is not configured to allow Service Broker to communicate over the network.
See Also
sys.conversation_endpoints (Transact-SQL)
sys.transmission_queue (Transact-SQL)
MOVE CONVERSATION (Transact-SQL)
5/4/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Moves a conversation to a different conversation group.
Transact-SQL Syntax Conventions
Syntax
MOVE CONVERSATION conversation_handle
TO conversation_group_id
[ ; ]
Arguments
conversation_handle
Is a variable or constant containing the conversation handle of the conversation to be moved. conversation_handle
must be of type uniqueidentifier.
TO conversation_group_id
Is a variable or constant containing the identifier of the conversation group where the conversation is to be moved.
conversation_group_id must be of type uniqueidentifier.
Remarks
The MOVE CONVERSATION statement moves the conversation specified by conversation_handle to the
conversation group identified by conversation_group_id. Dialogs can only be redirected between conversation
groups that are associated with the same queue.
IMPORTANT
If the MOVE CONVERSATION statement is not the first statement in a batch or stored procedure, the preceding statement
must be terminated with a semicolon (;), the Transact-SQL statement terminator.
The MOVE CONVERSATION statement locks the conversation group associated with conversation_handle and
the conversation group specified by conversation_group_id until the transaction containing the statement commits
or rolls back.
MOVE CONVERSATION is not valid in a user-defined function.
Permissions
To move a conversation, the current user must be the owner of the conversation and the conversation group, or be
a member of the sysadmin fixed server role, or be a member of the db_owner fixed database role.
Examples
The following example moves a conversation to a different conversation group.
SET @conversation_handle =
<retrieve conversation handle from database> ;
SET @conversation_group_id =
<retrieve conversation group ID from database> ;
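With both variables set, the move itself follows the syntax above:

```sql
-- Move the conversation into the target conversation group.
-- Both groups must be associated with the same queue.
MOVE CONVERSATION @conversation_handle
   TO @conversation_group_id;
```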
See Also
BEGIN DIALOG CONVERSATION (Transact-SQL)
GET CONVERSATION GROUP (Transact-SQL)
END CONVERSATION (Transact-SQL)
sys.conversation_groups (Transact-SQL)
sys.conversation_endpoints (Transact-SQL)
RECEIVE (Transact-SQL)
5/3/2018 • 10 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Retrieves one or more messages from a queue. Depending on the retention setting for the queue, either removes
the message from the queue or updates the status of the message in the queue.
Transact-SQL Syntax Conventions
Syntax
[ WAITFOR ( ]
RECEIVE [ TOP ( n ) ]
<column_specifier> [ ,...n ]
FROM <queue>
[ INTO table_variable ]
[ WHERE { conversation_handle = conversation_handle
| conversation_group_id = conversation_group_id } ]
[ ) ] [ , TIMEOUT timeout ]
[ ; ]
<column_specifier> ::=
{ *
| { column_name | expression } [ [ AS ] column_alias ]
| column_alias = expression
} [ ,...n ]
<queue> ::=
{
[ database_name . [ schema_name ] . | schema_name . ]
queue_name
}
Arguments
WAITFOR
Specifies that the RECEIVE statement waits for a message to arrive on the queue, if no messages are currently
present.
TOP ( n )
Specifies the maximum number of messages to be returned. If this clause is not specified, all messages are
returned that meet the statement criteria.
*
Specifies that the result set contains all columns in the queue.
column_name
The name of a column to include in the result set.
expression
A column name, constant, function, or any combination of column names, constants, and functions connected by
an operator.
column_alias
An alternative name to replace the column name in the result set.
FROM
Specifies the queue that contains the messages to retrieve.
database_name
The name of the database that contains the queue to receive messages from. When no database name is
provided, defaults to the current database.
schema_name
The name of the schema that owns the queue to receive messages from. When no schema name is provided,
defaults to the default schema for the current user.
queue_name
The name of the queue to receive messages from.
INTO table_variable
Specifies the table variable that RECEIVE places the messages into. The table variable must have the same
number of columns as are in the messages. The data type of each column in the table variable must be implicitly
convertible to the data type of the corresponding column in the messages. If INTO is not specified, the messages
are returned as a result set.
WHERE
Specifies the conversation or conversation group for the received messages. If omitted, returns messages from the
next available conversation group.
conversation_handle = conversation_handle
Specifies the conversation for received messages. The conversation handle provided must be a uniqueidentifier,
or a type that is convertible to uniqueidentifier.
conversation_group_id = conversation_group_id
Specifies the conversation group for received messages. The conversation group ID that is provided must be a
uniqueidentifier, or a type convertible to uniqueidentifier.
TIMEOUT timeout
Specifies the amount of time, in milliseconds, for the statement to wait for a message. This clause can only be used
with the WAITFOR clause. If this clause is not specified, or the time-out is -1, the wait time is unlimited. If the
time-out expires, RECEIVE returns an empty result set.
Remarks
IMPORTANT
If the RECEIVE statement is not the first statement in a batch or stored procedure, the preceding statement must be ended
with a semi-colon (;).
The RECEIVE statement reads messages from a queue and returns a result set. The result set consists of zero or
more rows, each of which contains one message. If the INTO clause is not used, and column_specifier does not
assign the values to local variables, the statement returns a result set to the calling program.
The messages that are returned by the RECEIVE statement can be of different message types. Applications can
use the message_type_name column to route each message to code that handles the associated message type.
There are two classes of message types:
Application-defined message types that were created by using the CREATE MESSAGE TYPE statement.
The set of application-defined message types that are allowed in a conversation are defined by the Service
Broker contract that is specified for the conversation.
Service Broker system messages that return status or error information.
The RECEIVE statement removes received messages from the queue unless the queue specifies message
retention. When the RETENTION setting for the queue is ON, the RECEIVE statement updates the status
column to 0 and leaves the messages in the queue. When a transaction that contains a RECEIVE statement
rolls back, all changes to the queue in the transaction are also rolled back, returning messages to the queue.
All messages that are returned by a RECEIVE statement belong to the same conversation group. The
RECEIVE statement locks the conversation group for the messages that are returned until the transaction
that contains the statement finishes. A RECEIVE statement returns messages that have a status of 1. The
result set returned by a RECEIVE statement is implicitly ordered:
If messages from multiple conversations meet the WHERE clause conditions, the RECEIVE statement
returns all messages from one conversation before it returns messages for any other conversation. The
conversations are processed in descending priority level order.
For a given conversation, a RECEIVE statement returns messages in ascending
message_sequence_number order.
The WHERE clause of the RECEIVE statement can only contain one search condition that uses either
conversation_handle or conversation_group_id. The search condition cannot contain one or more of
the other columns in the queue. The conversation_handle or conversation_group_id cannot be an
expression. The set of messages that is returned depends on the conditions that are specified in the WHERE
clause:
If conversation_handle is specified, RECEIVE returns all messages from the specified conversation that
are available in the queue.
If conversation_group_id is specified, RECEIVE returns all messages that are available in the queue from
any conversation that is a member of the specified conversation group.
If there is no WHERE clause, RECEIVE determines which conversation group:
Has one or more messages in the queue.
Has not been locked by another RECEIVE statement.
Has the highest priority level of all the conversation groups that meet these criteria.
RECEIVE then returns all messages available in the queue from any conversation that is a member
of the selected conversation group.
If the conversation handle or conversation group identifier specified in the WHERE clause does not exist, or
is not associated with the specified queue, the RECEIVE statement returns an error.
If the queue specified in the RECEIVE statement has the queue status set to OFF, the statement fails with a
Transact-SQL error.
When the WAITFOR clause is specified, the statement waits for the specified time out, or until a result set is
available. If the queue is dropped or the status of the queue is set to OFF while the statement is waiting, the
statement immediately returns an error. If the RECEIVE statement specifies a conversation group or
conversation handle and the service for that conversation is dropped or moved to another queue, the
RECEIVE statement reports a Transact-SQL error.
RECEIVE is not valid in a user-defined function.
The RECEIVE statement has no priority starvation prevention. If a single RECEIVE statement locks a
conversation group and retrieves a lot of messages from low priority conversations, no messages can be
received from high priority conversations in the group. To prevent this, when you are retrieving messages
from low priority conversations, use the TOP clause to limit the number of messages retrieved by each
RECEIVE statement.
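For example, capping each batch at a fixed number of messages might look like the following (the queue name and batch size are illustrative):

```sql
-- Limit each batch to 10 messages so that one conversation group
-- with many low priority messages cannot monopolize the reader.
WAITFOR (
    RECEIVE TOP (10)
        conversation_handle,
        message_type_name,
        message_body
    FROM ExpenseQueue
), TIMEOUT 60000;
```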
Queue Columns
The following values appear in the columns of a queue. The status column of a message can be
0 (Ready), 1 (Received message), 2 (Not yet complete), or 3 (Retained sent message). The
validation column can be E (Empty), N (None), or X (XML).
Permissions
To receive a message, the current user must have RECEIVE permission on the queue.
Examples
A. Receiving messages with different filters
The following examples receive available messages from the ExpenseQueue queue: for a specified
conversation, for a specified conversation group, and by using WAITFOR to wait until a message is
available. Each statement returns the messages as a result set.
RECEIVE *
FROM ExpenseQueue
WHERE conversation_handle = @conversation_handle ;
SET @conversation_group_id =
<retrieve conversation group ID from database> ;
RECEIVE *
FROM ExpenseQueue
WHERE conversation_group_id = @conversation_group_id ;
WAITFOR (
RECEIVE *
FROM ExpenseQueue) ;
WAITFOR (
RECEIVE message_type_name,
CASE
WHEN validation = 'X' THEN CAST(message_body as XML)
ELSE NULL
END AS message_body
FROM ExpenseQueue ),
TIMEOUT 60000 ;
J. Receiving a message, extracting data from the message body, retrieving conversation state
The following example receives the next available message for the next available conversation group in the
ExpenseQueue queue. When the message is of type //Adventure-Works.com/Expenses/SubmitExpense , the statement
extracts the employee ID and a list of items from the message body. The statement also retrieves state for the
conversation from the ConversationState table.
WAITFOR(
RECEIVE
TOP(1)
message_type_name,
COALESCE(
(SELECT TOP(1) ConversationState
FROM CurrentConversations AS cc
WHERE cc.ConversationHandle = conversation_handle),
'NEW')
AS ConversationState,
COALESCE(
(SELECT TOP(1) ErrorCount
FROM CurrentConversations AS cc
WHERE cc.ConversationHandle = conversation_handle),
0)
AS ConversationErrors,
CASE WHEN message_type_name = N'//Adventure-Works.com/Expenses/SubmitExpense'
THEN CAST(message_body AS XML).value(
'declare namespace rpt = "http://Adventure-Works.com/schemas/expenseReport";
(/rpt:ExpenseReport/rpt:EmployeeID)[1]', 'nvarchar(20)')
ELSE NULL
END AS EmployeeID,
CASE WHEN message_type_name = N'//Adventure-Works.com/Expenses/SubmitExpense'
THEN CAST(message_body AS XML).query(
'declare namespace rpt = "http://Adventure-Works.com/schemas/expenseReport";
/rpt:ExpenseReport/rpt:ItemDetail')
ELSE NULL
END AS ItemList
FROM ExpenseQueue
), TIMEOUT 60000 ;
See Also
BEGIN DIALOG CONVERSATION (Transact-SQL )
BEGIN CONVERSATION TIMER (Transact-SQL )
END CONVERSATION (Transact-SQL )
CREATE CONTRACT (Transact-SQL )
CREATE MESSAGE TYPE (Transact-SQL )
SEND (Transact-SQL )
CREATE QUEUE (Transact-SQL )
ALTER QUEUE (Transact-SQL )
DROP QUEUE (Transact-SQL )
SEND (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sends a message, using one or more existing conversations.
Transact-SQL Syntax Conventions
Syntax
SEND
ON CONVERSATION [(]conversation_handle [,.. @conversation_handle_n][)]
[ MESSAGE TYPE message_type_name ]
[ ( message_body_expression ) ]
[ ; ]
Arguments
ON CONVERSATION conversation_handle [.. @conversation_handle_n]
Specifies the conversations that the message belongs to. The conversation_handle must contain a valid
conversation identifier. The same conversation handle cannot be used more than once.
MESSAGE TYPE message_type_name
Specifies the message type of the sent message. This message type must be included in the service contracts used
by these conversations. These contracts must allow the message type to be sent from this side of the conversation.
For example, the target services of the conversations may only send messages specified in the contract as SENT
BY TARGET or SENT BY ANY. If this clause is omitted, the message is of the message type DEFAULT.
message_body_expression
Provides an expression representing the message body. The message_body_expression is optional. However, if the
message_body_expression is present the expression must be of a type that can be converted to varbinary(max).
The expression cannot be NULL. If this clause is omitted, the message body is empty.
Remarks
IMPORTANT
If the SEND statement is not the first statement in a batch or stored procedure, the preceding statement must be terminated
with a semicolon (;).
The SEND statement transmits a message from the services on one end of one or more Service Broker
conversations to the services on the other end of these conversations. The RECEIVE statement is then used to
retrieve the sent message from the queues associated with the target services.
The conversation handles supplied to the ON CONVERSATION clause come from one of three sources:
When sending a message that is not in response to a message received from another service, use the
conversation handle returned from the BEGIN DIALOG statement that created the conversation.
When sending a message that is a response to a message previously received from another service, use the
conversation handle returned by the RECEIVE statement that returned the original message.
In many cases the code that contains the SEND statement is separate from the code that contains either the
BEGIN DIALOG or RECEIVE statements supplying the conversation handle. In these cases, the conversation
handle must be one of the data items in the state information passed to the code that contains the SEND
statement.
Messages that are sent to services in other instances of the SQL Server Database Engine are stored in a
transmission queue in the current database until they can be transmitted to the service queues in the
remote instances. Messages sent to services in the same instance of the Database Engine are put directly
into the queues associated with these services. If a condition prevents a local message from being put
directly in the target service queue, it can be stored in the transmission queue until the condition is resolved.
Examples of when this occurs include some types of errors or the target service queue being inactive. You
can use the sys.transmission_queue system view to see the messages in the transmission queue.
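For instance, a quick way to inspect pending outgoing messages is to query the view directly (column list abbreviated):

```sql
-- Show messages waiting to be transmitted, oldest first.
SELECT conversation_handle,
       to_service_name,
       enqueue_time,
       transmission_status
FROM sys.transmission_queue
ORDER BY enqueue_time;
```

The transmission_status column explains why a message has not yet been delivered, which is useful when diagnosing stalled conversations.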
SEND is an atomic statement: if a SEND statement sending a message on multiple conversations
fails, for example because a conversation is in an errored state, no messages are stored in the
transmission queue or put in any target service queue.
Service Broker optimizes the storage and transmission of messages that are sent on multiple conversations
in the same SEND statement.
Messages in the transmission queues for an instance are transmitted in sequence based on:
The priority level of their associated conversation endpoint.
Within priority level, their send sequence in the conversation.
Priority levels specified in conversation priorities are only applied to messages in the transmission queue if
the HONOR_BROKER_PRIORITY database option is set to ON. If HONOR_BROKER_PRIORITY is set to
OFF, all messages put in the transmission queue for that database are assigned the default priority level of
5. Priority levels are not applied to a SEND where the messages are put directly into a service queue in the
same instance of the Database Engine.
The SEND statement separately locks each conversation on which a message is sent to ensure per-
conversation ordered delivery.
SEND is not valid in a user-defined function.
Permissions
To send a message, the current user must have RECEIVE permission on the queue of every service that sends the
message.
Examples
The following example starts a dialog and sends an XML message on the dialog. To send the message, the
example converts the xml object to varbinary(max).
DECLARE @dialog_handle UNIQUEIDENTIFIER,
@ExpenseReport XML ;
SET @ExpenseReport = < construct message as appropriate for the application > ;
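A minimal sketch of the dialog and send steps, assuming service, contract, and message type names consistent with the other examples in this section:

```sql
-- Begin a dialog between the client and processing services,
-- then send the expense report on that conversation,
-- converting the xml value to varbinary(max).
BEGIN DIALOG @dialog_handle
    FROM SERVICE [//Adventure-Works.com/Expenses/ExpenseClient]
    TO SERVICE '//Adventure-Works.com/Expenses/ExpenseProcessing'
    ON CONTRACT [//Adventure-Works.com/Expenses/ExpenseProcessing];

SEND ON CONVERSATION @dialog_handle
    MESSAGE TYPE [//Adventure-Works.com/Expenses/SubmitExpense]
    (CAST(@ExpenseReport AS VARBINARY(MAX)));
```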
The following example starts three dialogs and sends an XML message on each of them.
SET @OrderMsg = < construct message as appropriate for the application > ;
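A sketch of sending one message on three conversations at once (the handle variables and message type name are illustrative; sending on multiple conversations in one SEND is supported starting with SQL Server 2012):

```sql
-- One SEND can target several conversations; Service Broker
-- optimizes storage of the shared message body.
SEND ON CONVERSATION (@dialog_handle1, @dialog_handle2, @dialog_handle3)
    MESSAGE TYPE [//Adventure-Works.com/Expenses/SubmitOrder]
    (CAST(@OrderMsg AS VARBINARY(MAX)));
```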
See Also
BEGIN DIALOG CONVERSATION (Transact-SQL )
END CONVERSATION (Transact-SQL )
RECEIVE (Transact-SQL )
sys.transmission_queue (Transact-SQL )
SET Statements (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
The Transact-SQL programming language provides several SET statements that change the current session
handling of specific information. The SET statements are grouped into the categories shown in the following
table.
For information about setting local variables with the SET statement, see SET @local_variable (Transact-
SQL ).
CATEGORY STATEMENTS
Date and time statements
SET DATEFORMAT
Locking statements
SET LOCK_TIMEOUT
Miscellaneous statements
SET CURSOR_CLOSE_ON_COMMIT
SET FIPS_FLAGGER
SET IDENTITY_INSERT
SET LANGUAGE
SET OFFSETS
SET QUOTED_IDENTIFIER
Query execution statements
SET ARITHIGNORE
SET FMTONLY
SET NOCOUNT
SET NOEXEC
SET NUMERIC_ROUNDABORT
SET PARSEONLY
SET QUERY_GOVERNOR_COST_LIMIT
SET ROWCOUNT
SET TEXTSIZE
ISO settings statements
SET ANSI_NULL_DFLT_OFF
SET ANSI_NULL_DFLT_ON
SET ANSI_NULLS
SET ANSI_PADDING
SET ANSI_WARNINGS
Statistics statements
SET SHOWPLAN_ALL
SET SHOWPLAN_TEXT
SET SHOWPLAN_XML
SET STATISTICS IO
Transactions statements
SET REMOTE_PROC_TRANSACTIONS
SET XACT_ABORT
Considerations When You Use the SET Statements
All SET statements are implemented at execute or run time, except for SET FIPS_FLAGGER, SET
OFFSETS, SET PARSEONLY, and SET QUOTED_IDENTIFIER. These statements are implemented at
parse time.
If a SET statement is run in a stored procedure or trigger, the value of the SET option is restored after
control is returned from the stored procedure or trigger. Also, if a SET statement is specified in a
dynamic SQL string that is run by using either sp_executesql or EXECUTE, the value of the SET
option is restored after control is returned from the batch specified in the dynamic SQL string.
Stored procedures execute with the SET settings specified at execute time except for SET
ANSI_NULLS and SET QUOTED_IDENTIFIER. Stored procedures specifying SET ANSI_NULLS or
SET QUOTED_IDENTIFIER use the setting specified at stored procedure creation time. If SET
ANSI_NULLS or SET QUOTED_IDENTIFIER is used inside a stored procedure, that setting is ignored.
The user options setting of sp_configure allows for server-wide settings and works across multiple
databases. This setting also behaves like an explicit SET statement, except that it occurs at login time.
Database settings set by using ALTER DATABASE are valid only at the database level and take effect
only if explicitly set. Database settings override instance option settings that are set by using
sp_configure.
For any one of the SET statements with ON and OFF settings, you can specify either an ON or OFF
setting for multiple SET options.
NOTE
This does not apply to the statistics related SET options.
SET ANSI_DEFAULTS (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls a group of SQL Server settings that collectively specify some ISO standard behavior.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
SET ANSI_DEFAULTS { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET ANSI_DEFAULTS ON
Remarks
SET ANSI_DEFAULTS is a server-side setting that the client does not modify. The client manages its own settings.
By default, these settings are the opposite of the server setting. Users should not modify the server setting. To
change the client behavior, users should use SQL_COPT_SS_PRESERVE_CURSORS. For more information,
see SQLSetConnectAttr.
When enabled (ON ), this option enables the following ISO settings:
SET ANSI_NULLS
SET ANSI_NULL_DFLT_ON
SET ANSI_PADDING
SET ANSI_WARNINGS
SET CURSOR_CLOSE_ON_COMMIT
SET IMPLICIT_TRANSACTIONS
SET QUOTED_IDENTIFIER
Together, these ISO standard SET options define the query processing environment for the duration of the work
session of the user, a running trigger, or a stored procedure. However, these SET options do not include all the
options required to comply with the ISO standard.
When dealing with indexes on computed columns and indexed views, four of these defaults (ANSI_NULLS,
ANSI_PADDING, ANSI_WARNINGS, and QUOTED_IDENTIFIER ) must be set to ON. These defaults are among
seven SET options that must be assigned the required values when you are creating and changing indexes on
computed columns and indexed views. The other SET options are ARITHABORT (ON ),
CONCAT_NULL_YIELDS_NULL (ON ), and NUMERIC_ROUNDABORT (OFF ). For more information about the
required SET option settings with indexed views and indexes on computed columns, see "Considerations When
You Use the SET Statements" in SET Statements (Transact-SQL ).
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server
automatically set ANSI_DEFAULTS to ON when connecting. The driver and Provider then set
CURSOR_CLOSE_ON_COMMIT and IMPLICIT_TRANSACTIONS to OFF. The OFF settings for SET
CURSOR_CLOSE_ON_COMMIT and SET IMPLICIT_TRANSACTIONS can be configured in ODBC data
sources, in ODBC connection attributes, or in OLE DB connection properties that are set in the application before
connecting to SQL Server. The default for SET ANSI_DEFAULTS is OFF for connections from DB-Library
applications.
When SET ANSI_DEFAULTS is issued, SET QUOTED_IDENTIFIER is set at parse time, and the following options
are set at execute time:
SET ANSI_NULLS
SET ANSI_NULL_DFLT_ON
SET ANSI_PADDING
SET ANSI_WARNINGS
SET CURSOR_CLOSE_ON_COMMIT
SET IMPLICIT_TRANSACTIONS
Permissions
Requires membership in the public role.
Examples
The following example sets SET ANSI_DEFAULTS ON and uses the DBCC USEROPTIONS statement to display the
settings that are affected.
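A minimal version of that example might look like the following:

```sql
SET ANSI_DEFAULTS ON;
GO
-- DBCC USEROPTIONS lists the SET options active for the current
-- session, including those switched on by ANSI_DEFAULTS.
DBCC USEROPTIONS;
GO
```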
See Also
DBCC USEROPTIONS (Transact-SQL )
SET Statements (Transact-SQL )
SET ANSI_NULL_DFLT_ON (Transact-SQL )
SET ANSI_NULLS (Transact-SQL )
SET ANSI_PADDING (Transact-SQL )
SET ANSI_WARNINGS (Transact-SQL )
SET CURSOR_CLOSE_ON_COMMIT (Transact-SQL )
SET IMPLICIT_TRANSACTIONS (Transact-SQL )
SET QUOTED_IDENTIFIER (Transact-SQL )
SET ANSI_NULL_DFLT_OFF (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Alters the behavior of the session to override default nullability of new columns when the ANSI null default option
for the database is true. For more information about setting the value for ANSI null default, see ALTER
DATABASE (Transact-SQL ).
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
SET ANSI_NULL_DFLT_OFF { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET ANSI_NULL_DFLT_OFF OFF
Remarks
This setting only affects the nullability of new columns when the nullability of the column is not specified in the
CREATE TABLE and ALTER TABLE statements. By default, when SET ANSI_NULL_DFLT_OFF is ON, new
columns that are created by using the ALTER TABLE and CREATE TABLE statements are NOT NULL if the
nullability status of the column is not explicitly specified. SET ANSI_NULL_DFLT_OFF does not affect columns
that are created by using an explicit NULL or NOT NULL.
SET ANSI_NULL_DFLT_OFF and SET ANSI_NULL_DFLT_ON cannot both be set ON at the same time. If one
option is set ON, the other option is set OFF. Therefore, either SET ANSI_NULL_DFLT_OFF or SET
ANSI_NULL_DFLT_ON can be set ON, or both can be set OFF. If either option is ON, that setting (SET
ANSI_NULL_DFLT_OFF or SET ANSI_NULL_DFLT_ON ) takes effect. If both options are set OFF, SQL Server
uses the value of the is_ansi_null_default_on column in the sys.databases catalog view.
For a more reliable operation of Transact-SQL scripts that are used in databases with different nullability settings,
it is better to always specify NULL or NOT NULL in CREATE TABLE and ALTER TABLE statements.
The setting of SET ANSI_NULL_DFLT_OFF is set at execute or run time and not at parse time.
To view the current setting for this setting, run the following query.
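Following the pattern used for the other ANSI options in this section, the setting can be read from the @@OPTIONS bitmask (2048 is the documented bit for ANSI_NULL_DFLT_OFF):

```sql
DECLARE @ANSI_NULL_DFLT_OFF VARCHAR(3) = 'OFF';
IF ( (2048 & @@OPTIONS) = 2048 ) SET @ANSI_NULL_DFLT_OFF = 'ON';
SELECT @ANSI_NULL_DFLT_OFF AS ANSI_NULL_DFLT_OFF;
```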
Permissions
Requires membership in the public role.
Examples
The following example shows the effects of SET ANSI_NULL_DFLT_OFF with both settings for the ANSI null default
database option.
USE AdventureWorks2012;
GO
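A condensed sketch of the demonstration, assuming the AdventureWorks2012 sample database and an illustrative table name:

```sql
-- With the database option ANSI_NULL_DEFAULT set to true, the
-- session-level ANSI_NULL_DFLT_OFF ON overrides it, so new
-- columns default to NOT NULL.
ALTER DATABASE AdventureWorks2012 SET ANSI_NULL_DEFAULT ON;
GO
SET ANSI_NULL_DFLT_OFF ON;
GO
CREATE TABLE t1 (a TINYINT);          -- column a is created NOT NULL
GO
INSERT INTO t1 (a) VALUES (NULL);     -- fails: NULL not allowed
GO
DROP TABLE t1;
ALTER DATABASE AdventureWorks2012 SET ANSI_NULL_DEFAULT OFF;
GO
```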
SET ANSI_NULL_DFLT_ON (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Modifies the behavior of the session to override default nullability of new columns when the ANSI null default
option for the database is false. For more information about setting the value for ANSI null default, see ALTER
DATABASE (Transact-SQL ).
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
SET ANSI_NULL_DFLT_ON { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET ANSI_NULL_DFLT_ON ON
Remarks
This setting only affects the nullability of new columns when the nullability of the column is not specified in the
CREATE TABLE and ALTER TABLE statements. When SET ANSI_NULL_DFLT_ON is ON, new columns created
by using the ALTER TABLE and CREATE TABLE statements allow null values if the nullability status of the column
is not explicitly specified. SET ANSI_NULL_DFLT_ON does not affect columns created with an explicit NULL or
NOT NULL.
SET ANSI_NULL_DFLT_OFF and SET ANSI_NULL_DFLT_ON cannot both be set ON at the same time. If one
option is set ON, the other option is set OFF. Therefore, either ANSI_NULL_DFLT_OFF or
ANSI_NULL_DFLT_ON can be set ON, or both can be set OFF. If either option is ON, that setting (SET
ANSI_NULL_DFLT_OFF or SET ANSI_NULL_DFLT_ON ) takes effect. If both options are set OFF, SQL Server
uses the value of the is_ansi_null_default_on column in the sys.databases catalog view.
For a more reliable operation of Transact-SQL scripts that are used in databases with different nullability settings,
it is better to specify NULL or NOT NULL in CREATE TABLE and ALTER TABLE statements.
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server
automatically set ANSI_NULL_DFLT_ON to ON when connecting. The default for SET ANSI_NULL_DFLT_ON is
OFF for connections from DB-Library applications.
When SET ANSI_DEFAULTS is ON, SET ANSI_NULL_DFLT_ON is enabled.
The setting of SET ANSI_NULL_DFLT_ON is set at execute or run time and not at parse time.
The setting of SET ANSI_NULL_DFLT_ON does not apply when tables are created using the SELECT INTO
statement.
To view the current setting for this setting, run the following query.
DECLARE @ANSI_NULL_DFLT_ON VARCHAR(3) = 'OFF';
IF ( (1024 & @@OPTIONS) = 1024 ) SET @ANSI_NULL_DFLT_ON = 'ON';
SELECT @ANSI_NULL_DFLT_ON AS ANSI_NULL_DFLT_ON;
Permissions
Requires membership in the public role.
Examples
The following example shows the effects of SET ANSI_NULL_DFLT_ON with both settings for the ANSI null default
database option.
USE AdventureWorks2012;
GO
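A condensed sketch of the demonstration, mirroring the ANSI_NULL_DFLT_OFF example (the table name is illustrative):

```sql
-- With the database option ANSI_NULL_DEFAULT set to false, the
-- session-level ANSI_NULL_DFLT_ON ON overrides it, so new
-- columns default to allowing NULL.
ALTER DATABASE AdventureWorks2012 SET ANSI_NULL_DEFAULT OFF;
GO
SET ANSI_NULL_DFLT_ON ON;
GO
CREATE TABLE t1 (a TINYINT);          -- column a allows NULL
GO
INSERT INTO t1 (a) VALUES (NULL);     -- succeeds
GO
DROP TABLE t1;
GO
```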
See Also
ALTER TABLE (Transact-SQL )
CREATE TABLE (Transact-SQL )
SET Statements (Transact-SQL )
SET ANSI_DEFAULTS (Transact-SQL )
SET ANSI_NULL_DFLT_OFF (Transact-SQL )
SET ANSI_NULLS (Transact-SQL)
5/30/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies ISO compliant behavior of the Equals (=) and Not Equal To (<>) comparison operators when they are
used with null values in SQL Server 2017.
IMPORTANT
In a future version of SQL Server, ANSI_NULLS will be ON and any applications that explicitly set the option to OFF will
generate an error. Avoid using this feature in new development work, and plan to modify applications that currently use this
feature.
Syntax
-- Syntax for SQL Server
SET ANSI_NULLS { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET ANSI_NULLS ON
Remarks
When SET ANSI_NULLS is ON, a SELECT statement that uses WHERE column_name = NULL returns zero rows
even if there are null values in column_name. A SELECT statement that uses WHERE column_name <> NULL
returns zero rows even if there are nonnull values in column_name.
When SET ANSI_NULLS is OFF, the Equals (=) and Not Equal To (<>) comparison operators do not follow the
ISO standard. A SELECT statement that uses WHERE column_name = NULL returns the rows that have null
values in column_name. A SELECT statement that uses WHERE column_name <> NULL returns the rows that
have nonnull values in the column. Also, a SELECT statement that uses WHERE column_name <> XYZ_value
returns all rows that are not XYZ_value and that are not NULL.
When SET ANSI_NULLS is ON, all comparisons against a null value evaluate to UNKNOWN. When SET
ANSI_NULLS is OFF, comparisons of all data against a null value evaluate to TRUE if the data value is NULL. If
SET ANSI_NULLS is not specified, the setting of the ANSI_NULLS option of the current database applies. For
more information about the ANSI_NULLS database option, see ALTER DATABASE (Transact-SQL ).
The following table shows how the setting of ANSI_NULLS affects the results of a number of Boolean expressions
using null and non-null values.
BOOLEAN EXPRESSION SET ANSI_NULLS ON SET ANSI_NULLS OFF
NULL = NULL UNKNOWN TRUE
1 = NULL UNKNOWN FALSE
NULL <> 1 UNKNOWN TRUE
SET ANSI_NULLS ON affects a comparison only if one of the operands of the comparison is either a variable that
is NULL or a literal NULL. If both sides of the comparison are columns or compound expressions, the setting does
not affect the comparison.
For a script to work as intended, regardless of the ANSI_NULLS database option or the setting of SET
ANSI_NULLS, use IS NULL and IS NOT NULL in comparisons that might contain null values.
SET ANSI_NULLS should be set to ON for executing distributed queries.
SET ANSI_NULLS must also be ON when you are creating or changing indexes on computed columns or indexed
views. If SET ANSI_NULLS is OFF, any CREATE, UPDATE, INSERT, and DELETE statements on tables with
indexes on computed columns or indexed views will fail. SQL Server returns an error that lists all SET options that
violate the required values. Also, when you execute a SELECT statement, if SET ANSI_NULLS is OFF, SQL Server
ignores the index values on computed columns or views and resolves the select operation as if there were no such
indexes on the tables or views.
NOTE
ANSI_NULLS is one of seven SET options that must be set to required values when dealing with indexes on computed
columns or indexed views. The options ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, QUOTED_IDENTIFIER, and
CONCAT_NULL_YIELDS_NULL must also be set to ON, and NUMERIC_ROUNDABORT must be set to OFF.
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server
automatically set ANSI_NULLS to ON when connecting. This setting can be configured in ODBC data sources, in
ODBC connection attributes, or in OLE DB connection properties that are set in the application before connecting
to an instance of SQL Server. The default for SET ANSI_NULLS is OFF.
When SET ANSI_DEFAULTS is ON, SET ANSI_NULLS is enabled.
The setting of SET ANSI_NULLS is set at execute or run time and not at parse time.
To view the current setting for this setting, run the following query:
DECLARE @ANSI_NULLS VARCHAR(3) = 'OFF';
IF ( (32 & @@OPTIONS) = 32 ) SET @ANSI_NULLS = 'ON';
SELECT @ANSI_NULLS AS ANSI_NULLS;
Permissions
Requires membership in the public role.
Examples
The following example uses the Equals ( = ) and Not Equal To ( <> ) comparison operators to make comparisons
with NULL and nonnull values in a table. The example also shows that IS NULL is not affected by the
SET ANSI_NULLS setting.
-- Create table t1 and insert values.
CREATE TABLE dbo.t1 (a INT NULL);
INSERT INTO dbo.t1 values (NULL),(0),(1);
GO

-- Test the default setting.
DECLARE @varname int;
SET @varname = NULL;

SELECT a
FROM t1
WHERE a = @varname;

SELECT a
FROM t1
WHERE a <> @varname;

SELECT a
FROM t1
WHERE a IS NULL;
GO

-- SET ANSI_NULLS to ON and test.
PRINT 'Testing ANSI_NULLS ON';
SET ANSI_NULLS ON;
GO

DECLARE @varname int;
SET @varname = NULL;

SELECT a
FROM t1
WHERE a = @varname;

SELECT a
FROM t1
WHERE a <> @varname;

SELECT a
FROM t1
WHERE a IS NULL;
GO

-- SET ANSI_NULLS to OFF and test.
PRINT 'Testing ANSI_NULLS OFF';
SET ANSI_NULLS OFF;
GO

DECLARE @varname int;
SET @varname = NULL;

SELECT a
FROM t1
WHERE a = @varname;

SELECT a
FROM t1
WHERE a <> @varname;

SELECT a
FROM t1
WHERE a IS NULL;
GO

-- Drop table t1.
DROP TABLE dbo.t1;
SET ANSI_PADDING (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls the way the column stores values shorter than the defined size of the column, and the way the column
stores values that have trailing blanks in char, varchar, binary, and varbinary data.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server
SET ANSI_PADDING { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET ANSI_PADDING ON
Remarks
Columns defined with char, varchar, binary, and varbinary data types have a defined size.
This setting affects only the definition of new columns. After the column is created, SQL Server stores the values
based on the setting when the column was created. Existing columns are not affected by a later change to this
setting.
NOTE
We recommend that ANSI_PADDING always be set to ON.
The following table shows the effects of the SET ANSI_PADDING setting when values are inserted into columns
with char, varchar, binary, and varbinary data types.
SETTING: ON
char(n) NOT NULL or binary(n) NOT NULL: Pads the original value (with trailing blanks for char
columns and with trailing zeros for binary columns) to the length of the column.
char(n) NULL or binary(n) NULL: Follows the same rules as for char(n) or binary(n) NOT NULL when
SET ANSI_PADDING is ON.
varchar(n) or varbinary(n): Trailing blanks in character values inserted into varchar columns are not
trimmed. Trailing zeros in binary values inserted into varbinary columns are not trimmed. Values are
not padded to the length of the column.
SETTING: OFF
char(n) NOT NULL or binary(n) NOT NULL: Pads the original value (with trailing blanks for char
columns and with trailing zeros for binary columns) to the length of the column.
char(n) NULL or binary(n) NULL: Follows the same rules as for varchar or varbinary when SET
ANSI_PADDING is OFF.
varchar(n) or varbinary(n): Trailing blanks in character values inserted into a varchar column are
trimmed. Trailing zeros in binary values inserted into a varbinary column are trimmed.
NOTE
When padded, char columns are padded with blanks, and binary columns are padded with zeros. When trimmed, char
columns have the trailing blanks trimmed, and binary columns have the trailing zeros trimmed.
SET ANSI_PADDING must be ON when you are creating or changing indexes on computed columns or indexed
views. For more information about required SET option settings with indexed views and indexes on computed
columns, see "Considerations When You Use the SET Statements" in SET Statements (Transact-SQL ).
The default for SET ANSI_PADDING is ON. The SQL Server Native Client ODBC driver and SQL Server Native
Client OLE DB Provider for SQL Server automatically set ANSI_PADDING to ON when connecting. This can be
configured in ODBC data sources, in ODBC connection attributes, or OLE DB connection properties set in the
application before connecting. The default for SET ANSI_PADDING is OFF for connections from DB-Library
applications.
The SET ANSI_PADDING setting does not affect the nchar, nvarchar, ntext, text, image, varbinary(max),
varchar(max), and nvarchar(max) data types. They always display the SET ANSI_PADDING ON behavior. This
means trailing spaces and zeros are not trimmed.
When SET ANSI_DEFAULTS is ON, SET ANSI_PADDING is enabled.
The setting of SET ANSI_PADDING is set at execute or run time and not at parse time.
To view the current setting for this setting, run the following query.
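Following the pattern used for the other ANSI options in this section, the setting can be read from the @@OPTIONS bitmask (16 is the documented bit for ANSI_PADDING):

```sql
DECLARE @ANSI_PADDING VARCHAR(3) = 'OFF';
IF ( (16 & @@OPTIONS) = 16 ) SET @ANSI_PADDING = 'ON';
SELECT @ANSI_PADDING AS ANSI_PADDING;
```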
Permissions
Requires membership in the public role.
Examples
The following example shows how the setting affects each of these data types.
PRINT 'Testing with ANSI_PADDING ON'
SET ANSI_PADDING ON;
GO
CREATE TABLE t1 (
charcol CHAR(16) NULL,
varcharcol VARCHAR(16) NULL,
varbinarycol VARBINARY(8)
);
GO
INSERT INTO t1 VALUES ('No blanks', 'No blanks', 0x00ee);
INSERT INTO t1 VALUES ('Trailing blank ', 'Trailing blank ', 0x00ee00);

SELECT 'CHAR' = '>' + charcol + '<', 'VARCHAR' = '>' + varcharcol + '<',
    varbinarycol
FROM t1;
GO

PRINT 'Testing with ANSI_PADDING OFF';
SET ANSI_PADDING OFF;
GO

CREATE TABLE t2 (
charcol CHAR(16) NULL,
varcharcol VARCHAR(16) NULL,
varbinarycol VARBINARY(8)
);
GO
INSERT INTO t2 VALUES ('No blanks', 'No blanks', 0x00ee);
INSERT INTO t2 VALUES ('Trailing blank ', 'Trailing blank ', 0x00ee00);

SELECT 'CHAR' = '>' + charcol + '<', 'VARCHAR' = '>' + varcharcol + '<',
    varbinarycol
FROM t2;
GO

DROP TABLE t1;
DROP TABLE t2;
See Also
SET Statements (Transact-SQL )
SESSIONPROPERTY (Transact-SQL )
CREATE TABLE (Transact-SQL )
INSERT (Transact-SQL )
SET ANSI_DEFAULTS (Transact-SQL )
SET ANSI_WARNINGS (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies ISO standard behavior for several error conditions.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
SET ANSI_WARNINGS { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET ANSI_WARNINGS ON
Remarks
SET ANSI_WARNINGS affects the following conditions:
When set to ON, if null values appear in aggregate functions, such as SUM, AVG, MAX, MIN, STDEV,
STDEVP, VAR, VARP, or COUNT, a warning message is generated. When set to OFF, no warning is issued.
When set to ON, the divide-by-zero and arithmetic overflow errors cause the statement to be rolled back
and an error message is generated. When set to OFF, the divide-by-zero and arithmetic overflow errors
cause null values to be returned.
The setting also affects an INSERT or UPDATE on a character, Unicode, or binary column in which the
length of a new value exceeds the maximum size of the column. When set to ON, the INSERT or UPDATE
is canceled, as specified by the ISO standard. Trailing blanks are ignored for character columns and
trailing nulls are ignored for binary columns. When set to OFF, data is truncated to the size of the column
and the statement succeeds.
NOTE
When truncation occurs in any conversion to or from binary or varbinary data, no warning or error is issued,
regardless of SET options.
NOTE
ANSI_WARNINGS is not honored when passing parameters in a stored procedure, user-defined function, or when
declaring and setting variables in a batch statement. For example, if a variable is defined as char(3), and then set to a
value larger than three characters, the data is truncated to the defined size and the INSERT or UPDATE statement
succeeds.
You can use the user options option of sp_configure to set the default setting for ANSI_WARNINGS for all
connections to the server. For more information, see sp_configure (Transact-SQL ).
SET ANSI_WARNINGS must be ON when you are creating or manipulating indexes on computed
columns or indexed views. If SET ANSI_WARNINGS is OFF, CREATE, UPDATE, INSERT, and DELETE
statements on tables with indexes on computed columns or indexed views will fail. For more information
about required SET option settings with indexed views and indexes on computed columns, see
"Considerations When You Use the SET Statements" in SET Statements (Transact-SQL ).
SQL Server includes the ANSI_WARNINGS database option. This is equivalent to SET
ANSI_WARNINGS. When SET ANSI_WARNINGS is ON, errors or warnings are raised in divide-by-zero,
string too large for database column, and other similar errors. When SET ANSI_WARNINGS is OFF, these
errors and warnings are not raised. The default value in the model database for SET ANSI_WARNINGS is
OFF. If SET ANSI_WARNINGS is not specified, the setting of the ANSI_WARNINGS database option applies,
and SQL Server uses the value of the is_ansi_warnings_on column in the sys.databases catalog view.
ANSI_WARNINGS should be set to ON for executing distributed queries.
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL
Server automatically set ANSI_WARNINGS to ON when connecting. This can be configured in ODBC data
sources, in ODBC connection attributes, or in OLE DB connection properties set in the application before
connecting. The default for SET ANSI_WARNINGS is OFF for connections from DB-Library applications.
When SET ANSI_DEFAULTS is ON, SET ANSI_WARNINGS is enabled.
The setting of SET ANSI_WARNINGS is set at execute or run time and not at parse time.
If either SET ARITHABORT or SET ARITHIGNORE is OFF and SET ANSI_WARNINGS is ON, SQL Server
still returns an error message when encountering divide-by-zero or overflow errors.
To view the current setting for this setting, run the following query.
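Following the pattern used for the other ANSI options in this section, the setting can be read from the @@OPTIONS bitmask (8 is the documented bit for ANSI_WARNINGS):

```sql
DECLARE @ANSI_WARNINGS VARCHAR(3) = 'OFF';
IF ( (8 & @@OPTIONS) = 8 ) SET @ANSI_WARNINGS = 'ON';
SELECT @ANSI_WARNINGS AS ANSI_WARNINGS;
```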
Permissions
Requires membership in the public role.
Examples
The following example demonstrates the three situations that are previously mentioned, with the SET
ANSI_WARNINGS to ON and OFF.
USE AdventureWorks2012;
GO
CREATE TABLE T1
(
a int,
b int NULL,
c varchar(20)
);
GO
INSERT INTO T1
VALUES (1, NULL, '')
,(1, 0, '')
,(2, 1, '')
,(2, 2, '');
DROP TABLE T1
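The demonstration queries that exercise the three situations are missing from the listing above; a sketch of what they might look like, assuming the T1 table and sample rows created by the CREATE TABLE and INSERT statements shown (run before the DROP TABLE):

```sql
SET ANSI_WARNINGS ON;
-- Situation 1: an aggregate over a column containing NULL raises a warning.
SELECT SUM(b) FROM T1;                  -- warning: null value eliminated by an aggregate
-- Situation 2: divide-by-zero raises an error.
SELECT a / b FROM T1 WHERE b = 0;       -- error when ON
-- Situation 3: a string too long for the column fails.
INSERT INTO T1 VALUES (3, 3, 'Text string longer than 20 characters');  -- error when ON

SET ANSI_WARNINGS OFF;
SELECT SUM(b) FROM T1;                  -- no warning
SELECT a / b FROM T1 WHERE b = 0;       -- returns NULL, no error
INSERT INTO T1 VALUES (3, 3, 'Text string longer than 20 characters');  -- string is truncated
```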
See Also
INSERT (Transact-SQL)
SELECT (Transact-SQL)
SET Statements (Transact-SQL)
SET ANSI_DEFAULTS (Transact-SQL)
SESSIONPROPERTY (Transact-SQL)
SET ARITHABORT (Transact-SQL)
5/3/2018 • 4 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Terminates a query when an overflow or divide-by-zero error occurs during query execution.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET ARITHABORT ON
Remarks
You should always set ARITHABORT to ON in your logon sessions. Setting ARITHABORT to OFF can negatively
impact query optimization leading to performance issues.
WARNING
The default ARITHABORT setting for SQL Server Management Studio is ON. Client applications setting ARITHABORT to OFF
can receive different query plans making it difficult to troubleshoot poorly performing queries. That is, the same query can
execute fast in management studio but slow in the application. When troubleshooting queries with Management Studio
always match the client ARITHABORT setting.
If SET ARITHABORT is ON and SET ANSI_WARNINGS is ON, these error conditions cause the query to
terminate.
If SET ARITHABORT is ON and SET ANSI_WARNINGS is OFF, these error conditions cause the batch to
terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one
of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic
operation.
If SET ARITHABORT is OFF and SET ANSI_WARNINGS is OFF and one of these errors occurs, a warning
message is displayed, and NULL is assigned to the result of the arithmetic operation.
NOTE
If neither SET ARITHABORT nor SET ARITHIGNORE is set, SQL Server returns NULL and returns a warning message after the
query is executed.
Setting ANSI_WARNINGS to ON implicitly sets ARITHABORT to ON when the database compatibility level is set
to 90 or higher. If the database compatibility level is set to 80 or earlier, the ARITHABORT option must be explicitly
set to ON.
During expression evaluation when SET ARITHABORT is OFF, if an INSERT, DELETE or UPDATE statement
encounters an arithmetic error, overflow, divide-by-zero, or a domain error, SQL Server inserts or updates a NULL
value. If the target column is not nullable, the insert or update action fails and the user receives an error.
If either SET ARITHABORT or SET ARITHIGNORE is OFF and SET ANSI_WARNINGS is ON, SQL Server still
returns an error message when encountering divide-by-zero or overflow errors.
If SET ARITHABORT is set to OFF and an abort error occurs during the evaluation of the Boolean condition of an
IF statement, the FALSE branch is executed.
SET ARITHABORT must be ON when you are creating or changing indexes on computed columns or indexed
views. If SET ARITHABORT is OFF, CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on
computed columns or indexed views will fail.
The setting of SET ARITHABORT is set at execute or run time and not at parse time.
To view the current setting for this setting, run the following query:
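The query itself is not reproduced above; a minimal version using the SESSIONPROPERTY function would be:

```sql
SELECT SESSIONPROPERTY('ARITHABORT') AS ARITHABORT;  -- 1 when ON, 0 when OFF
```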
Permissions
Requires membership in the public role.
Examples
The following example demonstrates the divide-by-zero and overflow errors that have both SET ARITHABORT
settings.
-- SET ARITHABORT
-------------------------------------------------------------------------------
-- Create tables t1 and t2 and insert data values.
CREATE TABLE t1 (
a TINYINT,
b TINYINT
);
CREATE TABLE t2 (
a TINYINT
);
GO
INSERT INTO t1
VALUES (1, 0);
INSERT INTO t1
VALUES (255, 1);
GO
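The statements that actually exercise the two ARITHABORT settings are missing from the listing above; a sketch, assuming the t1 and t2 tables and sample rows created earlier in the example:

```sql
-- With ARITHABORT ON, divide-by-zero and overflow terminate the query.
SET ARITHABORT ON;
SELECT a / b AS ab FROM t1;            -- fails: divide by zero (row 1, 0)
INSERT INTO t2 SELECT a + b FROM t1;   -- fails: arithmetic overflow (255 + 1 exceeds TINYINT)

-- With ARITHABORT OFF (and ANSI_WARNINGS OFF), NULL is produced instead.
SET ARITHABORT OFF;
SET ANSI_WARNINGS OFF;
SELECT a / b AS ab FROM t1;            -- returns NULL for the offending row, with a warning
INSERT INTO t2 SELECT a + b FROM t1;   -- inserts NULL where the overflow occurred
```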
See Also
SET Statements (Transact-SQL)
SET ARITHIGNORE (Transact-SQL)
SESSIONPROPERTY (Transact-SQL)
SET ARITHIGNORE (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls whether error messages are returned from overflow or divide-by-zero errors during a query.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
Remarks
The SET ARITHIGNORE setting only controls whether an error message is returned. SQL Server returns a NULL
in a calculation involving an overflow or divide-by-zero error, regardless of this setting. The SET ARITHABORT
setting can be used to determine whether the query is terminated. This setting does not affect errors occurring
during INSERT, UPDATE, and DELETE statements.
If either SET ARITHABORT or SET ARITHIGNORE is OFF and SET ANSI_WARNINGS is ON, SQL Server still
returns an error message when encountering divide-by-zero or overflow errors.
The setting of SET ARITHIGNORE is set at execute or run time and not at parse time.
To view the current setting for this setting, run the following query.
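The query itself is not reproduced above. ARITHIGNORE is not exposed through SESSIONPROPERTY; one approach reads the @@OPTIONS bitmask, in which bit 128 corresponds to ARITHIGNORE:

```sql
DECLARE @ARITHIGNORE VARCHAR(3) = 'OFF';
IF ( (128 & @@OPTIONS) = 128 ) SET @ARITHIGNORE = 'ON';
SELECT @ARITHIGNORE AS ARITHIGNORE;
```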
Permissions
Requires membership in the public role.
Examples
The following example demonstrates using both SET ARITHIGNORE settings with both types of query errors.
SET ARITHABORT OFF;
SET ANSI_WARNINGS OFF
GO
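The queries demonstrating the two ARITHIGNORE settings are missing from the listing above; a sketch (ARITHABORT and ANSI_WARNINGS are OFF, as set in the preparation statements, so NULL is returned in every case and only the warning message changes):

```sql
SET ARITHIGNORE OFF;
SELECT 1 / 0 AS DivideByZero;             -- returns NULL plus a warning message
SELECT CAST(256 AS TINYINT) AS Overflow;  -- returns NULL plus a warning message

SET ARITHIGNORE ON;
SELECT 1 / 0 AS DivideByZero;             -- returns NULL, no message
SELECT CAST(256 AS TINYINT) AS Overflow;  -- returns NULL, no message
```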
See Also
SET Statements (Transact-SQL)
SET ARITHABORT (Transact-SQL)
SET CONCAT_NULL_YIELDS_NULL (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls whether concatenation results are treated as null or empty string values.
IMPORTANT
In a future version of SQL Server CONCAT_NULL_YIELDS_NULL will always be ON and any applications that explicitly set the
option to OFF will generate an error. Avoid using this feature in new development work, and plan to modify applications that
currently use this feature.
Syntax
-- Syntax for SQL Server
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET CONCAT_NULL_YIELDS_NULL ON
Remarks
When SET CONCAT_NULL_YIELDS_NULL is ON, concatenating a null value with a string yields a NULL result.
For example, SELECT 'abc' + NULL yields NULL . When SET CONCAT_NULL_YIELDS_NULL is OFF, concatenating
a null value with a string yields the string itself (the null value is treated as an empty string). For example,
SELECT 'abc' + NULL yields abc .
NOTE
SET CONCAT_NULL_YIELDS_NULL is the same setting as the CONCAT_NULL_YIELDS_NULL setting of ALTER DATABASE.
The setting of SET CONCAT_NULL_YIELDS_NULL is set at execute or run time and not at parse time.
SET CONCAT_NULL_YIELDS_NULL must be ON when you are creating or changing indexes on computed
columns or indexed views. If SET CONCAT_NULL_YIELDS_NULL is OFF, any CREATE, UPDATE, INSERT, and
DELETE statements on tables with indexes on computed columns or indexed views will fail. For more information
about required SET option settings with indexed views and indexes on computed columns, see "Considerations
When You Use the SET Statements" in SET Statements (Transact-SQL).
When CONCAT_NULL_YIELDS_NULL is set to OFF, string concatenation across server boundaries cannot occur.
To view the current setting for this setting, run the following query.
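The query itself is not reproduced above; a minimal version using the SESSIONPROPERTY function would be:

```sql
SELECT SESSIONPROPERTY('CONCAT_NULL_YIELDS_NULL') AS CONCAT_NULL_YIELDS_NULL;  -- 1 when ON, 0 when OFF
```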
Examples
The following example shows using both SET CONCAT_NULL_YIELDS_NULL settings.
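The example code itself is missing from the listing; a sketch based on the behavior described in the Remarks section:

```sql
SET CONCAT_NULL_YIELDS_NULL ON;
SELECT 'abc' + NULL AS Result;   -- returns NULL

SET CONCAT_NULL_YIELDS_NULL OFF;
SELECT 'abc' + NULL AS Result;   -- returns 'abc'; the null is treated as an empty string
```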
See Also
SET Statements (Transact-SQL)
SESSIONPROPERTY (Transact-SQL)
SET CONTEXT_INFO (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Associates up to 128 bytes of binary information with the current session or connection.
Transact-SQL Syntax Conventions
Syntax
SET CONTEXT_INFO { binary_str | @binary_var }
Arguments
binary_str
Is a binary constant, or a constant that is implicitly convertible to binary, to associate with the current session or
connection.
@binary_var
Is a varbinary or binary variable holding a context value to associate with the current session or connection.
Remarks
The preferred way to retrieve the context information for the current session is to use the CONTEXT_INFO
function. Session context information is also stored in the context_info columns in the following system views:
sys.dm_exec_requests
sys.dm_exec_sessions
sys.sysprocesses
SET CONTEXT_INFO cannot be specified in a user-defined function. You cannot supply a null value to SET
CONTEXT_INFO because the views holding the values do not allow for null values.
SET CONTEXT_INFO does not accept expressions other than constants or variable names. To set the
context information to the result of a function call, you must first include the result of the function call in a
binary or varbinary variable.
When you issue SET CONTEXT_INFO in a stored procedure or trigger, unlike in other SET statements, the
new value set for the context information persists after the stored procedure or trigger is completed.
Examples
A. Setting context information by using a constant
The following example demonstrates SET CONTEXT_INFO by setting the value and displaying the results. Note that
querying sys.dm_exec_sessions requires SELECT and VIEW SERVER STATE permissions, whereas using the
CONTEXT_INFO function does not.
SET CONTEXT_INFO 0x01010101;
GO
SELECT context_info
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
GO
See Also
SET Statements (Transact-SQL)
sys.dm_exec_requests (Transact-SQL)
sys.dm_exec_sessions (Transact-SQL)
CONTEXT_INFO (Transact-SQL)
SET CURSOR_CLOSE_ON_COMMIT (Transact-SQL)
5/3/2018 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls the behavior of the Transact-SQL COMMIT TRANSACTION statement. The default value for this setting
is OFF. This means that the server will not close cursors when you commit a transaction.
Transact-SQL Syntax Conventions
Syntax
SET CURSOR_CLOSE_ON_COMMIT { ON | OFF }
Remarks
When SET CURSOR_CLOSE_ON_COMMIT is ON, this setting closes any open cursors on commit or rollback in
compliance with ISO. When SET CURSOR_CLOSE_ON_COMMIT is OFF, the cursor is not closed when a
transaction is committed.
NOTE
Setting CURSOR_CLOSE_ON_COMMIT to ON will not close open cursors on rollback when the rollback is applied to a
savepoint_name from a SAVE TRANSACTION statement.
When SET CURSOR_CLOSE_ON_COMMIT is OFF, a ROLLBACK statement closes only open asynchronous
cursors that are not fully populated. STATIC or INSENSITIVE cursors that were opened after modifications were
made will no longer reflect the state of the data if the modifications are rolled back.
SET CURSOR_CLOSE_ON_COMMIT controls the same behavior as the CURSOR_CLOSE_ON_COMMIT
database option. If CURSOR_CLOSE_ON_COMMIT is set to ON or OFF, that setting is used on the connection. If
SET CURSOR_CLOSE_ON_COMMIT has not been specified, the value in the is_cursor_close_on_commit_on
column in the sys.databases catalog view applies.
The SQL Server Native Client OLE DB Provider for SQL Server and the SQL Server Native Client ODBC driver
both set CURSOR_CLOSE_ON_COMMIT to OFF when they connect. DB -Library does not automatically set the
CURSOR_CLOSE_ON_COMMIT value.
When SET ANSI_DEFAULTS is ON, SET CURSOR_CLOSE_ON_COMMIT is enabled.
The setting of SET CURSOR_CLOSE_ON_COMMIT is set at execute or run time and not at parse time.
To view the current setting for this setting, run the following query.
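The query itself is not reproduced above; a minimal version using the SESSIONPROPERTY function would be:

```sql
SELECT SESSIONPROPERTY('CURSOR_CLOSE_ON_COMMIT') AS CURSOR_CLOSE_ON_COMMIT;  -- 1 when ON, 0 when OFF
```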
Examples
The following example defines a cursor in a transaction and attempts to use it after the transaction is committed.
-- SET CURSOR_CLOSE_ON_COMMIT
-------------------------------------------------------------------------------
SET NOCOUNT ON;
INSERT INTO t1
VALUES (1), (2);
GO
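The cursor portion of the example is missing from the listing above; a sketch of what it might look like, assuming a single-column table t1 like the one populated above (the column name c1 is hypothetical):

```sql
SET CURSOR_CLOSE_ON_COMMIT ON;
GO
BEGIN TRANSACTION;
DECLARE testcursor CURSOR FOR
    SELECT c1 FROM t1;           -- c1 is a hypothetical column name
OPEN testcursor;
COMMIT TRANSACTION;              -- closes the cursor because the setting is ON
FETCH NEXT FROM testcursor;      -- fails: the cursor is no longer open
DEALLOCATE testcursor;
GO
SET CURSOR_CLOSE_ON_COMMIT OFF;
GO
```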
See Also
ALTER DATABASE (Transact-SQL)
BEGIN TRANSACTION (Transact-SQL)
CLOSE (Transact-SQL)
COMMIT TRANSACTION (Transact-SQL)
ROLLBACK TRANSACTION (Transact-SQL)
SET Statements (Transact-SQL)
SET ANSI_DEFAULTS (Transact-SQL)
SET DATEFIRST (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets the first day of the week to a number from 1 through 7.
For an overview of all Transact-SQL date and time data types and functions, see Date and Time Data Types and
Functions (Transact-SQL).
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET DATEFIRST 7 ;
Arguments
number | @number_var
Is an integer that indicates the first day of the week. It can be one of the following values.
1 Monday
2 Tuesday
3 Wednesday
4 Thursday
5 Friday
6 Saturday
7 Sunday (the U.S. English default)
Remarks
To see the current setting of SET DATEFIRST, use the @@DATEFIRST function.
The setting of SET DATEFIRST is set at execute or run time and not at parse time.
Specifying SET DATEFIRST has no effect on DATEDIFF. DATEDIFF always uses Sunday as the first day of the
week to ensure the function is deterministic.
Permissions
Requires membership in the public role.
Examples
The following example displays the day of the week for a date value and shows the effects of changing the
DATEFIRST setting.
SET DATEFIRST 3;
-- Because Wednesday is now considered the first day of the week,
-- DATEPART now shows that 1999-1-1 (a Friday) is the third day of the
-- week. The following DATEPART function should return a value of 3.
SELECT CAST('1999-1-1' AS datetime2) AS SelectDate
,DATEPART(dw, '1999-1-1') AS DayOfWeek;
GO
See Also
SET Statements (Transact-SQL)
SET DATEFORMAT (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets the order of the month, day, and year date parts for interpreting date, smalldatetime, datetime, datetime2
and datetimeoffset character strings.
For an overview of all Transact-SQL date and time data types and functions, see Date and Time Data Types and
Functions (Transact-SQL).
Transact-SQL Syntax Conventions
Syntax
SET DATEFORMAT { format | @format_var }
Arguments
format | @format_var
Is the order of the date parts. Valid parameters are mdy, dmy, ymd, ydm, myd, and dym. Can be either Unicode
or double-byte character sets (DBCS) converted to Unicode. The U.S. English default is mdy. For the default
DATEFORMAT of all supported languages, see sp_helplanguage (Transact-SQL).
Remarks
The DATEFORMAT ydm is not supported for date, datetime2 and datetimeoffset data types.
The effect of the DATEFORMAT setting on the interpretation of character strings might be different for datetime
and smalldatetime values than for date, datetime2 and datetimeoffset values, depending on the string format.
This setting affects the interpretation of character strings as they are converted to date values for storage in the
database. It does not affect the display of date data type values that are stored in the database or the storage
format.
Some character string formats, for example ISO 8601, are interpreted independently of the DATEFORMAT
setting.
The setting of SET DATEFORMAT is set at execute or run time and not at parse time.
SET DATEFORMAT overrides the implicit date format setting of SET LANGUAGE.
Permissions
Requires membership in the public role.
Examples
The following example uses different date strings as inputs in sessions with the same DATEFORMAT setting.
-- Set date format to day/month/year.
SET DATEFORMAT dmy;
GO
DECLARE @datevar datetime2 = '31/12/2008 09:01:01.1234567';
SELECT @datevar;
GO
-- Result: 2008-12-31 09:01:01.123
SET DATEFORMAT dmy;
GO
DECLARE @datevar datetime2 = '12/31/2008 09:01:01.1234567';
SELECT @datevar;
GO
-- Result: Msg 241: Conversion failed when converting date and/or time -- from character string.
GO
See Also
SET Statements (Transact-SQL)
SET DEADLOCK_PRIORITY (Transact-SQL)
5/3/2018 • 3 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the relative importance of the current session continuing to process if it is deadlocked with another
session.
Transact-SQL Syntax Conventions
Syntax
SET DEADLOCK_PRIORITY { LOW | NORMAL | HIGH | <numeric-priority> | @deadlock_var | @deadlock_intvar }
Arguments
LOW
Specifies that the current session will be the deadlock victim if it is involved in a deadlock and other sessions
involved in the deadlock chain have deadlock priority set to either NORMAL or HIGH or to an integer value
greater than -5. The current session will not be the deadlock victim if the other sessions have deadlock priority set
to an integer value less than -5. It also specifies that the current session is eligible to be the deadlock victim if
another session has set deadlock priority to LOW or to an integer value equal to -5.
NORMAL
Specifies that the current session will be the deadlock victim if other sessions involved in the deadlock chain have
deadlock priority set to HIGH or to an integer value greater than 0, but will not be the deadlock victim if the other
sessions have deadlock priority set to LOW or to an integer value less than 0. It also specifies that the current
session is eligible to be the deadlock victim if another session has set deadlock priority to NORMAL or to an
integer value equal to 0. NORMAL is the default priority.
HIGH
Specifies that the current session will be the deadlock victim if other sessions involved in the deadlock chain have
deadlock priority set to an integer value greater than 5, or is eligible to be the deadlock victim if another session
has also set deadlock priority to HIGH or to an integer value equal to 5.
<numeric-priority>
Is an integer value range (-10 to 10) to provide 21 levels of deadlock priority. It specifies that the current session
will be the deadlock victim if other sessions in the deadlock chain are running at a higher deadlock priority value,
but will not be the deadlock victim if the other sessions are running at a deadlock priority value lower than the
value of the current session. It also specifies that the current session is eligible to be the deadlock victim if another
session is running with a deadlock priority value that is the same as the current session. LOW maps to -5,
NORMAL to 0, and HIGH to 5.
@deadlock_var
Is a character variable specifying the deadlock priority. The variable must be set to a value of 'LOW', 'NORMAL',
or 'HIGH'. The variable must be large enough to hold the entire string.
@deadlock_intvar
Is an integer variable specifying the deadlock priority. The variable must be set to an integer value in the range
-10 to 10.
Remarks
Deadlocks arise when two sessions are both waiting for access to resources locked by the other. When an instance
of SQL Server detects that two sessions are deadlocked, it resolves the deadlock by choosing one of the sessions
as a deadlock victim. The current transaction of the victim is rolled back and deadlock error message 1205 is
returned to the client. This releases all of the locks held by that session, allowing the other session to proceed.
Which session is chosen as the deadlock victim depends on each session's deadlock priority:
If both sessions have the same deadlock priority, the instance of SQL Server chooses the session that is less
expensive to roll back as the deadlock victim. For example, if both sessions have set their deadlock priority
to HIGH, the instance will choose as a victim the session it estimates is less costly to roll back. The cost is
determined by comparing the number of log bytes written to that point in each transaction. (You can see this
value as "Log Used" in a deadlock graph).
If the sessions have different deadlock priorities, the session with the lowest deadlock priority is chosen as
the deadlock victim.
SET DEADLOCK_PRIORITY is set at execute or run time and not at parse time.
Permissions
Requires membership in the public role.
Examples
The following example uses a variable to set the deadlock priority to LOW .
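The example code itself is not shown above; a minimal version using a character variable would be:

```sql
DECLARE @deadlock_var NCHAR(3);
SET @deadlock_var = N'LOW';      -- 'LOW', 'NORMAL', or 'HIGH'
SET DEADLOCK_PRIORITY @deadlock_var;
GO
```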
See Also
@@LOCK_TIMEOUT (Transact-SQL)
SET Statements (Transact-SQL)
SET LOCK_TIMEOUT (Transact-SQL)
SET FIPS_FLAGGER (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies checking for compliance with the FIPS 127-2 standard. This is based on the ISO standard. For
information about SQL Server FIPS compliance, see How to use SQL Server 2016 in FIPS 140-2-compliant
mode.
Transact-SQL Syntax Conventions
Syntax
SET FIPS_FLAGGER ( 'level' | OFF )
Arguments
' level '
Is the level of compliance against the FIPS 127-2 standard for which all database operations are checked. If a
database operation conflicts with the level of ISO standards chosen, Microsoft SQL Server generates a warning.
level must be one of the following values.
VALUE DESCRIPTION
ENTRY Standards checking for ISO entry-level compliance.
FULL Standards checking for ISO full compliance.
INTERMEDIATE Standards checking for ISO intermediate-level compliance.
OFF No standards checking.
Remarks
The setting of SET FIPS_FLAGGER is set at parse time and not at execute or run time. Setting at parse time means
that if the SET statement is present in the batch or stored procedure, it takes effect, regardless of whether code
execution actually reaches that point; and the SET statement takes effect before any statements are executed. For
example, even if the SET statement is in an IF...ELSE statement block that is never reached during execution, the
SET statement still takes effect because the IF...ELSE statement block is parsed.
If SET FIPS_FLAGGER is set in a stored procedure, the value of SET FIPS_FLAGGER is restored after control is returned
from the stored procedure. Therefore, a SET FIPS_FLAGGER statement specified in dynamic SQL does not have any
effect on any statements following the dynamic SQL statement.
Permissions
Requires membership in the public role.
See Also
SET Statements (Transact-SQL)
SET FMTONLY (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Returns only metadata to the client. Can be used to test the format of the response without actually running the
query.
NOTE
Do not use this feature. This feature has been replaced by sp_describe_first_result_set (Transact-SQL),
sp_describe_undeclared_parameters (Transact-SQL), sys.dm_exec_describe_first_result_set (Transact-SQL), and
sys.dm_exec_describe_first_result_set_for_object (Transact-SQL).
Syntax
SET FMTONLY { ON | OFF }
Remarks
No rows are processed or sent to the client because of the request when SET FMTONLY is turned ON.
The setting of SET FMTONLY is set at execute or run time and not at parse time.
Permissions
Requires membership in the public role.
Examples
A: View the column header information for a query without actually running the query.
The following example changes the SET FMTONLY setting to ON and executes a SELECT statement. The setting
causes the statement to return the column information only; no rows of data are returned.
USE AdventureWorks2012;
GO
SET FMTONLY ON;
GO
SELECT *
FROM HumanResources.Employee;
GO
SET FMTONLY OFF;
GO
-- Uses AdventureWorks
BEGIN
SET FMTONLY OFF;
SET DATEFORMAT mdy;
SET FMTONLY ON;
SELECT * FROM dbo.DimCustomer;
SET FMTONLY OFF;
END
See Also
SET Statements (Transact-SQL)
SET FORCEPLAN (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
When FORCEPLAN is set to ON, the SQL Server query optimizer processes a join in the same order as the tables
appear in the FROM clause of a query. In addition, setting FORCEPLAN to ON forces the use of a nested loop join
unless other types of joins are required to construct a plan for the query, or they are requested with join hints or
query hints.
Transact-SQL Syntax Conventions
Syntax
SET FORCEPLAN { ON | OFF }
Remarks
SET FORCEPLAN essentially overrides the logic used by the query optimizer to process a Transact-SQL SELECT
statement. The data returned by the SELECT statement is the same regardless of this setting. The only difference is
the way in which SQL Server processes the tables to satisfy the query.
Query optimizer hints can also be used in queries to affect how SQL Server processes the SELECT statement.
SET FORCEPLAN is applied at execute or run time and not at parse time.
Permissions
SET FORCEPLAN permissions default to all users.
Examples
The following example performs a join of four tables. The SHOWPLAN_TEXT setting is enabled, so SQL Server returns
information showing how it processes the query differently after the SET FORCEPLAN setting is enabled.
USE AdventureWorks2012;
GO
-- Make sure FORCEPLAN is set to OFF.
SET SHOWPLAN_TEXT OFF;
GO
SET FORCEPLAN OFF;
GO
SET SHOWPLAN_TEXT ON;
GO
-- Example where the query plan is not forced.
SELECT p.LastName, p.FirstName, v.Name
FROM Person.Person AS p
INNER JOIN HumanResources.Employee AS e
ON e.BusinessEntityID = p.BusinessEntityID
INNER JOIN Purchasing.PurchaseOrderHeader AS poh
ON e.BusinessEntityID = poh.EmployeeID
INNER JOIN Purchasing.Vendor AS v
ON poh.VendorID = v.BusinessEntityID;
GO
-- SET FORCEPLAN to ON.
SET SHOWPLAN_TEXT OFF;
GO
SET FORCEPLAN ON;
GO
SET SHOWPLAN_TEXT ON;
GO
-- Reexecute inner join to see the effect of SET FORCEPLAN ON.
SELECT p.LastName, p.FirstName, v.Name
FROM Person.Person AS p
INNER JOIN HumanResources.Employee AS e
ON e.BusinessEntityID = p.BusinessEntityID
INNER JOIN Purchasing.PurchaseOrderHeader AS poh
ON e.BusinessEntityID = poh.EmployeeID
INNER JOIN Purchasing.Vendor AS v
ON poh.VendorID = v.BusinessEntityID;
GO
SET SHOWPLAN_TEXT OFF;
GO
SET FORCEPLAN OFF;
GO
See Also
SELECT (Transact-SQL)
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)
SET SHOWPLAN_TEXT (Transact-SQL)
SET IDENTITY_INSERT (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Allows explicit values to be inserted into the identity column of a table.
Transact-SQL Syntax Conventions
Syntax
SET IDENTITY_INSERT [ [ database_name . ] schema_name . ] table { ON | OFF }
Arguments
database_name
Is the name of the database in which the specified table resides.
schema_name
Is the name of the schema to which the table belongs.
table
Is the name of a table with an identity column.
Remarks
At any time, only one table in a session can have the IDENTITY_INSERT property set to ON. If a table already has
this property set to ON, and a SET IDENTITY_INSERT ON statement is issued for another table, SQL Server
returns an error message that states SET IDENTITY_INSERT is already ON and reports the table it is set ON for.
If the value inserted is larger than the current identity value for the table, SQL Server automatically uses the new
inserted value as the current identity value.
The setting of SET IDENTITY_INSERT is set at execute or run time and not at parse time.
Permissions
User must own the table or have ALTER permission on the table.
Examples
The following example creates a table with an identity column and shows how the SET IDENTITY_INSERT setting
can be used to fill a gap in the identity values caused by a DELETE statement.
USE AdventureWorks2012;
GO
-- Create tool table.
CREATE TABLE dbo.Tool(
ID INT IDENTITY NOT NULL PRIMARY KEY,
Name VARCHAR(40) NOT NULL
);
GO
-- Insert values into the Tool table.
INSERT INTO dbo.Tool(Name)
VALUES ('Screwdriver')
, ('Hammer')
, ('Saw')
, ('Shovel');
GO
SELECT *
FROM dbo.Tool;
GO
SELECT *
FROM dbo.Tool;
GO
-- Drop the Tool table.
DROP TABLE dbo.Tool;
GO
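The portion of the example that deletes a row and then uses SET IDENTITY_INSERT to refill the gap belongs between the two SELECT statements above, but is missing from the listing; a sketch of what it might look like:

```sql
-- Delete a row to create a gap in the identity values.
DELETE FROM dbo.Tool WHERE Name = 'Saw';
GO
-- An explicit ID cannot be inserted while IDENTITY_INSERT is OFF:
-- INSERT INTO dbo.Tool (ID, Name) VALUES (3, 'Garden shovel');  -- raises error 544

-- Enable explicit identity inserts, fill the gap, then turn the setting back OFF.
SET IDENTITY_INSERT dbo.Tool ON;
INSERT INTO dbo.Tool (ID, Name) VALUES (3, 'Garden shovel');
SET IDENTITY_INSERT dbo.Tool OFF;
GO
```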
See Also
CREATE TABLE (Transact-SQL)
IDENTITY (Property) (Transact-SQL)
SCOPE_IDENTITY (Transact-SQL)
INSERT (Transact-SQL)
SET Statements (Transact-SQL)
SET IMPLICIT_TRANSACTIONS (Transact-SQL)
5/3/2018 • 4 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Sets the BEGIN TRANSACTION mode to implicit, for the connection.
Transact-SQL Syntax Conventions
Syntax
SET IMPLICIT_TRANSACTIONS { ON | OFF }
Remarks
When ON, the system is in implicit transaction mode. This means that if @@TRANCOUNT = 0, any of the
following Transact-SQL statements begins a new transaction. It is equivalent to an unseen BEGIN TRANSACTION
being executed first:
ALTER TABLE, BEGIN TRANSACTION, CREATE, DELETE, DROP, FETCH, GRANT, INSERT, OPEN,
REVOKE, SELECT, TRUNCATE TABLE, UPDATE.
When OFF, each of the preceding T-SQL statements is bounded by an unseen BEGIN TRANSACTION and an
unseen COMMIT TRANSACTION statement. When OFF, we say the transaction mode is autocommit. If your T-
SQL code visibly issues a BEGIN TRANSACTION, we say the transaction mode is explicit.
There are several clarifying points to understand:
When the transaction mode is implicit, no unseen BEGIN TRANSACTION is issued if @@trancount > 0
already. However, any explicit BEGIN TRANSACTION statements still increment @@TRANCOUNT.
When your INSERT statements and anything else in your unit of work is finished, you must issue COMMIT
TRANSACTION statements until @@TRANCOUNT is decremented back down to 0. Or you can issue one
ROLLBACK TRANSACTION.
SELECT statements that do not select from a table do not start implicit transactions. For example
SELECT GETDATE(); or SELECT 1, 'ABC'; do not require transactions.
Implicit transactions may unexpectedly be ON due to ANSI defaults. For details, see SET ANSI_DEFAULTS
(Transact-SQL).
IMPLICIT_TRANSACTIONS ON is not popular. In most cases where IMPLICIT_TRANSACTIONS is ON, it
is because the choice of SET ANSI_DEFAULTS ON has been made.
The SQL Server Native Client OLE DB Provider for SQL Server, and the SQL Server Native Client ODBC
driver, automatically set IMPLICIT_TRANSACTIONS to OFF when connecting. SET
IMPLICIT_TRANSACTIONS defaults to OFF for connections with the SQLClient managed provider, and
for SOAP requests received through HTTP endpoints.
To view the current setting for IMPLICIT_TRANSACTIONS, run the following query.
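The query itself is not reproduced above. IMPLICIT_TRANSACTIONS corresponds to bit 2 of the @@OPTIONS bitmask; one way to check:

```sql
DECLARE @IMPLICIT_TRANSACTIONS VARCHAR(3) = 'OFF';
IF ( (2 & @@OPTIONS) = 2 ) SET @IMPLICIT_TRANSACTIONS = 'ON';
SELECT @IMPLICIT_TRANSACTIONS AS IMPLICIT_TRANSACTIONS;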
Examples
The following Transact-SQL script runs a few different test cases. The text output is also provided, which shows the
detailed behavior and results from each test case.
-- Transact-SQL.
go
-- Preparations.
SET NOCOUNT ON;
SET IMPLICIT_TRANSACTIONS OFF;
go
WHILE (@@TranCount > 0) COMMIT TRANSACTION;
go
IF (OBJECT_ID(N'dbo.t1',N'U') IS NOT NULL) DROP TABLE dbo.t1;
go
CREATE table dbo.t1 (a int);
go
-- Clean up.
SET IMPLICIT_TRANSACTIONS OFF;
go
WHILE (@@TranCount > 0) COMMIT TRANSACTION;
go
DROP TABLE dbo.t1;
go
SET LANGUAGE (Transact-SQL)
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the language environment for the session. The session language determines the datetime formats and
system messages.
Transact-SQL Syntax Conventions
Syntax
SET LANGUAGE { [ N ] 'language' | @language_var }
Arguments
[N ]'language' | @language_var
Is the name of the language as stored in sys.syslanguages. This argument can be either Unicode or DBCS
converted to Unicode. To specify a language in Unicode, use N'language'. If specified as a variable, the variable
must be sysname.
Remarks
The setting of SET LANGUAGE is set at execute or run time and not at parse time.
SET LANGUAGE implicitly sets the setting of SET DATEFORMAT.
Permissions
Requires membership in the public role.
Examples
The following example sets the default language to Italian, displays the month name, and then switches back to
us_english and displays the month name again.
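The example code itself is not shown above; a sketch (the sample date is an assumption for illustration):

```sql
SET LANGUAGE Italian;
DECLARE @Today DATETIME = '12/5/2007';
SELECT DATENAME(month, @Today) AS 'Month Name';

SET LANGUAGE us_english;
SELECT DATENAME(month, @Today) AS 'Month Name';
```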
See Also
Data Types (Transact-SQL)
syslanguages
sp_helplanguage (Transact-SQL)
SET Statements (Transact-SQL)
SET LOCK_TIMEOUT (Transact-SQL)
5/3/2018 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the number of milliseconds a statement waits for a lock to be released.
Transact-SQL Syntax Conventions
Syntax
SET LOCK_TIMEOUT timeout_period
Arguments
timeout_period
Is the number of milliseconds that will pass before Microsoft SQL Server returns a locking error. A value of -1
(default) indicates no time-out period (that is, wait forever).
When a wait for a lock exceeds the time-out value, an error is returned. A value of 0 means to not wait at all and
return a message as soon as a lock is encountered.
Remarks
At the beginning of a connection, this setting has a value of -1. After it is changed, the new setting stays in effect
for the remainder of the connection.
The setting of SET LOCK_TIMEOUT is set at execute or run time and not at parse time.
The READPAST locking hint provides an alternative to this SET option.
CREATE DATABASE, ALTER DATABASE, and DROP DATABASE statements do not honor the SET
LOCK_TIMEOUT setting.
Permissions
Requires membership in the public role.
Examples
A: Set the lock timeout to 1800 milliseconds
The following example sets the lock time-out period to 1800 milliseconds.
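The example statement itself is missing from this extract; a minimal sketch:

```sql
SET LOCK_TIMEOUT 1800;
-- Verify the new value for the current session.
SELECT @@LOCK_TIMEOUT AS [Lock Timeout];
```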
B: Set the lock timeout for Azure SQL Data Warehouse
The following example also sets the lock time-out period to 1800 milliseconds. In this release, SQL Data Warehouse
parses the statement successfully, but ignores the value 1800 and continues to use the default behavior.
See Also
@@LOCK_TIMEOUT (Transact-SQL )
SET Statements (Transact-SQL )
SET NOCOUNT (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Stops the message that shows the count of the number of rows affected by a Transact-SQL statement or stored
procedure from being returned as part of the result set.
Transact-SQL Syntax Conventions
Syntax
SET NOCOUNT { ON | OFF }
Remarks
When SET NOCOUNT is ON, the count is not returned. When SET NOCOUNT is OFF, the count is returned.
The @@ROWCOUNT function is updated even when SET NOCOUNT is ON.
SET NOCOUNT ON prevents the sending of DONE_IN_PROC messages to the client for each statement in a
stored procedure. For stored procedures that contain several statements that do not return much actual data, or
for procedures that contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant
performance boost, because network traffic is greatly reduced.
The setting specified by SET NOCOUNT is in effect at execute or run time and not at parse time.
To view the current value of this setting, run the following query.
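The query is missing from this extract; a sketch that reads the NOCOUNT bit (512) from the @@OPTIONS bitmask:

```sql
DECLARE @NOCOUNT VARCHAR(3) = 'OFF';
IF ( (512 & @@OPTIONS) = 512 ) SET @NOCOUNT = 'ON';
SELECT @NOCOUNT AS NOCOUNT;
```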
Permissions
Requires membership in the public role.
Examples
The following example prevents the message about the number of rows affected from being displayed.
USE AdventureWorks2012;
GO
SET NOCOUNT OFF;
GO
-- Display the count message.
SELECT TOP(5)LastName
FROM Person.Person
WHERE LastName LIKE 'A%';
GO
-- SET NOCOUNT to ON to no longer display the count message.
SET NOCOUNT ON;
GO
SELECT TOP(5) LastName
FROM Person.Person
WHERE LastName LIKE 'A%';
GO
-- Reset SET NOCOUNT to OFF
SET NOCOUNT OFF;
GO
See Also
@@ROWCOUNT (Transact-SQL )
SET Statements (Transact-SQL )
SET NOEXEC (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Compiles each query but does not execute it.
Transact-SQL Syntax Conventions
Syntax
SET NOEXEC { ON | OFF }
Remarks
When SET NOEXEC is ON, SQL Server compiles each batch of Transact-SQL statements but does not execute
them. When SET NOEXEC is OFF, all batches are executed after compilation.
The execution of statements in SQL Server has two phases: compilation and execution. This setting is useful for
having SQL Server validate the syntax and object names in Transact-SQL code without executing it. It is also useful
for debugging statements that would generally be part of a larger batch of statements.
The setting of SET NOEXEC is set at execute or run time and not at parse time.
Permissions
Requires membership in the public role.
Examples
The following example uses NOEXEC with a valid query, a query with an object name that is not valid, and a query
with incorrect syntax.
USE AdventureWorks2012;
GO
PRINT 'Valid query';
GO
-- SET NOEXEC to ON.
SET NOEXEC ON;
GO
-- Inner join.
SELECT e.BusinessEntityID, e.JobTitle, v.Name
FROM HumanResources.Employee AS e
INNER JOIN Purchasing.PurchaseOrderHeader AS poh
ON e.BusinessEntityID = poh.EmployeeID
INNER JOIN Purchasing.Vendor AS v
ON poh.VendorID = v.BusinessEntityID;
GO
-- SET NOEXEC to OFF.
SET NOEXEC OFF;
GO
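The portions of the example that cover an invalid object name and incorrect syntax are not present in this extract; a sketch (the table name NonexistentTable is hypothetical):

```sql
-- SET NOEXEC to ON.
SET NOEXEC ON;
GO
-- Query with an object name that is not valid; the error is reported at compile time.
SELECT * FROM NonexistentTable;
GO
-- Query with incorrect syntax; the error is reported at parse time.
SELECT *, FROM HumanResources.Employee;
GO
-- SET NOEXEC to OFF.
SET NOEXEC OFF;
GO
```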
See Also
SET Statements (Transact-SQL )
SET SHOWPLAN_ALL (Transact-SQL )
SET SHOWPLAN_TEXT (Transact-SQL )
SET NUMERIC_ROUNDABORT (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the level of error reporting generated when rounding in an expression causes a loss of precision.
Transact-SQL Syntax Conventions
Syntax
SET NUMERIC_ROUNDABORT { ON | OFF }
Remarks
When SET NUMERIC_ROUNDABORT is ON, an error is generated after a loss of precision occurs in an
expression. When OFF, losses of precision do not generate error messages and the result is rounded to the
precision of the column or variable storing the result.
Loss of precision occurs when an attempt is made to store a value with a fixed precision in a column or variable
with less precision.
If SET NUMERIC_ROUNDABORT is ON, SET ARITHABORT determines the severity of the generated error. This
table shows the effects of these two settings when a loss of precision occurs.
When a loss of precision occurs and SET NUMERIC_ROUNDABORT is ON: with SET ARITHABORT ON, an error
is generated and no result set is returned; with SET ARITHABORT OFF, a warning is returned and the expression
returns NULL. When SET NUMERIC_ROUNDABORT is OFF, there are no errors or warnings and the result is
rounded, regardless of the SET ARITHABORT setting.
The setting of SET NUMERIC_ROUNDABORT is set at execute or run time and not at parse time.
SET NUMERIC_ROUNDABORT must be OFF when you are creating or changing indexes on computed columns
or indexed views. If SET NUMERIC_ROUNDABORT is ON, CREATE, UPDATE, INSERT, and DELETE statements
on tables with indexes on computed columns or indexed views fail. For more information about required SET
option settings with indexed views and indexes on computed columns, see "Considerations When You Use the SET
Statements" in SET Statements (Transact-SQL ).
To view the current value of this setting, run the following query:
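The query is missing from this extract; a sketch that reads the NUMERIC_ROUNDABORT bit (8192) from the @@OPTIONS bitmask:

```sql
DECLARE @NUMERIC_ROUNDABORT VARCHAR(3) = 'OFF';
IF ( (8192 & @@OPTIONS) = 8192 ) SET @NUMERIC_ROUNDABORT = 'ON';
SELECT @NUMERIC_ROUNDABORT AS NUMERIC_ROUNDABORT;
```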
Permissions
Requires membership in the public role.
Examples
The following example shows two values with a precision of four decimal places that are added and stored in a
variable with a precision of two decimal places. The expressions demonstrate the effects of the different
SET NUMERIC_ROUNDABORT and SET ARITHABORT settings.
-- SET NOCOUNT to ON,
-- SET NUMERIC_ROUNDABORT to ON, and SET ARITHABORT to ON.
SET NOCOUNT ON;
PRINT 'SET NUMERIC_ROUNDABORT ON';
PRINT 'SET ARITHABORT ON';
SET NUMERIC_ROUNDABORT ON;
SET ARITHABORT ON;
GO
DECLARE @result DECIMAL(5, 2),
@value_1 DECIMAL(5, 4),
@value_2 DECIMAL(5, 4);
SET @value_1 = 1.1234;
SET @value_2 = 1.1234;
SELECT @result = @value_1 + @value_2;
SELECT @result;
GO
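Only the ON/ON case appears above; a sketch of the contrasting case, in which SET ARITHABORT OFF downgrades the error to a warning and the expression returns NULL:

```sql
-- SET NUMERIC_ROUNDABORT to ON and SET ARITHABORT to OFF.
SET NUMERIC_ROUNDABORT ON;
SET ARITHABORT OFF;
GO
DECLARE @result DECIMAL(5, 2),
    @value_1 DECIMAL(5, 4),
    @value_2 DECIMAL(5, 4);
SET @value_1 = 1.1234;
SET @value_2 = 1.1234;
SELECT @result = @value_1 + @value_2;
-- A warning is returned and @result is NULL.
SELECT @result;
GO
```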
See Also
Data Types (Transact-SQL )
SET Statements (Transact-SQL )
SET ARITHABORT (Transact-SQL )
SET OFFSETS (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Returns the offset (position relative to the start of a statement) of specified keywords in Transact-SQL statements
to DB-Library applications.
IMPORTANT
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature.
Syntax
SET OFFSETS keyword_list { ON | OFF }
Arguments
keyword_list
Is a comma-separated list of Transact-SQL constructs including SELECT, FROM, ORDER, TABLE, PROCEDURE,
STATEMENT, PARAM, and EXECUTE.
Remarks
SET OFFSETS is used only in DB-Library applications.
The setting of SET OFFSETS is set at parse time and not at execute time or run time. Setting at parse time means
that if the SET statement is present in the batch or stored procedure, the setting takes effect, regardless of whether
code execution actually reaches that point; and the SET statement takes effect before any statements are executed.
For example, even if the set statement is in an IF...ELSE statement block that is never reached during execution, the
SET statement still takes effect because the IF...ELSE statement block is parsed.
If SET OFFSETS is set in a stored procedure, the value of SET OFFSETS is restored after control is returned from
the stored procedure. Therefore, a SET OFFSETS statement specified in dynamic SQL does not have any effect on
any statements following the dynamic SQL statement.
SET PARSEONLY returns offsets if the OFFSETS option is ON and no errors occur.
Permissions
Requires membership in the public role.
See Also
SET Statements (Transact-SQL )
SET PARSEONLY (Transact-SQL )
SET PARSEONLY (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Examines the syntax of each Transact-SQL statement and returns any error messages without compiling or
executing the statement.
Transact-SQL Syntax Conventions
Syntax
SET PARSEONLY { ON | OFF }
Remarks
When SET PARSEONLY is ON, SQL Server only parses the statement. When SET PARSEONLY is OFF, SQL
Server compiles and executes the statement.
The setting of SET PARSEONLY is set at parse time and not at execute or run time.
Do not use PARSEONLY in a stored procedure or a trigger. SET PARSEONLY returns offsets if the OFFSETS
option is ON and no errors occur.
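A minimal sketch of checking syntax without compiling or executing:

```sql
SET PARSEONLY ON;
GO
-- Only parsed: no result set is returned even though the query is valid.
SELECT 1 AS Col;
GO
SET PARSEONLY OFF;
GO
```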
Permissions
Requires membership in the public role.
See Also
SET Statements (Transact-SQL )
SET OFFSETS (Transact-SQL )
SET QUERY_GOVERNOR_COST_LIMIT (Transact-
SQL)
5/4/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Overrides the currently configured query governor cost limit value for the current connection.
Transact-SQL Syntax Conventions
Syntax
SET QUERY_GOVERNOR_COST_LIMIT value
Arguments
value
Is a numeric or integer value specifying the longest time in which a query can run. Values are rounded down to the
nearest integer. Negative values are rounded up to 0. The query governor disallows execution of any query that
has an estimated cost exceeding that value. Specifying 0 (the default) for this option turns off the query governor,
and all queries are allowed to run indefinitely.
"Query cost" refers to the estimated elapsed time, in seconds, required to complete a query on a specific hardware
configuration.
Remarks
Using SET QUERY_GOVERNOR_COST_LIMIT applies to the current connection only and lasts the duration of the
current connection. Use the query governor cost limit option of sp_configure to change the server-wide query
governor cost limit value. For more information about configuring this option, see sp_configure and Server
Configuration Options (SQL Server).
The setting of SET QUERY_GOVERNOR_COST_LIMIT is set at execute or run time and not at parse time.
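A brief sketch of connection-level use (the limit of 10 seconds is illustrative):

```sql
-- Disallow queries with an estimated cost higher than 10 seconds
-- for the current connection.
SET QUERY_GOVERNOR_COST_LIMIT 10;

-- Turn the query governor back off for this connection.
SET QUERY_GOVERNOR_COST_LIMIT 0;
```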
Permissions
Requires membership in the public role.
See Also
SET Statements (Transact-SQL )
SET QUOTED_IDENTIFIER (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes SQL Server to follow the ISO rules regarding quotation mark delimiting identifiers and literal strings.
Identifiers delimited by double quotation marks can be either Transact-SQL reserved keywords or can contain
characters not generally allowed by the Transact-SQL syntax rules for identifiers.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
SET QUOTED_IDENTIFIER { ON | OFF }

-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET QUOTED_IDENTIFIER ON
Remarks
When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals
must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be
quoted and must follow all Transact-SQL rules for identifiers. For more information, see Database Identifiers.
Literals can be delimited by either single or double quotation marks.
When SET QUOTED_IDENTIFIER is ON (default), all strings delimited by double quotation marks are
interpreted as object identifiers. Therefore, quoted identifiers do not have to follow the Transact-SQL rules for
identifiers. They can be reserved keywords and can include characters not generally allowed in Transact-SQL
identifiers. Double quotation marks cannot be used to delimit literal string expressions; single quotation marks
must be used to enclose literal strings. If a single quotation mark (') is part of the literal string, it can be
represented by two single quotation marks (''). SET QUOTED_IDENTIFIER must be ON when reserved
keywords are used for object names in the database.
When SET QUOTED_IDENTIFIER is OFF, literal strings in expressions can be delimited by single or double
quotation marks. If a literal string is delimited by double quotation marks, the string can contain embedded
single quotation marks, such as apostrophes.
SET QUOTED_IDENTIFIER must be ON when you are creating or changing indexes on computed columns or
indexed views. If SET QUOTED_IDENTIFIER is OFF, CREATE, UPDATE, INSERT, and DELETE statements on
tables with indexes on computed columns or indexed views will fail. For more information about required SET
option settings with indexed views and indexes on computed columns, see "Considerations When You Use the
SET Statements" in SET Statements (Transact-SQL ).
SET QUOTED_IDENTIFIER must be ON when you are creating a filtered index.
SET QUOTED_IDENTIFIER must be ON when you invoke XML data type methods.
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server
automatically set QUOTED_IDENTIFIER to ON when connecting. This can be configured in ODBC data
sources, in ODBC connection attributes, or OLE DB connection properties. The default for SET
QUOTED_IDENTIFIER is OFF for connections from DB-Library applications.
When a table is created, the QUOTED_IDENTIFIER option is always stored as ON in the table's metadata, even
if the option is set to OFF when the table is created.
When a stored procedure is created, the SET QUOTED_IDENTIFIER and SET ANSI_NULLS settings are
captured and used for subsequent invocations of that stored procedure.
When executed inside a stored procedure, the setting of SET QUOTED_IDENTIFIER is not changed.
When SET ANSI_DEFAULTS is ON, SET QUOTED_IDENTIFIER is enabled.
SET QUOTED_IDENTIFIER also corresponds to the QUOTED_IDENTIFIER setting of ALTER DATABASE. For
more information about database settings, see ALTER DATABASE (Transact-SQL ).
SET QUOTED_IDENTIFIER takes effect at parse time and only affects parsing, not query execution.
For a top-level ad hoc batch, parsing begins using the session's current setting for QUOTED_IDENTIFIER. As
the batch is parsed, any occurrence of SET QUOTED_IDENTIFIER changes the parsing behavior from that
point on and saves that setting for the session. So after the batch is parsed and executed, the session's
QUOTED_IDENTIFIER setting is set according to the last occurrence of SET QUOTED_IDENTIFIER in the
batch.
Static SQL in a stored procedure is parsed using the QUOTED_IDENTIFIER setting in effect for the batch that
created or altered the stored procedure. SET QUOTED_IDENTIFIER has no effect when it appears in the body
of a stored procedure as static SQL.
For a nested batch executed by using sp_executesql or EXEC(), parsing begins using the QUOTED_IDENTIFIER
setting of the session. If the nested batch is inside a stored procedure, parsing starts using the
QUOTED_IDENTIFIER setting of the stored procedure. As the nested batch is parsed, any occurrence of SET
QUOTED_IDENTIFIER changes the parsing behavior from that point on, but the session's
QUOTED_IDENTIFIER setting is not updated.
Using brackets, [ and ], to delimit identifiers is not affected by the QUOTED_IDENTIFIER setting.
To view the current value of this setting, run the following query.
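The query is missing from this extract; a sketch that reads the QUOTED_IDENTIFIER bit (256) from the @@OPTIONS bitmask:

```sql
DECLARE @QUOTED_IDENTIFIER VARCHAR(3) = 'OFF';
IF ( (256 & @@OPTIONS) = 256 ) SET @QUOTED_IDENTIFIER = 'ON';
SELECT @QUOTED_IDENTIFIER AS QUOTED_IDENTIFIER;
```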
Permissions
Requires membership in the public role.
Examples
A. Using the quoted identifier setting and reserved word object names
The following example shows that the SET QUOTED_IDENTIFIER setting must be ON, and the keywords in table
names must be in double quotation marks to create and use objects that have reserved keyword names.
SET QUOTED_IDENTIFIER OFF
GO
-- An attempt to create a table with a reserved keyword as a name
-- should fail.
CREATE TABLE "select" ("identity" INT IDENTITY NOT NULL, "order" INT NOT NULL);
GO
SET QUOTED_IDENTIFIER ON;
GO
-- Will succeed.
CREATE TABLE "select" ("identity" INT IDENTITY NOT NULL, "order" INT NOT NULL);
GO
SELECT "identity","order"
FROM "select"
ORDER BY "order";
GO
B. Using the quoted identifier setting with single and double quotation marks
The following example shows the way single and double quotation marks are used in string expressions with
SET QUOTED_IDENTIFIER set to ON and OFF.
SET QUOTED_IDENTIFIER OFF;
GO
USE AdventureWorks2012;
IF EXISTS(SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'Test')
DROP TABLE dbo.Test;
GO
USE AdventureWorks2012;
CREATE TABLE dbo.Test (ID INT, String VARCHAR(30));
GO
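The INSERT statements for this example are missing from this extract; a sketch consistent with the result set shown below (the values and escaping are reconstructed from the output, so treat them as illustrative):

```sql
SET QUOTED_IDENTIFIER OFF;
GO
-- With QUOTED_IDENTIFIER OFF, literal strings can use single or double quotation marks.
INSERT INTO dbo.Test VALUES (1, "'Text in single quotes'");
INSERT INTO dbo.Test VALUES (2, '''Text in single quotes''');
INSERT INTO dbo.Test VALUES (3, 'Text with 2 '''' single quotes');
INSERT INTO dbo.Test VALUES (4, '"Text in double quotes"');
INSERT INTO dbo.Test VALUES (5, """Text in double quotes""");
INSERT INTO dbo.Test VALUES (6, "Text with 2 """" double quotes");
GO
SET QUOTED_IDENTIFIER ON;
GO
-- With QUOTED_IDENTIFIER ON, literal strings must use single quotation marks.
INSERT INTO dbo.Test VALUES (7, 'Text with a single '' quote');
GO
SELECT ID, String FROM dbo.Test;
GO
```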
ID String
----------- ------------------------------
1 'Text in single quotes'
2 'Text in single quotes'
3 Text with 2 '' single quotes
4 "Text in double quotes"
5 "Text in double quotes"
6 Text with 2 "" double quotes
7 Text with a single ' quote
See Also
CREATE DATABASE (SQL Server Transact-SQL )
CREATE DEFAULT (Transact-SQL )
CREATE PROCEDURE (Transact-SQL )
CREATE RULE (Transact-SQL )
CREATE TABLE (Transact-SQL )
CREATE TRIGGER (Transact-SQL )
CREATE VIEW (Transact-SQL )
Data Types (Transact-SQL )
EXECUTE (Transact-SQL )
SELECT (Transact-SQL )
SET Statements (Transact-SQL )
SET ANSI_DEFAULTS (Transact-SQL )
sp_rename (Transact-SQL )
SET REMOTE_PROC_TRANSACTIONS (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies that when a local transaction is active, executing a remote stored procedure starts a Transact-SQL
distributed transaction managed by Microsoft Distributed Transaction Coordinator (MS DTC ).
IMPORTANT
This feature will be removed in the next version of Microsoft SQL Server. Do not use this feature in new development work,
and modify applications that currently use this feature as soon as possible. This option is provided for backward compatibility
for applications that use remote stored procedures. Instead of issuing remote stored procedure calls, use distributed queries
that reference linked servers. These are defined by using sp_addlinkedserver.
Syntax
SET REMOTE_PROC_TRANSACTIONS { ON | OFF }
Arguments
ON | OFF
When ON, a Transact-SQL distributed transaction is started when a remote stored procedure is executed from a
local transaction. When OFF, calling remote stored procedures from a local transaction does not start a Transact-
SQL distributed transaction.
Remarks
When REMOTE_PROC_TRANSACTIONS is ON, calling a remote stored procedure starts a distributed
transaction and enlists the transaction with MS DTC. The instance of SQL Server making the remote stored
procedure call is the transaction originator and controls the completion of the transaction. When a subsequent
COMMIT TRANSACTION or ROLLBACK TRANSACTION statement is issued for the connection, the controlling
instance requests that MS DTC manage the completion of the distributed transaction across the computers
involved.
After a Transact-SQL distributed transaction has been started, remote stored procedure calls can be made to other
instances of SQL Server that have been defined as remote servers. The remote servers are all enlisted in the
Transact-SQL distributed transaction, and MS DTC ensures that the transaction is completed against each remote
server.
REMOTE_PROC_TRANSACTIONS is a connection-level setting that can be used to override the instance-level
sp_configure remote proc trans option.
When REMOTE_PROC_TRANSACTIONS is OFF, remote stored procedure calls are not made part of a local
transaction. The modifications made by the remote stored procedure are committed or rolled back at the time the
stored procedure completes. Subsequent COMMIT TRANSACTION or ROLLBACK TRANSACTION statements
issued by the connection that called the remote stored procedure have no effect on the processing done by the
procedure.
The REMOTE_PROC_TRANSACTIONS option is a compatibility option that affects only remote stored procedure
calls made to instances of SQL Server defined as remote servers using sp_addserver. The option does not apply
to distributed queries that execute a stored procedure on an instance defined as a linked server using
sp_addlinkedserver.
The setting of SET REMOTE_PROC_TRANSACTIONS is set at execute or run time and not at parse time.
Permissions
Requires membership in the public role.
See Also
BEGIN DISTRIBUTED TRANSACTION (Transact-SQL )
SET Statements (Transact-SQL )
SET ROWCOUNT (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes SQL Server to stop processing the query after the specified number of rows are returned.
Transact-SQL Syntax Conventions
Syntax
SET ROWCOUNT { number | @number_var }
Arguments
number | @number_var
Is the integer number of rows to process before stopping the specified query.
Remarks
IMPORTANT
Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using
SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications
that currently use it. For a similar behavior, use the TOP syntax. For more information, see TOP (Transact-SQL).
To set this option off so that all rows are returned, specify SET ROWCOUNT 0.
Setting the SET ROWCOUNT option causes most Transact-SQL statements to stop processing when they have
been affected by the specified number of rows. This includes triggers. The ROWCOUNT option does not affect
dynamic cursors, but it does limit the rowset of keyset and insensitive cursors. This option should be used with
caution.
SET ROWCOUNT overrides the SELECT statement TOP keyword if the rowcount is the smaller value.
The setting of SET ROWCOUNT is set at execute or run time and not at parse time.
Permissions
Requires membership in the public role.
Examples
SET ROWCOUNT stops processing after the specified number of rows. In the following example, note that over
500 rows meet the criteria of Quantity less than 300. However, after applying SET ROWCOUNT, you can see
that not all rows were returned.
USE AdventureWorks2012;
GO
SELECT count(*) AS Count
FROM Production.ProductInventory
WHERE Quantity < 300;
GO
Count
-----------
537
(1 row(s) affected)
Now, set ROWCOUNT to 4 and select all rows to demonstrate that only 4 rows are returned.
SET ROWCOUNT 4;
SELECT *
FROM Production.ProductInventory
WHERE Quantity < 300;
GO
(4 row(s) affected)
-- Uses AdventureWorks
SET ROWCOUNT 5;
SELECT * FROM [dbo].[DimAccount]
WHERE AccountType = 'Assets';
-- Uses AdventureWorks
SET ROWCOUNT 0;
SELECT * FROM [dbo].[DimAccount]
WHERE AccountType = 'Assets';
See Also
SET Statements (Transact-SQL )
SET SHOWPLAN_ALL (Transact-SQL)
5/3/2018 • 5 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes Microsoft SQL Server not to execute Transact-SQL statements. Instead, SQL Server returns detailed
information about how the statements are executed and provides estimates of the resource requirements for the
statements.
Transact-SQL Syntax Conventions
Syntax
SET SHOWPLAN_ALL { ON | OFF }
Remarks
The setting of SET SHOWPLAN_ALL is set at execute or run time and not at parse time.
When SET SHOWPLAN_ALL is ON, SQL Server returns execution information for each statement without
executing it, and Transact-SQL statements are not executed. After this option is set ON, information about all
subsequent Transact-SQL statements is returned until the option is set OFF. For example, if a CREATE TABLE
statement is executed while SET SHOWPLAN_ALL is ON, SQL Server returns an error message from a
subsequent SELECT statement involving that same table, informing users that the specified table does not exist.
Therefore, subsequent references to this table fail. When SET SHOWPLAN_ALL is OFF, SQL Server executes the
statements without generating a report.
SET SHOWPLAN_ALL is intended to be used by applications written to handle its output. Use SET
SHOWPLAN_TEXT to return readable output for Microsoft Win32 command prompt applications, such as the
osql utility.
SET SHOWPLAN_TEXT and SET SHOWPLAN_ALL cannot be specified inside a stored procedure; they must be
the only statements in a batch.
SET SHOWPLAN_ALL returns information as a set of rows that form a hierarchical tree representing the steps
taken by the SQL Server query processor as it executes each statement. Each statement reflected in the output
contains a single row with the text of the statement, followed by several rows with the details of the execution
steps. The table shows the columns that the output contains.
COLUMN NAME     DESCRIPTION
StmtText        For rows that are not of type PLAN_ROW, this column contains the text of the
                Transact-SQL statement. For rows of type PLAN_ROW, this column contains a
                description of the operation. This column contains the physical operator and may
                optionally also contain the logical operator. This column may also be followed by
                a description that is determined by the physical operator. For more information,
                see Showplan Logical and Physical Operators Reference.
EstimateIO      Estimated I/O cost* for this operator. For rows of type PLAN_ROWS only.
EstimateCPU     Estimated CPU cost* for this operator. For rows of type PLAN_ROWS only.
AvgRowSize      Estimated average row size (in bytes) of the row being passed through this
                operator.
Type            Node type. For the parent node of each query, this is the Transact-SQL statement
                type (for example, SELECT, INSERT, EXECUTE, and so on). For subnodes representing
                execution plans, the type is PLAN_ROW.
*Cost units are based on an internal measurement of time, not wall-clock time. They are used for determining the
relative cost of a plan in comparison to other plans.
Permissions
In order to use SET SHOWPLAN_ALL, you must have sufficient permissions to execute the statements on which
SET SHOWPLAN_ALL is executed, and you must have SHOWPLAN permission for all databases containing
referenced objects.
For SELECT, INSERT, UPDATE, DELETE, EXEC stored_procedure, and EXEC user_defined_function statements, to
produce a Showplan the user must:
Have the appropriate permissions to execute the Transact-SQL statements.
Have SHOWPLAN permission on all databases containing objects referenced by the Transact-SQL
statements, such as tables, views, and so on.
For all other statements, such as DDL, USE database_name, SET, DECLARE, dynamic SQL, and so on, only
the appropriate permissions to execute the Transact-SQL statements are needed.
Examples
The two statements that follow use the SET SHOWPLAN_ALL setting to show the way SQL Server analyzes
and optimizes the use of indexes in queries.
The first query uses the Equals comparison operator (=) in the WHERE clause on an indexed column. This results
in the Clustered Index Seek value in the LogicalOp column and the name of the index in the Argument column.
The second query uses the LIKE operator in the WHERE clause. This forces SQL Server to use a clustered index
scan and find the data that satisfies the WHERE clause condition. This results in the Clustered Index Scan value in
the LogicalOp column with the name of the index in the Argument column, and the Filter value in the
LogicalOp column with the WHERE clause condition in the Argument column.
The values in the EstimateRows and the TotalSubtreeCost columns are smaller for the first indexed query,
indicating that it is processed much faster and uses fewer resources than the nonindexed query.
USE AdventureWorks2012;
GO
SET SHOWPLAN_ALL ON;
GO
-- First query.
SELECT BusinessEntityID
FROM HumanResources.Employee
WHERE NationalIDNumber = '509647174';
GO
-- Second query.
SELECT BusinessEntityID, EmergencyContactID
FROM HumanResources.Employee
WHERE EmergencyContactID LIKE '1%';
GO
SET SHOWPLAN_ALL OFF;
GO
See Also
SET Statements (Transact-SQL )
SET SHOWPLAN_TEXT (Transact-SQL )
SET SHOWPLAN_XML (Transact-SQL )
SET SHOWPLAN_TEXT (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes Microsoft SQL Server not to execute Transact-SQL statements. Instead, SQL Server returns detailed
information about how the statements are executed.
Transact-SQL Syntax Conventions
Syntax
SET SHOWPLAN_TEXT { ON | OFF }
Remarks
The setting of SET SHOWPLAN_TEXT is set at execute or run time and not at parse time.
When SET SHOWPLAN_TEXT is ON, SQL Server returns execution information for each Transact-SQL
statement without executing it. After this option is set ON, execution plan information about all subsequent SQL
Server statements is returned until the option is set OFF. For example, if a CREATE TABLE statement is executed
while SET SHOWPLAN_TEXT is ON, SQL Server returns an error message from a subsequent SELECT
statement involving that same table informing the user that the specified table does not exist. Therefore,
subsequent references to this table fail. When SET SHOWPLAN_TEXT is OFF, SQL Server executes statements
without generating a report with execution plan information.
SET SHOWPLAN_TEXT is intended to return readable output for Microsoft Win32 command prompt
applications such as the osql utility. SET SHOWPLAN_ALL returns more detailed output intended to be used
with programs designed to handle its output.
SET SHOWPLAN_TEXT and SET SHOWPLAN_ALL cannot be specified in a stored procedure. They must be the
only statements in a batch.
SET SHOWPLAN_TEXT returns information as a set of rows that form a hierarchical tree representing the steps
taken by the SQL Server query processor as it executes each statement. Each statement reflected in the output
contains a single row with the text of the statement, followed by several rows with the details of the execution
steps. The table shows the column that the output contains.
StmtText        For rows that are not of type PLAN_ROW, this column contains the text of the
                Transact-SQL statement. For rows of type PLAN_ROW, this column contains a
                description of the operation. This column contains the physical operator and may
                optionally also contain the logical operator. This column may also be followed by
                a description that is determined by the physical operator. For more information
                about physical operators, see the Argument column in SET SHOWPLAN_ALL
                (Transact-SQL).
For more information about the physical and logical operators that can be seen in Showplan output, see
Showplan Logical and Physical Operators Reference
Permissions
In order to use SET SHOWPLAN_TEXT, you must have sufficient permissions to execute the statements on which
SET SHOWPLAN_TEXT is executed, and you must have SHOWPLAN permission for all databases containing
referenced objects.
For SELECT, INSERT, UPDATE, DELETE, EXEC stored_procedure, and EXEC user_defined_function statements, to
produce a Showplan the user must:
Have the appropriate permissions to execute the Transact-SQL statements.
Have SHOWPLAN permission on all databases containing objects referenced by the Transact-SQL
statements, such as tables, views, and so on.
For all other statements, such as DDL, USE database_name, SET, DECLARE, dynamic SQL, and so on, only
the appropriate permissions to execute the Transact-SQL statements are needed.
Examples
This example shows how indexes are used by SQL Server as it processes the statements.
This is the query using an index:
USE AdventureWorks2012;
GO
SET SHOWPLAN_TEXT ON;
GO
SELECT *
FROM Production.Product
WHERE ProductID = 905;
GO
SET SHOWPLAN_TEXT OFF;
GO
StmtText
---------------------------------------------------
SELECT *
FROM Production.Product
WHERE ProductID = 905;
StmtText
--------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
|--Clustered Index Seek(OBJECT:([AdventureWorks2012].[Production].[Product].[PK_Product_ProductID]), SEEK:
([AdventureWorks2012].[Production].[Product].[ProductID]=CONVERT_IMPLICIT(int,[@1],0)) ORDERED FORWARD)
This is the query that does not use an index:
USE AdventureWorks2012;
GO
SET SHOWPLAN_TEXT ON;
GO
SELECT *
FROM Production.ProductCostHistory
WHERE StandardCost < 500.00;
GO
SET SHOWPLAN_TEXT OFF;
GO
StmtText
------------------------------------------------------------------------
SELECT *
FROM Production.ProductCostHistory
WHERE StandardCost < 500.00;
StmtText
--------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------
|--Clustered Index Scan(OBJECT:([AdventureWorks2012].[Production].[ProductCostHistory].
[PK_ProductCostHistory_ProductCostID]), WHERE:([AdventureWorks2012].[Production].[ProductCostHistory].
[StandardCost]<[@1]))
See Also
Operators (Transact-SQL)
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)
SET SHOWPLAN_XML (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes SQL Server not to execute Transact-SQL statements. Instead, SQL Server returns detailed information
about how the statements are going to be executed in the form of a well-defined XML document.
Transact-SQL Syntax Conventions
Syntax
SET SHOWPLAN_XML { ON | OFF }
Remarks
The setting of SET SHOWPLAN_XML is set at execute or run time and not at parse time.
When SET SHOWPLAN_XML is ON, SQL Server returns execution plan information for each statement without
executing it; Transact-SQL statements are not executed. After this option is set ON, execution plan information
about all subsequent Transact-SQL statements is returned until the option is set OFF. For example, if a CREATE
TABLE statement is executed while SET SHOWPLAN_XML is ON, SQL Server returns an error message from a
subsequent SELECT statement involving that same table, because the specified table does not exist; subsequent
references to this table fail. When SET SHOWPLAN_XML is OFF, SQL Server executes the statements without
generating a report.
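A short sketch of this behavior (the table name here is illustrative, not from the original example):

SET SHOWPLAN_XML ON;
GO
-- Only an estimated plan is returned; the table is not actually created.
CREATE TABLE dbo.DemoTable (DemoID INT PRIMARY KEY);
GO
-- Returns an error message: dbo.DemoTable does not exist.
SELECT DemoID FROM dbo.DemoTable;
GO
SET SHOWPLAN_XML OFF;
GO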
SET SHOWPLAN_XML is intended to return output as nvarchar(max) for applications such as the sqlcmd
utility, where the XML output is subsequently used by other tools to display and process the query plan
information.
NOTE
The dynamic management view, sys.dm_exec_query_plan, returns the same information as SET SHOWPLAN_XML in the
xml data type. This information is returned from the query_plan column of sys.dm_exec_query_plan. For more
information, see sys.dm_exec_query_plan (Transact-SQL).
SET SHOWPLAN_XML cannot be specified inside a stored procedure. It must be the only statement in a batch.
SET SHOWPLAN_XML returns information as a set of XML documents. Each batch after the SET
SHOWPLAN_XML ON statement is reflected in the output by a single document. Each document contains the
text of the statements in the batch, followed by the details of the execution steps. The document shows the
estimated costs, numbers of rows, accessed indexes, and types of operators performed, join order, and more
information about the execution plans.
The document containing the XML schema for the XML output by SET SHOWPLAN_XML is copied during setup
to a local directory on the computer on which Microsoft SQL Server is installed. It can be found on the drive
containing the SQL Server installation files, at:
\Microsoft SQL Server\130\Tools\Binn\schemas\sqlserver\2004\07\showplan\showplanxml.xsd
The Showplan Schema can also be found at this Web site.
NOTE
If Include Actual Execution Plan is selected in SQL Server Management Studio, this SET option does not produce XML
Showplan output. Clear the Include Actual Execution Plan button before using this SET option.
Permissions
In order to use SET SHOWPLAN_XML, you must have sufficient permissions to execute the statements on which
SET SHOWPLAN_XML is executed, and you must have SHOWPLAN permission for all databases containing
referenced objects.
For SELECT, INSERT, UPDATE, DELETE, EXEC stored_procedure, and EXEC user_defined_function statements, to
produce a Showplan the user must:
Have the appropriate permissions to execute the Transact-SQL statements.
Have SHOWPLAN permission on all databases containing objects referenced by the Transact-SQL
statements, such as tables, views, and so on.
For all other statements, such as DDL, USE database_name, SET, DECLARE, dynamic SQL, and so on, only
the appropriate permissions to execute the Transact-SQL statements are needed.
Examples
The two statements that follow use the SET SHOWPLAN_XML setting to show the way SQL Server analyzes
and optimizes the use of indexes in queries.
The first query uses the Equals comparison operator (=) in the WHERE clause on an indexed column. The second
query uses the LIKE operator in the WHERE clause. This forces SQL Server to use a clustered index scan to find
the data meeting the WHERE clause condition. The values in the EstimateRows and the
EstimatedTotalSubtreeCost attributes are smaller for the first indexed query, indicating that it is processed
much faster and uses fewer resources than the nonindexed query.
USE AdventureWorks2012;
GO
SET SHOWPLAN_XML ON;
GO
-- First query.
SELECT BusinessEntityID
FROM HumanResources.Employee
WHERE NationalIDNumber = '509647174';
GO
-- Second query.
SELECT BusinessEntityID, JobTitle
FROM HumanResources.Employee
WHERE JobTitle LIKE 'Production%';
GO
SET SHOWPLAN_XML OFF;
See Also
SET Statements (Transact-SQL)
SET STATISTICS IO (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes SQL Server to display information regarding the amount of disk activity generated by Transact-SQL
statements.
Transact-SQL Syntax Conventions
Syntax
SET STATISTICS IO { ON | OFF }
Remarks
When STATISTICS IO is ON, statistical information is displayed. When OFF, the information is not displayed.
After this option is set ON, all subsequent Transact-SQL statements return the statistical information until the
option is set to OFF.
The following table lists and describes the output items.
OUTPUT ITEM: MEANING
Scan count: Number of seeks/scans started after reaching the leaf level in any direction to retrieve all the
values to construct the final dataset for the output.
Scan count is 1 when you are searching for one value using a non-unique clustered index that is defined on a
non-primary key column. This is done to check for duplicate values for the key value that you are searching
for. For example, WHERE Clustered_Index_Key_Column = <value> .
read-ahead reads: Number of pages placed into the cache for the query.
lob logical reads: Number of text, ntext, image, or large value type (varchar(max), nvarchar(max),
varbinary(max)) pages read from the data cache.
lob physical reads: Number of text, ntext, image, or large value type pages read from disk.
lob read-ahead reads: Number of text, ntext, image, or large value type pages placed into the cache for the
query.
The setting of SET STATISTICS IO is set at execute or run time and not at parse time.
NOTE
When Transact-SQL statements retrieve LOB columns, some LOB retrieval operations might require traversing the LOB tree
multiple times. This may cause SET STATISTICS IO to report higher than expected logical reads.
Permissions
To use SET STATISTICS IO, users must have the appropriate permissions to execute the Transact-SQL statement.
The SHOWPLAN permission is not required.
Examples
This example shows how many logical and physical reads are used by SQL Server as it processes the statements.
USE AdventureWorks2012;
GO
SET STATISTICS IO ON;
GO
SELECT *
FROM Production.ProductCostHistory
WHERE StandardCost < 500.00;
GO
SET STATISTICS IO OFF;
GO
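The message output resembles the following; the exact counts depend on the data, cache state, and version,
so these numbers are illustrative only:

Table 'ProductCostHistory'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0,
lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.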
See Also
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)
SET STATISTICS TIME (Transact-SQL)
SET STATISTICS PROFILE (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Displays the profile information for a statement. STATISTICS PROFILE works for ad hoc queries, views, and stored
procedures.
Transact-SQL Syntax Conventions
Syntax
SET STATISTICS PROFILE { ON | OFF }
Remarks
When STATISTICS PROFILE is ON, each executed query returns its regular result set, followed by an additional
result set that shows a profile of the query execution.
The additional result set contains the SET SHOWPLAN_ALL columns for the query, plus these additional columns:
Rows: Actual number of rows produced by each operator.
Executes: Number of times the operator has been executed.
Permissions
To use SET STATISTICS PROFILE and view the output, users must have the following permissions:
Appropriate permissions to execute the Transact-SQL statements.
SHOWPLAN permission on all databases containing objects that are referenced by the Transact-SQL
statements.
For Transact-SQL statements that do not produce STATISTICS PROFILE result sets, only the appropriate
permissions to execute the Transact-SQL statements are required. For Transact-SQL statements that do
produce STATISTICS PROFILE result sets, checks for both the Transact-SQL statement execution
permission and the SHOWPLAN permission must succeed, or the Transact-SQL statement execution is
aborted and no Showplan information is generated.
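A minimal sketch, modeled on the examples in the other SET STATISTICS topics and assuming the
AdventureWorks2012 sample database:

USE AdventureWorks2012;
GO
SET STATISTICS PROFILE ON;
GO
SELECT *
FROM Production.ProductCostHistory
WHERE StandardCost < 500.00;
GO
SET STATISTICS PROFILE OFF;
GO

The query returns its normal result set, followed by a second result set that contains one row per operator in
the executed plan.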
See Also
SET Statements (Transact-SQL)
SET SHOWPLAN_ALL (Transact-SQL)
SET STATISTICS TIME (Transact-SQL)
SET STATISTICS IO (Transact-SQL)
SET STATISTICS TIME (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Displays the number of milliseconds required to parse, compile, and execute each statement.
Transact-SQL Syntax Conventions
Syntax
SET STATISTICS TIME { ON | OFF }
Remarks
When SET STATISTICS TIME is ON, the time statistics for a statement are displayed. When OFF, the time
statistics are not displayed.
The setting of SET STATISTICS TIME is set at execute or run time and not at parse time.
Microsoft SQL Server is unable to provide accurate statistics in fiber mode, which is activated when you enable
the lightweight pooling configuration option.
The cpu column in the sysprocesses table is only updated when a query executes with SET STATISTICS TIME
ON. When SET STATISTICS TIME is OFF, 0 is returned.
ON and OFF settings also affect the CPU column in the Process Info View for Current Activity in SQL Server
Management Studio.
Permissions
To use SET STATISTICS TIME, users must have the appropriate permissions to execute the Transact-SQL
statement. The SHOWPLAN permission is not required.
Examples
This example shows the server execution, parse, and compile times.
USE AdventureWorks2012;
GO
SET STATISTICS TIME ON;
GO
SELECT ProductID, StartDate, EndDate, StandardCost
FROM Production.ProductCostHistory
WHERE StandardCost < 500.00;
GO
SET STATISTICS TIME OFF;
GO
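The output for each statement resembles the following; the timings are machine-dependent, so these numbers
are illustrative only:

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.
SQL Server Execution Times:
   CPU time = 0 ms, elapsed time = 2 ms.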
See Also
SET Statements (Transact-SQL)
SET STATISTICS IO (Transact-SQL)
SET STATISTICS XML (Transact-SQL)
5/3/2018 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Causes Microsoft SQL Server to execute Transact-SQL statements and generate detailed information about how
the statements were executed in the form of a well-defined XML document.
Transact-SQL Syntax Conventions
Syntax
SET STATISTICS XML { ON | OFF }
Remarks
The setting of SET STATISTICS XML is set at execute or run time and not at parse time.
When SET STATISTICS XML is ON, SQL Server returns execution information for each statement after executing
it. After this option is set ON, information about all subsequent Transact-SQL statements is returned until the
option is set to OFF. Note that SET STATISTICS XML need not be the only statement in a batch.
SET STATISTICS XML returns output as nvarchar(max) for applications, such as the sqlcmd utility, where the
XML output is subsequently used by other tools to display and process the query plan information.
SET STATISTICS XML returns information as a set of XML documents. Each statement after the SET STATISTICS
XML ON statement is reflected in the output by a single document. Each document contains the text of the
statement, followed by the details of the execution steps. The output shows run-time information such as the costs,
accessed indexes, and types of operations performed, join order, the number of times a physical operation is
performed, the number of rows each physical operator produced, and more.
The document containing the XML schema for the XML output by SET STATISTICS XML is copied during setup to
a local directory on the computer on which Microsoft SQL Server is installed. It can be found on the drive
containing the SQL Server installation files, at:
\Microsoft SQL Server\100\Tools\Binn\schemas\sqlserver\2004\07\showplan\showplanxml.xsd
The Showplan Schema can also be found at this Web site.
SET STATISTICS PROFILE and SET STATISTICS XML are counterparts of each other. The former produces textual
output; the latter produces XML output. In future versions of SQL Server, new query execution plan information
will only be displayed through the SET STATISTICS XML statement, not the SET STATISTICS PROFILE statement.
NOTE
If Include Actual Execution Plan is selected in SQL Server Management Studio, this SET option does not produce XML
Showplan output. Clear the Include Actual Execution Plan button before using this SET option.
Permissions
To use SET STATISTICS XML and view the output, users must have the following permissions:
Appropriate permissions to execute the Transact-SQL statements.
SHOWPLAN permission on all databases containing objects that are referenced by the Transact-SQL
statements.
For Transact-SQL statements that do not produce STATISTICS XML result sets, only the appropriate
permissions to execute the Transact-SQL statements are required. For Transact-SQL statements that do
produce STATISTICS XML result sets, checks for both the Transact-SQL statement execution permission
and the SHOWPLAN permission must succeed, or the Transact-SQL statement execution is aborted and no
Showplan information is generated.
Examples
The two statements that follow use the SET STATISTICS XML settings to show the way SQL Server analyzes and
optimizes the use of indexes in queries. The first query uses the Equals (=) comparison operator in the WHERE
clause on an indexed column. The second query uses the LIKE operator in the WHERE clause. This forces SQL
Server to use a clustered index scan to find the data that satisfies the WHERE clause condition. The values in the
EstimateRows and the EstimatedTotalSubtreeCost attributes are smaller for the first indexed query, indicating
that it was processed much faster and used fewer resources than the nonindexed query.
USE AdventureWorks2012;
GO
SET STATISTICS XML ON;
GO
-- First query.
SELECT BusinessEntityID
FROM HumanResources.Employee
WHERE NationalIDNumber = '509647174';
GO
-- Second query.
SELECT BusinessEntityID, JobTitle
FROM HumanResources.Employee
WHERE JobTitle LIKE 'Production%';
GO
SET STATISTICS XML OFF;
GO
See Also
SET SHOWPLAN_XML (Transact-SQL)
sqlcmd Utility
SET TEXTSIZE (Transact-SQL)
5/3/2018 • 1 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Specifies the size of varchar(max), nvarchar(max), varbinary(max), text, ntext, and image data returned by a
SELECT statement.
IMPORTANT
ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data types
in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and
varbinary(max) instead.
Syntax
SET TEXTSIZE { number }
Arguments
number
Is the length of varchar(max), nvarchar(max), varbinary(max), text, ntext, or image data, in bytes. number is
an integer with a maximum value of 2147483647 (2 GB). A value of -1 indicates unlimited size. A value of 0 resets
the size to the default value of 4 KB.
The SQL Server Native Client (10.0 and higher) and ODBC Driver for SQL Server automatically specify -1
(unlimited) when connecting.
Drivers older than SQL Server 2008: The SQL Server Native Client ODBC driver and SQL Server Native Client
OLE DB Provider (version 9) for SQL Server automatically set TEXTSIZE to 2147483647 when connecting.
Remarks
Setting SET TEXTSIZE affects the @@TEXTSIZE function.
The setting of SET TEXTSIZE is set at execute or run time and not at parse time.
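A minimal sketch of the effect (the values follow from the rules above):

SET TEXTSIZE 2048;
SELECT @@TEXTSIZE;  -- Returns 2048.
SET TEXTSIZE 0;     -- Resets the size to the default of 4096 bytes.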
Permissions
Requires membership in the public role.
See Also
@@TEXTSIZE (Transact-SQL)
Data Types (Transact-SQL)
SET Statements (Transact-SQL)
SET TRANSACTION ISOLATION LEVEL (Transact-
SQL)
5/3/2018 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Controls the locking and row versioning behavior of Transact-SQL statements issued by a connection to SQL
Server.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
SET TRANSACTION ISOLATION LEVEL
    { READ UNCOMMITTED
    | READ COMMITTED
    | REPEATABLE READ
    | SNAPSHOT
    | SERIALIZABLE
    }
[ ; ]
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
[ ; ]
Arguments
READ UNCOMMITTED
Specifies that statements can read rows that have been modified by other transactions but not yet committed.
Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other transactions
from modifying data read by the current transaction. READ UNCOMMITTED transactions are also not blocked
by exclusive locks that would prevent the current transaction from reading rows that have been modified but not
committed by other transactions. When this option is set, it is possible to read uncommitted modifications, which
are called dirty reads. Values in the data can be changed and rows can appear or disappear in the data set before
the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT
statements in a transaction. This is the least restrictive of the isolation levels.
In SQL Server, you can also minimize locking contention while protecting transactions from dirty reads of
uncommitted data modifications using either:
The READ COMMITTED isolation level with the READ_COMMITTED_SNAPSHOT database option set to
ON.
The SNAPSHOT isolation level.
READ COMMITTED
Specifies that statements cannot read data that has been modified but not committed by other
transactions. This prevents dirty reads. Data can be changed by other transactions between individual
statements within the current transaction, resulting in nonrepeatable reads or phantom data. This option is
the SQL Server default.
The behavior of READ COMMITTED depends on the setting of the READ_COMMITTED_SNAPSHOT
database option:
If READ_COMMITTED_SNAPSHOT is set to OFF (the default), the Database Engine uses shared locks to
prevent other transactions from modifying rows while the current transaction is running a read operation.
The shared locks also block the statement from reading rows modified by other transactions until the other
transaction is completed. The shared lock type determines when it will be released. Row locks are released
before the next row is processed. Page locks are released when the next page is read, and table locks are
released when the statement finishes.
NOTE
If READ_COMMITTED_SNAPSHOT is set to ON, the Database Engine uses row versioning to present each statement
with a transactionally consistent snapshot of the data as it existed at the start of the statement. Locks are not used
to protect the data from updates by other transactions.
Snapshot isolation supports FILESTREAM data. Under snapshot isolation mode, FILESTREAM data read by any
statement in a transaction will be the transactionally consistent version of the data that existed at the start of the
transaction.
When the READ_COMMITTED_SNAPSHOT database option is ON, you can use the
READCOMMITTEDLOCK table hint to request shared locking instead of row versioning for individual
statements in transactions running at the READ COMMITTED isolation level.
NOTE
When you set the READ_COMMITTED_SNAPSHOT option, only the connection executing the ALTER DATABASE command is
allowed in the database. There must be no other open connection in the database until ALTER DATABASE is complete. The
database does not have to be in single-user mode.
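For example, enabling the option for a database (the database name is illustrative):

ALTER DATABASE AdventureWorks2012
SET READ_COMMITTED_SNAPSHOT ON;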
REPEATABLE READ
Specifies that statements cannot read data that has been modified but not yet committed by other transactions
and that no other transactions can modify data that has been read by the current transaction until the current
transaction completes.
Shared locks are placed on all data read by each statement in the transaction and are held until the transaction
completes. This prevents other transactions from modifying any rows that have been read by the current
transaction. Other transactions can insert new rows that match the search conditions of statements issued by the
current transaction. If the current transaction then retries the statement it will retrieve the new rows, which results
in phantom reads. Because shared locks are held to the end of a transaction instead of being released at the end
of each statement, concurrency is lower than the default READ COMMITTED isolation level. Use this option only
when necessary.
SNAPSHOT
Specifies that data read by any statement in a transaction will be the transactionally consistent version of the data
that existed at the start of the transaction. The transaction can only recognize data modifications that were
committed before the start of the transaction. Data modifications made by other transactions after the start of the
current transaction are not visible to statements executing in the current transaction. The effect is as if the
statements in a transaction get a snapshot of the committed data as it existed at the start of the transaction.
Except when a database is being recovered, SNAPSHOT transactions do not request locks when reading data.
SNAPSHOT transactions reading data do not block other transactions from writing data. Transactions writing
data do not block SNAPSHOT transactions from reading data.
During the roll-back phase of a database recovery, SNAPSHOT transactions will request a lock if an attempt is
made to read data that is locked by another transaction that is being rolled back. The SNAPSHOT transaction is
blocked until that transaction has been rolled back. The lock is released immediately after it has been granted.
The ALLOW_SNAPSHOT_ISOLATION database option must be set to ON before you can start a transaction
that uses the SNAPSHOT isolation level. If a transaction using the SNAPSHOT isolation level accesses data in
multiple databases, ALLOW_SNAPSHOT_ISOLATION must be set to ON in each database.
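A sketch of the required sequence (the database and table names are illustrative):

ALTER DATABASE AdventureWorks2012
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
GO
BEGIN TRANSACTION;
-- Reads see the transactionally consistent snapshot taken when
-- the transaction first accesses data.
SELECT COUNT(*) FROM Production.Product;
COMMIT TRANSACTION;
GO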
A transaction that started under another isolation level cannot be changed to the SNAPSHOT isolation level;
attempting to do so causes the transaction to abort. If a transaction starts in the SNAPSHOT isolation level, you can change it to
another isolation level and then back to SNAPSHOT. A transaction starts the first time it accesses data.
A transaction running under SNAPSHOT isolation level can view changes made by that transaction. For example,
if the transaction performs an UPDATE on a table and then issues a SELECT statement against the same table, the
modified data will be included in the result set.
NOTE
Under snapshot isolation mode, FILESTREAM data read by any statement in a transaction will be the transactionally
consistent version of the data that existed at the start of the transaction, not at the start of the statement.
SERIALIZABLE
Specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current
transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any
statements in the current transaction until the current transaction completes.
Range locks are placed in the range of key values that match the search conditions of each statement
executed in a transaction. This blocks other transactions from updating or inserting any rows that would
qualify for any of the statements executed by the current transaction. This means that if any of the
statements in a transaction are executed a second time, they will read the same set of rows. The range locks
are held until the transaction completes. This is the most restrictive of the isolation levels because it locks
entire ranges of keys and holds the locks until the transaction completes. Because concurrency is lower, use
this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all
SELECT statements in a transaction.
Remarks
Only one of the isolation level options can be set at a time, and it remains set for that connection until it is
explicitly changed. All read operations performed within the transaction operate under the rules for the specified
isolation level unless a table hint in the FROM clause of a statement specifies different locking or versioning
behavior for a table.
The transaction isolation levels define the type of locks acquired on read operations. Shared locks acquired for
READ COMMITTED or REPEATABLE READ are generally row locks, although the row locks can be escalated to
page or table locks if a significant number of the rows in a page or table are referenced by the read. If a row is
modified by the transaction after it has been read, the transaction acquires an exclusive lock to protect that row,
and the exclusive lock is retained until the transaction completes. For example, if a REPEATABLE READ
transaction has a shared lock on a row, and the transaction then modifies the row, the shared row lock is
converted to an exclusive row lock.
With one exception, you can switch from one isolation level to another at any time during a transaction. The
exception occurs when changing from any isolation level to SNAPSHOT isolation. Doing this causes the
transaction to fail and roll back. However, you can change a transaction started in SNAPSHOT isolation to any
other isolation level.
When you change a transaction from one isolation level to another, resources that are read after the change are
protected according to the rules of the new level. Resources that are read before the change continue to be
protected according to the rules of the previous level. For example, if a transaction changed from READ
COMMITTED to SERIALIZABLE, the shared locks acquired after the change are now held until the end of the
transaction.
If you issue SET TRANSACTION ISOLATION LEVEL in a stored procedure or trigger, when the object returns
control the isolation level is reset to the level in effect when the object was invoked. For example, if you set
REPEATABLE READ in a batch, and the batch then calls a stored procedure that sets the isolation level to
SERIALIZABLE, the isolation level setting reverts to REPEATABLE READ when the stored procedure returns
control to the batch.
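A sketch of that behavior (the procedure name is illustrative):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
GO
CREATE PROCEDURE dbo.usp_RunSerializable
AS
    -- Statements in this procedure run under SERIALIZABLE.
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT COUNT(*) FROM HumanResources.Department;
GO
EXEC dbo.usp_RunSerializable;
-- Back in the batch, the session has reverted to REPEATABLE READ;
-- DBCC USEROPTIONS reports the isolation level in effect.
DBCC USEROPTIONS;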
NOTE
User-defined functions and common language runtime (CLR) user-defined types cannot execute SET TRANSACTION
ISOLATION LEVEL. However, you can override the isolation level by using a table hint. For more information, see Table Hints
(Transact-SQL).
When you use sp_bindsession to bind two sessions, each session retains its isolation level setting. Using SET
TRANSACTION ISOLATION LEVEL to change the isolation level setting of one session does not affect the
setting of any other sessions bound to it.
SET TRANSACTION ISOLATION LEVEL takes effect at execute or run time, and not at parse time.
Optimized bulk load operations on heaps block queries that are running under the following isolation levels:
SNAPSHOT
READ UNCOMMITTED
READ COMMITTED using row versioning
Conversely, queries that run under these isolation levels block optimized bulk load operations on heaps.
For more information about bulk load operations, see Bulk Import and Export of Data (SQL Server).
FILESTREAM-enabled databases support the following transaction isolation levels.
Examples
The following example sets the TRANSACTION ISOLATION LEVEL for the session. For each Transact-SQL statement
that follows, SQL Server holds all of the shared locks until the end of the transaction.
USE AdventureWorks2012;
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
GO
BEGIN TRANSACTION;
GO
SELECT *
FROM HumanResources.EmployeePayHistory;
GO
SELECT *
FROM HumanResources.Department;
GO
COMMIT TRANSACTION;
GO
See Also
ALTER DATABASE (Transact-SQL)
DBCC USEROPTIONS (Transact-SQL)
SELECT (Transact-SQL)
SET Statements (Transact-SQL)
Table Hints (Transact-SQL)
SET XACT_ABORT (Transact-SQL)
5/3/2018 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
NOTE
The THROW statement honors SET XACT_ABORT. RAISERROR does not. New applications should use THROW instead of
RAISERROR.
Specifies whether SQL Server automatically rolls back the current transaction when a Transact-SQL statement
raises a run-time error.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
SET XACT_ABORT { ON | OFF }
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
SET XACT_ABORT ON
Remarks
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is
terminated and rolled back.
When SET XACT_ABORT is OFF, in some cases only the Transact-SQL statement that raised the error is rolled
back and the transaction continues processing. Depending upon the severity of the error, the entire transaction
may be rolled back even when SET XACT_ABORT is OFF. OFF is the default setting.
Compile errors, such as syntax errors, are not affected by SET XACT_ABORT.
XACT_ABORT must be set ON for data modification statements in an implicit or explicit transaction against most
OLE DB providers, including SQL Server. The only case where this option is not required is if the provider
supports nested transactions.
When ANSI_WARNINGS=OFF, permissions violations cause transactions to abort.
The setting of SET XACT_ABORT is set at execute or run time and not at parse time.
To view the current setting, examine the XACT_ABORT bit of the @@OPTIONS bitmask.
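One way to read that bit (16384 is the XACT_ABORT bit documented for @@OPTIONS):

DECLARE @XACT_ABORT VARCHAR(3) = 'OFF';
IF ( (16384 & @@OPTIONS) = 16384 )
    SET @XACT_ABORT = 'ON';
SELECT @XACT_ABORT AS XACT_ABORT;

Examples
The following example causes a foreign key violation error inside a transaction, first with XACT_ABORT set
OFF and then with XACT_ABORT set ON.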
USE AdventureWorks2012;
GO
IF OBJECT_ID(N't2', N'U') IS NOT NULL
DROP TABLE t2;
GO
IF OBJECT_ID(N't1', N'U') IS NOT NULL
DROP TABLE t1;
GO
CREATE TABLE t1
(a INT NOT NULL PRIMARY KEY);
CREATE TABLE t2
(a INT NOT NULL REFERENCES t1(a));
GO
INSERT INTO t1 VALUES (1);
INSERT INTO t1 VALUES (3);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (6);
GO
SET XACT_ABORT OFF;
GO
BEGIN TRANSACTION;
INSERT INTO t2 VALUES (1);
INSERT INTO t2 VALUES (2); -- Foreign key error.
INSERT INTO t2 VALUES (3);
COMMIT TRANSACTION;
GO
SET XACT_ABORT ON;
GO
BEGIN TRANSACTION;
INSERT INTO t2 VALUES (4);
INSERT INTO t2 VALUES (5); -- Foreign key error.
INSERT INTO t2 VALUES (6);
COMMIT TRANSACTION;
GO
-- SELECT shows only keys 1 and 3 added.
-- Key 2 insert failed and was rolled back, but
-- XACT_ABORT was OFF and rest of transaction
-- succeeded.
-- Key 5 insert error with XACT_ABORT ON caused
-- all of the second transaction to roll back.
SELECT *
FROM t2;
GO
See Also
THROW (Transact-SQL)
BEGIN TRANSACTION (Transact-SQL)
COMMIT TRANSACTION (Transact-SQL)
ROLLBACK TRANSACTION (Transact-SQL)
SET Statements (Transact-SQL)
@@TRANCOUNT (Transact-SQL)
TRUNCATE TABLE (Transact-SQL)
5/3/2018 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Removes all rows from a table or specified partitions of a table, without logging the individual row deletions.
TRUNCATE TABLE is similar to the DELETE statement with no WHERE clause; however, TRUNCATE TABLE is
faster and uses fewer system and transaction log resources.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
TRUNCATE TABLE
[ { database_name .[ schema_name ] . | schema_name . } ]
table_name
[ WITH ( PARTITIONS ( { <partition_number_expression> | <range> }
[ , ...n ] ) ) ]
[ ; ]
<range> ::=
<partition_number_expression> TO <partition_number_expression>
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
TRUNCATE TABLE [ database_name . [ schema_name ] . | schema_name . ] table_name
[;]
Arguments
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of the table to truncate or from which all rows are removed. table_name must be a literal. table_name
cannot be the OBJECT_ID() function or a variable.
WITH ( PARTITIONS ( { <partition_number_expression> | <range> } [ , ...n ] ) )
Applies to: SQL Server (SQL Server 2016 (13.x) through current version)
Specifies the partitions to truncate or from which all rows are removed. If the table is not partitioned, the WITH
PARTITIONS argument will generate an error. If the WITH PARTITIONS clause is not provided, the entire table
will be truncated.
<partition_number_expression> can be specified in the following ways:
Provide the number of a partition, for example: WITH (PARTITIONS (2))
Provide the partition numbers for several individual partitions separated by commas, for example:
WITH (PARTITIONS (1, 5))
Provide both ranges and individual partitions, for example: WITH (PARTITIONS (2, 4, 6 TO 8))
<range> can be specified as partition numbers separated by the word TO, for example:
WITH (PARTITIONS (6 TO 8))
To truncate a partitioned table, the table and indexes must be aligned (partitioned on the same partition
function).
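For example, individual partitions and partition ranges can be emptied in a single statement. This is a sketch: dbo.PartitionDemo is a hypothetical table assumed to be created on an aligned partition scheme.

```sql
-- dbo.PartitionDemo is a hypothetical partitioned table with aligned indexes.
-- Empty partitions 2 and 4, plus the range 6 through 8, in one statement.
TRUNCATE TABLE dbo.PartitionDemo
WITH (PARTITIONS (2, 4, 6 TO 8));
```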
Remarks
Compared to the DELETE statement, TRUNCATE TABLE has the following advantages:
Less transaction log space is used.
The DELETE statement removes rows one at a time and records an entry in the transaction log for each
deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table
data and records only the page deallocations in the transaction log.
Fewer locks are typically used.
When the DELETE statement is executed using a row lock, each row in the table is locked for deletion.
TRUNCATE TABLE always locks the table (including a schema (SCH-M) lock) and page but not each row.
Without exception, zero pages are left in the table.
After a DELETE statement is executed, the table can still contain empty pages. For example, empty pages in
a heap cannot be deallocated without at least an exclusive (LCK_M_X) table lock. If the delete operation
does not use a table lock, the table (heap) will contain many empty pages. For indexes, the delete operation
can leave empty pages behind, although these pages will be deallocated quickly by a background cleanup
process.
TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints,
indexes, and so on remain. To remove the table definition in addition to its data, use the DROP TABLE
statement.
If the table contains an identity column, the counter for that column is reset to the seed value defined for
the column. If no seed was defined, the default value 1 is used. To retain the identity counter, use DELETE
instead.
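The identity reset can be seen with a small sketch; dbo.IdentityDemo is a hypothetical table.

```sql
-- Hypothetical table with a seed of 100 and an increment of 5.
CREATE TABLE dbo.IdentityDemo (id INT IDENTITY(100, 5), val CHAR(1));
INSERT INTO dbo.IdentityDemo (val) VALUES ('a'), ('b');  -- ids 100 and 105
TRUNCATE TABLE dbo.IdentityDemo;
INSERT INTO dbo.IdentityDemo (val) VALUES ('c');         -- id restarts at the seed, 100
-- DELETE FROM dbo.IdentityDemo would instead have left the counter in place,
-- so the next insert would have received id 110.
```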
Restrictions
You cannot use TRUNCATE TABLE on tables that:
Are referenced by a FOREIGN KEY constraint. (You can truncate a table that has a foreign key that
references itself.)
Participate in an indexed view.
Are published by using transactional replication or merge replication.
For tables with one or more of these characteristics, use the DELETE statement instead.
TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions.
For more information, see CREATE TRIGGER (Transact-SQL).
In Azure SQL Data Warehouse and Parallel Data Warehouse:
TRUNCATE TABLE is not allowed within the EXPLAIN statement.
TRUNCATE TABLE cannot be run inside a transaction.
Permissions
The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table
owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and
are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a
stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.
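For example, a caller who lacks ALTER on the table can still be allowed to truncate it through a wrapper procedure. This is a sketch; the procedure name and the TruncateRole role are hypothetical.

```sql
-- Wrapper procedure that runs under the module owner's permissions.
CREATE PROCEDURE dbo.TruncateJobCandidate
WITH EXECUTE AS OWNER
AS
    TRUNCATE TABLE HumanResources.JobCandidate;
GO
-- The caller needs only EXECUTE on the procedure, not ALTER on the table.
GRANT EXECUTE ON dbo.TruncateJobCandidate TO TruncateRole;
```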
Examples
A. Truncate a Table
The following example removes all data from the JobCandidate table. SELECT statements are included before and
after the TRUNCATE TABLE statement to compare results.
USE AdventureWorks2012;
GO
SELECT COUNT(*) AS BeforeTruncateCount
FROM HumanResources.JobCandidate;
GO
TRUNCATE TABLE HumanResources.JobCandidate;
GO
SELECT COUNT(*) AS AfterTruncateCount
FROM HumanResources.JobCandidate;
GO
See Also
DELETE (Transact-SQL)
DROP TABLE (Transact-SQL)
IDENTITY (Property) (Transact-SQL)
UPDATE STATISTICS (Transact-SQL)
5/3/2018 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data
Warehouse Parallel Data Warehouse
Updates query optimization statistics on a table or indexed view. By default, the query optimizer already updates
statistics as necessary to improve the query plan; in some cases you can improve query performance by using
UPDATE STATISTICS or the stored procedure sp_updatestats to update statistics more frequently than the default
updates.
Updating statistics ensures that queries compile with up-to-date statistics. However, updating statistics causes
queries to recompile. We recommend not updating statistics too frequently because there is a performance
tradeoff between improving query plans and the time it takes to recompile queries. The specific tradeoffs depend
on your application. UPDATE STATISTICS can use tempdb to sort the sample of rows for building statistics.
Transact-SQL Syntax Conventions
Syntax
-- Syntax for SQL Server and Azure SQL Database
UPDATE STATISTICS table_or_indexed_view_name
[ { { index_or_statistics_name } | ( { index_or_statistics_name } [ ,...n ] ) } ]
[ WITH
[ FULLSCAN [ [ , ] PERSIST_SAMPLE_PERCENT = { ON | OFF } ] ]
| SAMPLE number { PERCENT | ROWS } [ [ , ] PERSIST_SAMPLE_PERCENT = { ON | OFF } ]
| RESAMPLE [ ON PARTITIONS ( { <partition_number> | <range> } [, ...n] ) ]
| <update_stats_stream_option> [ ,...n ]
[ [ , ] [ ALL | COLUMNS | INDEX ] ]
[ [ , ] NORECOMPUTE ]
[ [ , ] INCREMENTAL = { ON | OFF } ]
[ [ , ] MAXDOP = max_degree_of_parallelism ]
]
[ ; ]
<update_stats_stream_option> ::=
[ STATS_STREAM = stats_stream ]
[ ROWCOUNT = numeric_constant ]
[ PAGECOUNT = numeric_constant ]
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
UPDATE STATISTICS [ schema_name . ] table_name
[ ( { statistics_name | index_name } ) ]
[ WITH
{
FULLSCAN
| SAMPLE number PERCENT
}
]
[;]
Arguments
table_or_indexed_view_name
Is the name of the table or indexed view that contains the statistics object.
index_or_statistics_name
Is the name of the index to update statistics on or name of the statistics to update. If index_or_statistics_name is
not specified, the query optimizer updates all statistics for the table or indexed view. This includes statistics created
using the CREATE STATISTICS statement, single-column statistics created when AUTO_CREATE_STATISTICS is
on, and statistics created for indexes.
For more information about AUTO_CREATE_STATISTICS, see ALTER DATABASE SET Options (Transact-SQL). To
view all indexes for a table or view, you can use sp_helpindex.
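For example:

```sql
-- List all indexes (and therefore the index statistics names) on the table.
EXEC sp_helpindex 'Sales.SalesOrderDetail';
```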
FULLSCAN
Compute statistics by scanning all rows in the table or indexed view. FULLSCAN and SAMPLE 100 PERCENT
have the same results. FULLSCAN cannot be used with the SAMPLE option.
SAMPLE number { PERCENT | ROWS }
Specifies the approximate percentage or number of rows in the table or indexed view for the query optimizer to
use when it updates statistics. For PERCENT, number can be from 0 through 100 and for ROWS, number can be
from 0 to the total number of rows. The actual percentage or number of rows the query optimizer samples might
not match the percentage or number specified. For example, the query optimizer scans all rows on a data page.
SAMPLE is useful for special cases in which the query plan, based on default sampling, is not optimal. In most
situations, it is not necessary to specify SAMPLE because the query optimizer uses sampling and determines the
statistically significant sample size by default, as required to create high-quality query plans.
Starting with SQL Server 2016 (13.x), sampling of data to build statistics is done in parallel, when using
compatibility level 130, to improve the performance of statistics collection. The query optimizer will use parallel
sample statistics, whenever a table size exceeds a certain threshold.
SAMPLE cannot be used with the FULLSCAN option. When neither SAMPLE nor FULLSCAN is specified, the
query optimizer uses sampled data and computes the sample size by default.
We recommend against specifying 0 PERCENT or 0 ROWS. When 0 PERCENT or ROWS is specified, the
statistics object is updated but does not contain statistics data.
For most workloads, a full scan is not required, and default sampling is adequate.
However, certain workloads that are sensitive to widely varying data distributions may require an increased
sample size, or even a full scan.
For more information, see the CSS SQL Escalation Services blog.
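For example, an explicit sample size can be requested either as a percentage or as an approximate row count. A sketch against the AdventureWorks sample tables used elsewhere in this topic:

```sql
-- Sample roughly half of the table.
UPDATE STATISTICS Sales.SalesOrderDetail
WITH SAMPLE 50 PERCENT;

-- Or request an approximate row count instead of a percentage.
UPDATE STATISTICS Sales.SalesOrderDetail
WITH SAMPLE 100000 ROWS;
```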
RESAMPLE
Update each statistic using its most recent sample rate.
Using RESAMPLE can result in a full-table scan. For example, statistics for indexes use a full-table scan for their
sample rate. When none of the sample options (SAMPLE, FULLSCAN, RESAMPLE) are specified, the query
optimizer samples the data and computes the sample size by default.
PERSIST_SAMPLE_PERCENT = { ON | OFF }
When ON, the statistics will retain the set sampling percentage for subsequent updates that do not explicitly
specify a sampling percentage. When OFF, statistics sampling percentage will get reset to default sampling in
subsequent updates that do not explicitly specify a sampling percentage. The default is OFF.
NOTE
If AUTO_UPDATE_STATISTICS is executed, it uses the persisted sampling percentage if available, or uses the default sampling
percentage if not. RESAMPLE behavior is not affected by this option.
TIP
DBCC SHOW_STATISTICS and sys.dm_db_stats_properties expose the persisted sample percent value for the selected
statistic.
Applies to: SQL Server 2016 (13.x) (starting with SP1 CU4) and SQL Server 2017 (14.x) (starting with CU1).
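For example, the following sketch pins a 10 percent sample rate so that later updates that omit a sample size reuse it:

```sql
-- Persist the 10 percent sample rate for this statistics object.
UPDATE STATISTICS Sales.SalesOrderDetail
WITH SAMPLE 10 PERCENT, PERSIST_SAMPLE_PERCENT = ON;
-- Subsequent automatic or manual updates that do not specify a sample
-- size will reuse the persisted 10 percent.
```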
ON PARTITIONS ( { <partition_number> | <range> } [, ...n] )
Forces the leaf-level statistics covering the partitions specified in the ON PARTITIONS clause to be
recomputed, and then merged to build the global statistics. WITH RESAMPLE is required because partition
statistics built with different sample rates cannot be merged together.
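For example (a sketch; dbo.PartitionDemo and IX_PartitionDemo are hypothetical names, and the statistics object must have been created as incremental):

```sql
-- Recompute leaf-level statistics for partitions 2 and 3 only,
-- then merge them into the global statistics object.
UPDATE STATISTICS dbo.PartitionDemo (IX_PartitionDemo)
WITH RESAMPLE ON PARTITIONS (2, 3);
```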
Applies to: SQL Server 2014 (12.x) through SQL Server 2017
ALL | COLUMNS | INDEX
Update all existing statistics, statistics created on one or more columns, or statistics created for indexes. If none of
the options are specified, the UPDATE STATISTICS statement updates all statistics on the table or indexed view.
NORECOMPUTE
Disable the automatic statistics update option, AUTO_UPDATE_STATISTICS, for the specified statistics. If this
option is specified, the query optimizer completes this statistics update and disables future updates.
To re-enable the AUTO_UPDATE_STATISTICS option behavior, run UPDATE STATISTICS again without the
NORECOMPUTE option or run sp_autostats.
WARNING
Using this option can produce suboptimal query plans. We recommend using this option sparingly, and then only by a
qualified system administrator.
For more information about the AUTO_UPDATE_STATISTICS option, see ALTER DATABASE SET Options
(Transact-SQL).
INCREMENTAL = { ON | OFF }
When ON, the statistics are recreated as per partition statistics. When OFF, the statistics tree is dropped and SQL
Server re-computes the statistics. The default is OFF.
If per partition statistics are not supported, an error is generated. Incremental stats are not supported for the
following statistics types:
Statistics created with indexes that are not partition-aligned with the base table.
Statistics created on Always On readable secondary databases.
Statistics created on read-only databases.
Statistics created on filtered indexes.
Statistics created on views.
Statistics created on internal tables.
Statistics created with spatial indexes or XML indexes.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017
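For example (a sketch; dbo.PartitionDemo is a hypothetical partitioned table whose indexes are aligned with the base table):

```sql
-- Drop the existing statistics tree and rebuild it as
-- per-partition (incremental) statistics.
UPDATE STATISTICS dbo.PartitionDemo
WITH INCREMENTAL = ON;
```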
MAXDOP = max_degree_of_parallelism
Applies to: SQL Server (Starting with SQL Server 2016 (13.x) SP2 and SQL Server 2017 (14.x) CU3).
Overrides the max degree of parallelism configuration option for the duration of the statistic operation. For
more information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to
limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
max_degree_of_parallelism can be:
1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel statistic operation to the specified number or
fewer based on the current system workload.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
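For example, parallelism for one expensive statistics update can be capped without changing the server-wide configuration (a sketch):

```sql
-- Full scan, but use at most 4 processors for this operation only.
UPDATE STATISTICS Sales.SalesOrderDetail
WITH FULLSCAN, MAXDOP = 4;
```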
<update_stats_stream_option>
Identified for informational purposes only. Not supported. Future compatibility is not guaranteed.
Remarks
When to Use UPDATE STATISTICS
For more information about when to use UPDATE STATISTICS, see Statistics. To update statistics on all
user-defined and internal tables in the current database, you can instead run the sp_updatestats stored
procedure:
EXEC sp_updatestats;
Permissions
Requires ALTER permission on the table or view.
Examples
A. Update all statistics on a table
The following example updates the statistics for all indexes on the SalesOrderDetail table.
USE AdventureWorks2012;
GO
UPDATE STATISTICS Sales.SalesOrderDetail;
GO
B. Update the statistics for an index
The following example updates the statistics for the AK_SalesOrderDetail_rowguid index of the
SalesOrderDetail table.
USE AdventureWorks2012;
GO
UPDATE STATISTICS Sales.SalesOrderDetail AK_SalesOrderDetail_rowguid;
GO
C. Update statistics by using FULLSCAN and NORECOMPUTE
The following example updates the Products statistics object on the Product table, forces a full scan of all
rows in the Product table, and turns off automatic statistics updates for the statistics object.
USE AdventureWorks2012;
GO
UPDATE STATISTICS Production.Product(Products)
WITH FULLSCAN, NORECOMPUTE;
GO
See Also
Statistics
ALTER DATABASE (Transact-SQL)
CREATE STATISTICS (Transact-SQL)
DBCC SHOW_STATISTICS (Transact-SQL)
DROP STATISTICS (Transact-SQL)
sp_autostats (Transact-SQL)
sp_updatestats (Transact-SQL)
STATS_DATE (Transact-SQL)
sys.dm_db_stats_properties (Transact-SQL)
sys.dm_db_stats_histogram (Transact-SQL)