The Best of
Your community knowledge base
Visit www.sqlserverpedia.com
Follow us on Twitter:
twitter.com/SQLServerPedia
Quest Software Incorporated. To learn more about our solutions, contact your local sales representative or visit www.quest.com. Headquarters: 5 Polaris Way, Aliso Viejo, CA 92656, USA. © 2009 Quest Software Incorporated. ALL RIGHTS RESERVED. Quest Software and SQL Server are trademarks and registered trademarks of Quest Software, Inc. in the U.S.A. and/or other countries. All other trademarks and registered trademarks are property of their respective owners. 8_6_09
Contents

Examining Query Execution Plans
SQL Server Troubleshooting Tips and Tricks
Shrinking Databases
Restoring File and Filegroup Backups
Killing Sessions with SSIS
Moving SQL Server Logins Between Servers
Configuring Database Files for Optimal Performance
Stored Procedure Execution
Examining Query Execution Plans

This article explains how to generate both estimated and actual execution plans, and how to interpret them.
Graphical execution plans are accessed through the query window in Management Studio in SQL Server 2005/2008, or through Query Analyzer in SQL Server 2000. To a large degree, the functionality of graphical plans is the same in SQL Server 2000 and SQL Server 2008, but there are some fundamental differences, which are highlighted in this section.

All graphical plans are read from right to left and from top to bottom. That's important to know so that you can understand other concepts, such as how a hash join works. Each icon represents an operation. Some operations are the same in the estimated plan and the actual plan, and some are not. Operators are connected by arrows that represent the data feed: the output from one operator and the input for the next. The thickness of the data feed varies according to the amount of data it represents: thinner arrows represent fewer rows and thicker arrows represent more rows. Operators represent various objects and actions within the execution plan. A full listing of operators is available in Books Online.
Estimated Execution Plan

There are several ways to generate an estimated execution plan:

- Select the Display Estimated Execution Plan button from the toolbar
- Right-click within the query window and select Display Estimated Execution Plan
- Select the Query menu and then the Display Estimated Execution Plan menu choice
- Press Ctrl+L

When any of these actions is performed, an estimated execution plan is displayed without the query being executed.

Actual Execution Plan

An actual execution plan requires the query to be executed. To enable generation of the actual execution plan, do any one of the following:

- Select the Include Actual Execution Plan button from the toolbar
- Right-click within the query window and select Include Actual Execution Plan
- Select the Query menu and then the Include Actual Execution Plan menu choice
- Press Ctrl+M

After the query executes, the actual execution plan will be available in a separate tab in the results pane of the query window.

The primary reason for generating an execution plan is to work through it to understand what is happening in the query and what needs to be fixed. For example, consider the following query:
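The example query itself appeared as a screenshot in the original. A hypothetical reconstruction against the AdventureWorks sample database, inferred from the index and key names in the walkthrough below (the exact query text is an assumption), wrapped in SET STATISTICS XML as a T-SQL alternative to the Include Actual Execution Plan button:

```sql
-- Hypothetical example query (AdventureWorks assumed).
-- SET STATISTICS XML returns the actual execution plan as XML.
SET STATISTICS XML ON;

SELECT soh.OrderDate, sod.ProductID, sod.OrderQty
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.CustomerID = 29690;

SET STATISTICS XML OFF;
```

A query of this shape can produce the plan described next: an index seek on the CustomerID index, a key lookup against the clustered key, and nested loops joins into SalesOrderDetail.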
Start at the top right. There is an Index Seek (NonClustered) against the index [SalesOrderHeader].[IX_SalesOrderHeader_CustomerId]. This feeds data out to a Nested Loops (Inner Join) operator. Working down, you can see a Key Lookup (Clustered) operation against PK_SalesOrderHeader_SalesOrderID. This is a classic key lookup, or what used to be called a bookmark lookup. You can see that the data feeds back up to the Nested Loops operator and then down to another Nested Loops operator. Below that is a
Clustered Index Seek (Clustered) against the [PK_SalesOrderDetail_SalesOrderId] primary key. Finally, the data flow goes out to the SELECT operator. That's the basic information available within the execution plan; we will explore it further a bit later in the article. Hover the mouse over any operator to see the tool tip for that operation type, showing some of the detail behind the operator.

SQL Server 2000

To generate an estimated execution plan in SQL Server 2000, simply choose Query, Display Estimated Execution Plan. This option is equivalent to setting NOEXEC and SHOWPLAN_ALL on, but displays the execution plan in a graphical format. The query will not be executed; SQL Server will display the execution plan chosen by the optimizer. To execute the query and see the actual execution plan, choose Query and then select Show Execution Plan.

The graphical output in Query Analyzer is extremely helpful. This utility also lets you create and update statistics, and create, modify or drop existing indexes. If the statistics are missing or out of date, the graphical output for the table or index will be shown in red. Getting used to the various icons might take a little while, but if you move your mouse pointer over an icon, a tool tip will appear with a brief explanation, so it is not necessary to memorize the meaning of each one. After looking at the graphical plan you will be able to tell whether your query has a problem. The icon that you rarely want to see is a table scan, which looks like a table with a blue arrow in the middle of it.

SET Commands for Examining Queries

There are a few SET commands that can help you examine the query optimizer's decisions and decide whether they produce the desired results. Just like other commands, these SET commands can be turned ON or OFF, and they stay in force for the duration of the connection, or until you explicitly change the setting.

SET STATISTICS IO ON provides the number of physical reads (reads from disk), the number of logical reads (reads from the memory cache), the scan count, and the number of read-ahead reads (data or index pages placed in cache for the query). For example, consider the following query and the resulting statistics message:

SET NOEXEC ON will compile the query but will not execute it. This is helpful if you are testing a query that might take a long time: instead of running the query each time you make changes to it, you might want to examine the execution plan first.

SET STATISTICS TIME ON reports the CPU time and elapsed time SQL Server used to execute a particular query. For example, consider the following query and the resulting messages:

The output above might be somewhat confusing initially. The first statement refers to the time it took to execute the SET STATISTICS TIME ON statement itself, which is too small to measure. The second and third statements provide the parse and compile time for two statements: GO and SELECT * FROM authors. The last statement is the one we are most interested in; it is the actual time spent executing the SELECT * FROM authors command.

SET SHOWPLAN_ALL ON gives you detailed information about the execution plan. The output of SHOWPLAN_ALL is not straightforward, but understanding it lets you know what is going on behind the scenes. The following table describes the output of SHOWPLAN_ALL:

StmtText: Repeats the submitted query (in which case it's not very useful) or contains the physical and logical operations included in the query execution plan.
StmtId: Shows the number of the statements issued before the current statement in the current connection.
NodeId: Provides the node ID in the query.
Parent: Displays the ID of the parent step of the current node.
PhysicalOp: Shows the physical implementation of the algorithm chosen by the query optimizer. If the row type is not PLAN_ROW, this column is NULL.
LogicalOp: Shows the logical implementation of the algorithm chosen by the query optimizer. If the row type is not PLAN_ROW, this column is NULL.
Argument: Provides additional information about the physical operation. For instance, if a clustered index is being scanned, this column shows the name of the index as well as the index keys.
DefinedValues: Contains a comma-separated list of columns defined in the query, or the list of internal values examined by the query optimizer.
EstimateRows: Shows the estimated number of rows affected by the query.
EstimateIO: Provides the estimated I/O for the operation mentioned in this row.
EstimateCPU: Provides the estimated CPU usage for the operation mentioned in this row.
AvgRowSize: Provides the average row size in bytes passed by this operation.
TotalSubtreeCost: Gives the estimated cost of this operation as well as all child operations.
OutputList: Lists the columns in the result set.
Warnings: Contains a comma-separated list of warnings that pertain to the current operation. For instance, it might warn you that the statistics on a particular index being queried are out of date.
Type: Contains the appropriate Transact-SQL command type (such as SELECT or UPDATE) for the statements referenced in the query. For the rows that show the actual execution plan, this column contains PLAN_ROW.
Parallel: Shows that the operation is running in parallel if this column contains 1.
EstimateExecutions: Shows the estimated number of times this operation will have to be executed to satisfy the current query.

Perhaps the most useful column in the entire SHOWPLAN_ALL output is StmtText, which tells you about the type of operation performed: whether it is a table scan, a clustered or non-clustered index scan, and so on. Most of this information is repeated in the PhysicalOp, LogicalOp or Argument columns (whichever is appropriate). Another column to watch is Warnings; it might give you a clue to why your query isn't performing up to expectations.

Grant Fritchey works for FM Global, an industry-leading engineering and insurance company, as a principal DBA. He has developed large-scale applications in languages such as VB, C#, and Java, and has worked with SQL Server since version 6.0. He has worked in finance and consulting, and for three failed dot-coms. He is the author of Dissecting SQL Server Execution Plans (Simple Talk Publishing, 2008) and SQL Server 2008 Performance Tuning Distilled (Apress, 2009). His online presence includes:
Blog: http://scarydba.wordpress.com/
Twitter: http://twitter.com/GFritchey
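The SET STATISTICS queries and message output discussed in the article above appeared as screenshots in the original. A minimal sketch, assuming the pubs sample database referenced by the article:

```sql
-- Show I/O statistics and timing for a query (pubs sample database assumed)
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT * FROM authors;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- Compile without executing, e.g. to check a long-running query first
SET NOEXEC ON;
SELECT * FROM authors;
SET NOEXEC OFF;
```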
SQL Server Troubleshooting Tips and Tricks

Indexes

If you use included columns, you know the frustration of figuring out which columns are included. The following stored procedures can help:

- sp_helpindex: a system stored procedure that reports information about the indexes on a table or view
- sp_helpindex2: a rewrite of the sp_helpindex stored procedure, written by Kimberly Tripp
- dba_indexLookup_sp: a custom, non-system stored procedure, written by Michelle Ufford

Take a look at all of these and use the one that best meets your needs.

Block Selection

This tip is not specific to SQL Server; it's useful for any Microsoft product. Holding down Alt while you drag your mouse changes your selection behavior to block selection.

Missing Indexes

If you use SSMS 2008 to execute Display Estimated Query Plan (Ctrl+L), it will show whether you're missing any indexes. This works even if you connect SSMS 2008 to a SQL 2005 instance.

Keyboard Shortcuts

To choose a keyboard scheme in SQL Server Management Studio (SSMS), select Tools | Options | Environment | Keyboard.

Object Detail Explorer

One of the great updates in SQL Server 2008 is the Object Detail Explorer. For example, you can quickly find the table size and row counts of all the tables in a particular database. The Object Detail Explorer requires SQL 2008 Management Studio, but you can connect SQL 2008 SSMS to a 2005 instance. Note: if these options are not visible, right-click the column headers and add them to the display.

Query Execution Options

SSMS offers advanced settings to help prevent unintentional issues in production environments, such as a query that causes locking or blocking. To access these options in SSMS, choose Tools | Options | Query Execution | SQL Server | Advanced. Some suggestions:

- Change SET TRANSACTION ISOLATION LEVEL to READ UNCOMMITTED. This will minimize the impact of your ad-hoc queries by allowing dirty reads. While this can be beneficial in many production environments, make sure you understand the implications of this setting before implementing it.
- Change SET DEADLOCK_PRIORITY to Low. This tells SQL Server to select your session as the victim in the event of a deadlock.
- Change SET LOCK_TIMEOUT to a smaller, defined value, such as 30,000 milliseconds (30 seconds). By default, SQL Server will wait forever for a lock to be released. With a value specified, SQL Server will abort after the timeout period when a lock is encountered.

You can also make these same setting changes in Visual Studio.

This wiki article was adapted from a blog post by Michelle Ufford. Michelle is a SQL developer DBA for GoDaddy.com, where she works with high-volume, mission-critical databases. She has more than a decade of experience in a variety of technical roles and has worked with SQL Server for the last five years. She enjoys performance tuning and maintains an active SQL Server blog. Learn more at:
Blog: http://sqlfool.com/
Twitter: http://twitter.com/sqlfool/

Pain-of-the-Week Webcasts

Don't let SQL Server challenges beat you down. Get solutions and learn best practices from these free, educational Pain-of-the-Week webcasts at http://www.quest.com/backstage/pow.aspx
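The three session settings suggested under Query Execution Options above can also be combined at the top of an ad-hoc script; a minimal sketch (the table name is illustrative):

```sql
-- Defensive session settings for ad-hoc queries in production
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- allow dirty reads
SET DEADLOCK_PRIORITY LOW;        -- volunteer this session as deadlock victim
SET LOCK_TIMEOUT 30000;           -- stop waiting for locks after 30 seconds

SELECT COUNT(*) FROM dbo.Orders;  -- the ad-hoc query itself
```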
Shrinking Databases

Don't Touch That Shrink Button!

Many of us encounter unnecessary database autoshrinks and scheduled shrink jobs. This article offers insight on what you should be doing about your database sizes.

When you click that shrink button (or leave a database in autoshrink, or schedule a job to perform shrinks), you are asking SQL Server to remove the unused space from your database's files and deallocate it so the OS can do what it needs with it. If you do, there's a good chance that your database will continue to grow (as the majority of non-static databases tend to do). Depending on your autogrowth settings, this growth will probably be necessary, and you will end up shrinking it again. At best, this is just extra work (shrink, grow, shrink, grow), and the resulting file fragmentation is handled by your I/O subsystem. At worst, the file fragmentation interrupts what would otherwise have been contiguous files, potentially causing I/O-related performance problems.

There's a lot of confusion surrounding the difference between truncate and shrink. You may have truncated your log file, yet you still have no free space and the file hasn't reduced its footprint at all. This is because a truncation does nothing to the physical size of the allocated file on the operating system: a truncation frees up the used space within the file, while a shrink operation actually releases space from the file. This is why shrinking a log file that is using all of its space won't affect the file size, and why truncating a log file won't reduce it. A truncation would have to happen first to make room available for the shrink to work; this is not recommended, however.

Better Strategies

What are the alternatives to shrinks?

Allocate More Space than Necessary

Determine what your future data size needs will be, not what they are when the database initially goes live. Based on your needs, create the database size and set the autogrowth to a reasonable number in bytes rather than as a percentage. Then monitor your free space and look at size trending over time to plan for a larger allocation of space if your planning turns out to have been inaccurate. Your SAN team may point out that the free space in the file is just sitting there doing nothing, but you are better off having that extra space than scrambling to allocate space at the last minute.

Don't Run Out of Space on Your Transaction Log

If a database is in full recovery mode, that means you intend to recover to a point in time in the event of a failure. It also means you plan to use a combination of full backups and transaction log backups (and possibly differentials). SQL Server understands your intent, and it will not truncate the log files of your database (the .LDF files). Instead, the files will continue to grow until you do a transaction log backup. If your transaction log growth is out of control, you are likely incurring the cost of full recovery mode (a growing log file, the full logging of qualified events, etc.) but gaining none of the benefit. The simple solution is to look at your backup and recovery plan. If you aren't doing log backups, or you just don't understand them, there are plenty of resources to help; it is relatively simple to begin working on a proper backup and recovery strategy and avoid future problems. If you don't need point-in-time recovery, you should consider simple recovery mode, which will truncate the log at certain events. However, do not go straight to simple recovery mode: analyze your situation and learn about recovery models to do what is right for your organization.

Contain Your Transaction Log

If your transaction log is growing out of control, there is a strong possibility that you are in full recovery mode and are not backing up your log file on a regular basis. Your transaction log then continues to grow until you deliberately back it up (a full backup won't do). This is the expected result, since full recovery mode means you want the ability to back up to a point in time. As long as your backup is someplace safe, it will limit your losses according to the frequency of your backups. Solutions:

- Set up a log backup schedule that meets your business needs. Search Books Online and understand recovery models. Also, figure out the SLAs you are supposed to be supporting, and get your logs backed up on the same schedule. Make sure the backups handle more than your mdf/ldf files so they are useful in the event of a failure; you could even send them to tape directly or after a copy. You should see the size of your log files become more manageable.
- Get more space. Maybe you are doing log backups but you still don't have enough space. Either your activity is quite high or your allocated space is quite low. If it's the former, try log backups more frequently. If it's both, more space for your log files may be required.
- Switch to simple recovery mode. This is not to be done lightly: you are no longer able to restore to a point in time; you can restore only to the last full backup. If this is in line with your SLA and you have no desire to restore to a point in time, switch to this mode, and your log file will then truncate at certain intervals.

Look at your growth ratio while you are adding that space or setting up your backup. The default growth rate for a transaction log is 10%. How large is your log file? Is 10% really the growth rate you want? On that same note, has your log file grown a lot larger than it needs to be because of poor management? Perhaps once you do your first T-log backup, you should look at setting a reasonable size, knowing that the log will be truncated on a regular basis.

Just Delete the Log File

This advice can take the form of, "Just stop SQL, detach your database, delete the log file, and reattach without the log." This will definitely remove any transactions in your log, and possibly leave your database in a transactionally inconsistent state, meaning there's potential for loss of data or worse. If you are stuck without space for future growth, try the log backup instead.

This wiki article was adapted from a series of blog posts by Mike Walsh. Mike is an experienced SQL Server professional who has worked almost exclusively with SQL Server in various capacities for nearly 10 years. He has fulfilled the roles of DBA, developer, business analyst and performance team lead, but he always works his DBA experience into each role. Currently he is the principal DBA and SQL Server subject matter expert for a global insurance company. He also assists organizations with SQL problems through his consulting firm, StraightPath Solutions. Learn more at:
Blog: http://www.StraightPathSQL.com/blog
Twitter: http://twitter.com/mike_walsh
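As a sketch of the log-backup solution described in the article above (the database name and backup path are illustrative):

```sql
-- Confirm the recovery model and see how full the log is
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'SalesDB';

DBCC SQLPERF (LOGSPACE);

-- Back up the log on a schedule that matches your SLA;
-- each log backup allows the log to be truncated internally
BACKUP LOG SalesDB
    TO DISK = 'E:\Backups\SalesDB_Log.trn';
```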
Restoring File and Filegroup Backups

With SQL Server 2005, the database can be available for querying after its PRIMARY filegroup has been restored.

Things get more complicated when you want to restore a single filegroup from a backup. Let's say we have a one-terabyte data warehouse. If our SQL Server goes down, we can't restore one terabyte of data fast enough to make sure we don't get fired, but we can't afford a hot standby system. SQL Server 2005's new filegroup backup and restores give us a way around that. First, long before disaster strikes, we have to break the data up into a series of filegroups for easier management:

- A 500 GB filegroup with old sales data (data that is more than one year old and that doesn't change), which we've made read-only
- A 400 GB filegroup with old payroll data (also more than one year old)
- A 100 GB primary filegroup with current sales and payroll data (data from the last year), which is where all current data goes

When disaster strikes, here's what SQL 2005's new filegroup restore lets us do:

1. Restore the primary filegroup with the current data in a matter of minutes, and put the database online. The users can query, but if they try to query data from more than a year ago, they'll get an error message.
2. Restore the old sales filegroup while the users are already querying the current data. We want this back faster than payroll, since payroll needs data only every two weeks while the sales people may want to query old sales data sooner.
3. Restore the old payroll filegroup, bringing the database fully online.

Keep in mind that we're talking about complete disasters here, like when the server craters altogether and we have to start with a restore of our primary filegroup.

At some point when user activity on the system is minimal, execute a statement similar to the following to restore the secondary filegroup. Note that users can continue accessing the database while the secondary filegroup is being restored.

Since the primary filegroup must come online first, keep the primary filegroup relatively small. If the database is one terabyte, don't create a 900 GB filegroup as the primary file, because filegroup restores won't be much faster than conventional restores. Instead, create a primary filegroup with the most urgently needed tables: configuration tables, user security, and whatever your application absolutely must have in order to run. Then create a secondary filegroup with the most commonly queried tables: customers, items, warehouses and employees, whatever data is relatively small and helpful. Finally, create additional secondary filegroups with large tables that are not queried as frequently, like archived data or reporting tables.

Continuing our example above, let's say we want to restore the 400 GB old payroll filegroup out of a full backup chosen at random from last week, without any matching transaction logs to bring that filegroup up to speed. We know it's only old data, so we're sure nothing's changed, and we just want SQL Server to restore it. That won't work, but to understand why, we have to zoom back out again to look at other ways filegroups can be configured. Forget the nice, clean breaks that we did in our example above; here are some other, completely valid ways to configure multiple filegroups:

Scenario A: Load Balancing Data vs. Indexes

- The primary filegroup has the data in it (the tables)
- An index filegroup has the indexes in it

If we restored the index filegroup after we've been making changes to the primary (data) filegroup, the indexes would be garbage. They would point at records that may not even exist in the primary filegroup, or vice versa: we may have records in the primary filegroup that don't have matching indexes in the index filegroup.

Scenario B: Load Balancing Types of Tables

- The primary filegroup has our OrderLineItems table
- A secondary filegroup has our Orders table

If we restored the secondary filegroup from a full backup without having all the transactions to match, we might have OrderLineItems with no matching Order records.

To eliminate these risks, SQL Server won't let you pluck a filegroup out of a full backup unless you have the matching log to bring it up to speed with the rest of your database. You might have a perfect design and say, "I know for sure nothing changed, believe me!" but SQL won't take your word for it. You either have to follow the disaster recovery scenario we talked about earlier (restoring the primary filegroup first, followed by the secondary filegroups), or else you have to have the complete log chain to bring your secondary filegroup up to speed with the rest of the database.

Brent Ozar is the Editor-in-Chief at SQLServerPedia.com and a SQL Server Domain Expert with Quest Software. Brent has a decade of broad IT experience, having performed systems administration and project management before moving into database administration. In his current role, Brent specializes in performance tuning, disaster recovery and automating SQL Server management. Previously, Brent spent two years at Southern Wine & Spirits, a Miami-based wine and spirits distributor. Brent has experience conducting training sessions, has written several technical articles, and blogs prolifically at http://www.BrentOzar.com. His online presence includes:
Email: brent.ozar@quest.com
Blog: http://www.brentozar.com/
Twitter: http://twitter.com/brento/
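The secondary-filegroup restore statement referred to in the article above appeared as a screenshot in the original. A hypothetical piecemeal restore of a secondary filegroup (the database, filegroup and file names are illustrative):

```sql
-- Restore the secondary filegroup from the full backup
RESTORE DATABASE SalesDW
    FILEGROUP = 'OldSales'
    FROM DISK = 'E:\Backups\SalesDW_Full.bak'
    WITH NORECOVERY;

-- Apply the log chain to bring the filegroup consistent
-- with the rest of the database, then recover
RESTORE LOG SalesDW
    FROM DISK = 'E:\Backups\SalesDW_Log.trn'
    WITH RECOVERY;
```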
Killing Sessions with SSIS

First, use a data flow to query for the sessions that are currently running, and store the SPIDs of the sessions you want to kill in a Recordset destination.

3. With the resulting recordset stored in a variable, loop over the recordset with a Foreach Loop task in your control flow, and use an Analysis Services Execute DDL task to run the XMLA Cancel command to kill each query.

4. The last step is to schedule the package to run frequently, perhaps every 30 to 60 seconds, using SQL Server Agent. It is very easy to do.

Of course, you could add more functionality to this basic package. For example, you could send an e-mail to each user whose session is killed explaining what has happened, or you might want to kill a session only if there are other users running queries at the same time.

This wiki article was adapted from a blog post by Chris Webb. Chris is an independent consultant specializing in SQL Server Analysis Services cube design, tuning and troubleshooting, and the MDX query language. He is a co-author of MDX Solutions with Microsoft SQL Server 2005 Analysis Services and Hyperion Essbase, and a regular speaker at user groups and conferences in the UK and Europe. He can be contacted via his company web site, Crossjoin.co.uk. Learn more at:
Blog: http://cwebbbi.spaces.live.com/

SQLServerPedia.com

Need a quick tutorial on backup and recovery or performance tuning? Receive priceless information through SQLServerPedia's video tutorials to help you get started with DBA tasks. Visit www.sqlserverpedia.com

Moving SQL Server Logins Between Servers

Here are a couple of common scenarios: a database is restored from another server (or from a reinstalled server), and the logins that use the database need to be recreated. Next, detach the database, and then drop and recreate the login. If we reattach the database, we now have a new login with a SID that is different from the one the user has in the database. If we try to use the login to access the database, or to recreate the user, neither will work.

The standard resolution for this has been to use sp_change_users_login, which has an option to list any mismatched logins and database users: those with the same names but different SIDs. sp_change_users_login then offers an option to fix the mismatch, which it does by updating the SID in the database user to match the login. In Service Pack 2 of SQL Server 2005, new ALTER USER syntax was introduced to deal with this.

There are issues with this standard approach: it temporarily fixes the problem or, at worst, propagates it to other servers. It is not the database user's SID that needs fixing; it is the login's SID. If the login's SID were correct, there would not be a problem with copying the databases around.

A Workaround

A workaround for this problem is to specify the SID value when creating the login in T-SQL; it is an optional parameter. If you provide the same value as on the other server, you do not have the problem. For example, instead of executing sp_change_users_login or ALTER USER as in the previous example, we could have done the following:

The upside of this is that it is a permanent fix. The next time you restore the database, you will not have to fix it again.

This wiki article was adapted from a blog post by Greg Low. Greg is an internationally recognized consultant, developer and trainer. He has been working in development since 1978 and holds a Ph.D. in computer science and a host of Microsoft certifications. Greg is the country lead for Solid Quality, a SQL Server MVP, and one of only three Microsoft regional directors for Australia. Greg also hosts the popular SQL Down Under podcast (www.sqldownunder.com), organizes the SQL Down Under Code Camp, and co-organizes CodeCampOz. He is a board member of PASS (the Professional Association for SQL Server). He speaks regularly at SQL Server events around the world. Learn more at:
E-mail: glow@solidq.com
Twitter: http://twitter.com/greglow
Web: http://www.sqldownunder.com
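The workaround statement described in the article above appeared as a screenshot in the original. A hypothetical version (the login name, password and SID value are illustrative):

```sql
-- On the source server, capture the login's SID
SELECT name, sid
FROM sys.sql_logins
WHERE name = 'AppUser';

-- On the target server, create the login with the same SID, so databases
-- restored or reattached from the source server match it with no fix-up
CREATE LOGIN AppUser
    WITH PASSWORD = 'StrongP@ssw0rd!',
    SID = 0x9AB0F4E812D7D5499CD7891234ABCDEF;
```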
Configuring Database Files for Optimal Performance

Generally speaking, it doesn't make sense to break small databases up into multiple files until you can conclusively prove that there's an I/O bottleneck that will be solved by dividing the database files up.

SQL Server 2005 introduced partitioning: splitting tables and indexes up into multiple partitions, with different sets of data going into different filegroups. For example, a data warehouse's 500-million-row sales table might be partitioned by year in order to keep the most recent data on faster, more expensive hard drives. On the other hand, it might be partitioned by state in order to facilitate faster data loads: data could be loaded by section of the country. Partitioning is outside the scope of this article, but we mention it here because it affects file configuration: each partition is usually stored in its own filegroup. For more information about how to configure partitioned databases, read the Partitioning articles on SQLServerPedia.com.
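As a sketch of the year-based partitioning described above (all object, filegroup and boundary names are illustrative, and the filegroups are assumed to already exist):

```sql
-- Map date ranges to partitions: one per recent year, oldest years together
CREATE PARTITION FUNCTION pfSalesByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2007-01-01', '2008-01-01', '2009-01-01');

-- Map partitions to filegroups (older data on cheaper storage)
CREATE PARTITION SCHEME psSalesByYear
    AS PARTITION pfSalesByYear
    TO (fgArchive, fgArchive, fg2008, fg2009);

-- Rows are placed into partitions by SaleDate
CREATE TABLE dbo.Sales (
    SaleID   bigint   NOT NULL,
    SaleDate datetime NOT NULL,
    Amount   money    NOT NULL
) ON psSalesByYear (SaleDate);
```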
Stored Procedure Execution

Example 4

Some limitations still apply with EXEC and sp_executesql. For instance, you cannot reference the calling batch's local variables within the string passed to either command. The following example will fail, with SQL Server informing you that the @qty variable must be declared before it is used. Similarly, any temporary objects created within dynamic SQL exist only for the duration of the dynamic SQL execution and are not visible outside of its scope.

Changing the Database Context

sp_executesql behaves identically to EXEC when it comes to changing the database context. If you change the database context within the stored procedure or batch and execute that module with sp_executesql or EXEC, you'll be back in the database where you started as soon as the module is done executing.

SET Commands

Similarly, SET commands issued within the context of EXEC or sp_executesql do not affect the main block of code; they are effective only while the dynamic SQL is executing. On the other hand, SET commands used in a batch prior to calling the dynamic SQL will affect how the dynamic SQL executes.
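The failing example appeared as a screenshot in the original. A hypothetical reconstruction (the pubs sales table is assumed):

```sql
-- Fails: @qty is declared in the outer batch, and the string passed to
-- EXEC executes in its own scope, which cannot see outer local variables
DECLARE @qty int;
SET @qty = 30;

EXEC ('SELECT * FROM sales WHERE qty > @qty');
-- Error: Must declare the scalar variable "@qty".
```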
SQLServerPedia.com

Boost your street cred in the SQL Server community by contributing or revising articles. See how by checking out: http://sqlserverpedia.com/wiki/How_To_Help