INDEX:
III.  Transact SQL
IV.   Transactions
V.    Isolation Levels
VI.   Locking
VIII. ASE Utilities
XI.   ASE Resources
For details of the INDEX, refer to the following pages with respect to each INDEX entry.
Data types
List system objects and attributes
Creating a table
Altering a table
Table creation - Alternatives
Indexes
Clustered vs. non-clustered indexes
Creating constraints
Defaults
Table partitioning
Object permissions
Binding Rules
Modifying a Column
Moving an object
Views
Stored Procedures
Triggers
List system objects and attributes:–

select name
from sysobjects where type = 'U'
go

Or use the system SP sp_tables.

select name
from sysobjects where type = 'P'
go
sp_help emp
go
sp_spaceused emp
go
sp_helptext emp_proc
go
Creating a table:–
The table definition provides the column name, datatype and specifies how each
column handles null values.
Decide what columns you need in the table, and the datatype, length, precision, and
scale for each column.
Create any new user-defined datatypes before you define the table where they are
to be used.
Decide what integrity constraints or column defaults, if any, you need to add to
the columns in the table. This also includes rules, indexes, and triggers to enforce
data integrity.
Decide whether you need defaults and rules, and if so, where and what kind.
Consider the relationship between the NULL and NOT NULL status of a column
and defaults and rules.
Decide what kind of indexes you need and where. Indexes are discussed later.
1.)
create table emp (
emp_id numeric(8,0) identity,
emp_name varchar(50) not null,
salary money not null,
dept_cd char(3) not null,
fax_no integer null
)
go
2.)
create table invoice (
invoice_id numeric(8,0) identity,
sales_rep_id numeric(8,0) not null,
date smalldatetime not null,
comment varchar(255) null )
on data_seg2
go
3.)
create table err_cd (
err_id integer not null,
err_desc varchar(60) not null,
constraint pk_err_cd primary key clustered (err_id)
)
go
Altering a table:–
If you already have a table and would like to create another with the same column
definitions, use 'select into' (the 'select into/bulkcopy' database option must be enabled):
select * into my_copy from source_table
go
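A copy made with 'select into' does not change the original table; for in-place changes ASE provides 'alter table'. A minimal sketch (the column name is illustrative):

```sql
-- add a nullable column to the emp table
alter table emp add middle_name varchar(30) null
go
```

Adding the column as null (or with a default) avoids having to populate existing rows.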
Indexes:–

create unique clustered index idx_emp_id
on emp (emp_id)
go

create nonclustered index idx_emp_name
on emp (emp_name)
go
Indexes are of two types:
Clustered
Non-clustered
Typically, a clustered index will be created on the primary key of a table.
Clustered indexes: the data rows themselves are stored in index-key order, so a table can have only one clustered index.
Non-clustered indexes: the index leaf level holds pointers to the data rows rather than the rows themselves; a table can have multiple non-clustered indexes.
Creating a constraint:–
Constraints are used to define primary keys, enforce uniqueness, and to describe foreign
key relationships. By default, indexes are created upon creation of unique or primary key
constraints.
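As a sketch, constraints can also be added after table creation with 'alter table' (the dept table and the constraint names are illustrative):

```sql
-- enforce uniqueness on the employee name
alter table emp add constraint uq_emp_name unique (emp_name)
go

-- describe a foreign key relationship to a hypothetical dept table
alter table emp add constraint fk_emp_dept
    foreign key (dept_cd) references dept (dept_cd)
go
```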
Defaults:–
Creating Defaults
create default def_highsal as 15000
go
Binding defaults
sp_bindefault def_highsal, 'emp.salary'
go
Creating your own custom defaults has an advantage that the name of the default
can be chosen to reflect the application/functionality.
Table Partitioning:–
Insert performance on partitioned tables is improved, as multiple 'entry points' (last-page
entries) are created. Partitioned tables require slightly more disk space and a small amount of
additional memory.
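Splitting a table into multiple page chains might look like this (the partition count is illustrative):

```sql
-- split the invoice table into 4 round-robin partitions
alter table invoice partition 4
go

-- revert to a single page chain
alter table invoice unpartition
go
```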
Tables containing large amounts of data (> 2 GB) need to be spread across several
devices, using sp_placeobject. Note that this procedure affects only future operations
- if a table load of more than 2 GB is to be performed, it would have to be split into two
or more stages.
sp_placeobject 'data_seg2', 'invoice'
go
Object Permissions:–
Object security is fairly straightforward and is handled using the ‘grant’ Transact SQL
statement.
grant all on emp to Srinath
go
Stored procedure security allows you to grant access on a business logic basis. For
example, if you had a stored procedure that updated the invoice table and selected data from
the customer table, you could grant the execute privilege on the stored procedure and you're
done. The user would be able to run the procedure to update/select from the tables, but
would not be able to access the tables directly.
grant execute
on proc_upd_invoice to Srinath
go
Examples
create table emp (
emp_id integer not null,
emp_name varchar(50) null,
salary money default 15000
)
go
Modifying a Column:–
To modify/alter a column type you need to be "dbo" and have the 'select into/bulkcopy'
database option enabled.
create table emp (
emp_id integer not null,
emp_name varchar(50) null,
salary money default 0
)
go
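With the option enabled, a column's type can be changed in place; a sketch (the new precision is illustrative):

```sql
-- widen the salary column
alter table emp modify salary numeric(12,2)
go
```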
Moving an object:–
When databases contain multiple segments, it is often necessary to move tables between
segments.
Examples
To leave the table in its existing location but send future allocations to the new
segment:

sp_placeobject 'data_seg2', 'emp'
go

To leave a table in its existing location and use a new segment for future
allocations of the text column (emp_notes), reference the table's text-chain object,
which is named with a 't' prefixed to the table name:

sp_placeobject 'data_seg2', 'emp.temp'
go
Views:–
Views offer an alternative way to look at data and can also be used to enforce security by
limiting the data that is seen by users.
Create Views
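A minimal sketch of a view that hides the salary column (the view name is illustrative):

```sql
create view v_emp_public
as
select emp_id, emp_name, dept_cd   -- salary deliberately excluded
from emp
go
```

Users granted select on v_emp_public can query it without having any permission on emp itself.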
Updates on views are possible, but there are restrictions. For example, view columns
derived from aggregates or computed expressions cannot be updated, and an update
through a multi-table view may modify columns of only one of the underlying tables.
Permissions on a view are granted independently of the permissions on the underlying
tables. Data in the underlying tables that is not included in the view is not visible to the
users.
Advantages:
Stored Procedures:–
Stored procedures are compiled versions of SQL statements. Performance benefits are
significant as network traffic is reduced, and the optimizer does not need to re-parse the
code.
Examples:
- sp_who – Provides information on current ASE user processes
- sp_help – Provides information on, and a listing of, all objects in the current database.
All of the system stored procedures start with "sp_" and are located in the
'sybsystemprocs' system database.
User Stored Procedures – Defined by the users of ASE in a designated user
database.
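A minimal sketch of a user stored procedure (the procedure name and parameter are illustrative):

```sql
create procedure proc_emp_by_dept
    @dept_cd char(3)
as
begin
    select emp_id, emp_name
    from emp
    where dept_cd = @dept_cd
end
go

-- usage
exec proc_emp_by_dept 'FIN'
go
```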
Triggers:–
A trigger is a special type of stored procedure that is executed automatically when a DML
operation (insert, update or delete) takes place on a table.
Triggers can be used to apply more complex restrictions than those enforced using rules.
A trigger can compare the state of the table before and after a change, using the deleted
and inserted tables.
Trigger creation
Trigger Example
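A sketch of an update trigger that uses the inserted and deleted tables (the names and error number are illustrative):

```sql
create trigger trg_emp_salary
on emp
for update
as
begin
    -- reject updates that lower a salary
    if exists (select 1
               from inserted i, deleted d
               where i.emp_id = d.emp_id
                 and i.salary < d.salary)
    begin
        rollback trigger with raiserror 30001
            "Salary decreases are not allowed"
    end
end
go
```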
ASE databases:
* master
* model
* sybsystemprocs
* tempdb
* sybsecurity
* sybsystemdb
* dbcc
* sybdiag
* pubs2 and pubs3
master database:–
Controls the operation of ASE and stores the information about user databases and devices
on the ASE server.
model database:–
All new user databases use the model database as a template. ASE copies this
database whenever a new database is created and extends the space of the new user
database as specified in the 'create database' command. Use the sp_tables system SP to get
the list of system tables in the model database.
sybsystemprocs:–
This database holds the system stored procedures (those whose names start with "sp_").
tempdb:–
This is a temporary database used by ASE. “tempdb” database is used to store temporary
tables and other working structures.
tempdb is installed by default during ASE installation.
Multiple tempdb(s) can be created based on need.
Optional databases:
sybsecurity:–
This database contains the audit system, used to audit database users' activity on ASE.
sybsystemdb:–
This database stores information about distributed (two-phase commit) transactions.
dbcc:–
This database stores configuration information for 'dbcc checkstorage' operations on
target databases, along with the results of those operations.
sybdiag:–
This database may be created and used by SYBASE Technical Support to troubleshoot
the system. The database holds diagnostic information.
I. ASE OVERVIEW
ASE Server
Memory Model
Transaction Processing
Backup Procedures
Recovery Procedures
ASE Logins
ASE Groups
Security and Account Setup
Database Creation
Storage Concepts
Transact SQL
Transact SQL Extensions
ASE Server:–
Memory Model:–
The ASE executable runs as a single process within the OS. Multiple users connect to the
database. Only one process is managed by the OS. Each Sybase database connection
requires memory.
Transaction Processing:–
Transactions are written to the data cache first and then they advance to the
transaction log and database device.
Pages are discarded from the data cache on rollback.
Transaction logs are used to restore data in event of a hardware failure.
Checkpoint operation flushes all updated/committed memory pages to their
respective tables.
Transaction logging is required for all databases. Image (blob) fields may be
exempt.
During an update transaction, the data page(s) containing the row(s) are locked.
Row-level locking is available and can be enabled. To facilitate this, the table's
lock scheme may need to be changed (e.g. to datarows).
Backup Procedures:–
Backups are facilitated by the Backup Server.
A full backup is carried out using the "dump database" command.
Backup operations can be performed while the database is online and in use.
Transaction logs can be dumped using the "dump transaction" command.
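As a sketch, the dump commands look like this (the file names are illustrative):

```sql
-- full backup of the pubs2 database
dump database pubs2 to "/backups/pubs2.dmp"
go

-- incremental backup of its transaction log
dump transaction pubs2 to "/backups/pubs2_log.dmp"
go
```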
Recovery Procedures:–
ASE Logins:–
ASE Groups:–
Storage Concepts:–
Transact-SQL:–
Any number of result sets can be returned to calling applications via SELECT
statements.
Triggers and stored procedures (system and user) are supported for writing SQL
that is stored in a compiled format, which allows for automatic execution and
faster execution of DML SQL code.
Cursors are supported for row by row processing.
Functions (System/Arithmetic/Date/String), Rules and Defaults are supported.
Temporary tables are supported, which allows customized private work tables to
be created for complex processes.
Global and Local variable support.
Flow control statements (IF-ELSE, WHILE …).
Federal Information Processing Standards (fipsflagger).
Exception Handlers:
In SQL Server, errors are always written to the sysmessages table, so no exception can be
handled there directly. The type of access to the SQL Server from the front end determines the
state of the error and how the error information can be retrieved.
Eg.
a) For ODBC access, the error messages are stored in the SQLError object of the
SQL Server; the front end should query that object to get the error message.
b) For OLE-DB access, the error messages are stored in the
ISQLServerErrorInfo object. So the OLE-DB provider library
The first RAISERROR returns an @@ERROR value of 50000. The second returns the
syntax error message used by Microsoft® SQL Server™ with an @@ERROR value of
101.
1. Header
2. Declaration section
3. Executable section.
4. Exception section.
PROCEDURE (Parameters)
RETURN DATATYPE
AS
    Variables DATATYPE;
BEGIN
    Executable_statements;
    If err then goto Err_block
    RETURN Value
Err_block:
    Process error and return the error string
END
GO
Recommendations:
1. All executable statements after the BEGIN, and the error-handling block, are
indented from the BEGIN.
2. Include a blank line after each section.
3. The IS statement should be on a new line wherever used.
Control Structures
IF <boolean_expression>
    Executable_statements
ELSE
    Executable_statements

Note that Transact-SQL does not use a THEN keyword. If there is more than one
executable statement, use a BEGIN … END block to group the statements.
Recommendation:
Transact-SQL offers the WHILE loop. The GOTO statement can also be used for
looping purposes.
Syntax:
WHILE Boolean_expression
{sql_statement | statement_block}
[BREAK] [CONTINUE]
Eg:
WHILE <boolean_expression>
BEGIN
    Executable_statements
END

BREAK and CONTINUE can be used within the loop body to exit the loop early or to
skip to the next iteration.
Each loop has a loop boundary and a loop body. The loop body should be indented from
the boundary.
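A small runnable sketch of a WHILE loop:

```sql
declare @i int, @msg varchar(20)
select @i = 1
while @i <= 3
begin
    select @msg = "pass " + convert(varchar(3), @i)
    print @msg
    select @i = @i + 1
end
go
```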
Sybase Naming Convention Standard
Filed in Sybase on May.14, 2009
The following should be the templates used in T-SQL source code files.
Any T-SQL source file should have a file header of the following format (Header)
followed by the body of the procedure. Since SQL Server doesn't support the concept of
packages, the only type of header is as follows:
/*
** (C) <Year>, Deutsche Bank Group
** Description :
** Author :
*/
The calling procedure/function should always have to check the return status and
proceed with further processing.
For example, the 'select *' statement should not be used, because the statement would
become invalid if the structure of the table changes. The columns have to be mentioned
explicitly.
1.1 Indenting
Tabbing is used for indenting. Statement blocks used with the following statements are
indented one tab stop from the corresponding statements.
Loops and
Conditional Statements.
1.2 Spacing
A single space should be placed before and after all operators. A single space should also
be placed after the comma of each argument in function parameter lists.
1.3 Comments
Rule
1. Primary and Unique Key constraints are automatically indexed by the database
when the constraint is enabled and cannot be indexed separately.
2. Foreign Key columns should only be indexed where they improve performance.
3. Additional non-unique indexes must only be created for specific performance
reasons on the basis of demonstrated need.
4. Redundant Indexes must be removed.
5. Two indexes must not share the same leading edge, i.e., the same column(s) as
the first part(s) of the index.
6. Indexes must be named <table name>_<index type><seq> where <index type>
is: ‘PK’, ‘AK’, ‘FK’ or ‘IK’ and <table name> is truncated to 25 characters.
Note: ‘PK’ refers to a Primary Key index, ‘FK’ refers to a Foreign Key index,
‘AK’ refers to an Alternate Key index, ‘IK’ refers to a non-unique index.
7. Indexed columns must be specified in order of decreasing selectivity, i.e. the first
column in the index should have the highest number of distinct values. Note: This
is to enhance performance.
8. Columns with only a few distinct values relative to the total number of records
must not be indexed. Bitmap Indexes should be used only when there is little or
no update activity and tables have low cardinality. Note: A serial scan through
the table is faster.
Guidelines
Rule
1. Each table must have one or more unique identifiers. At least one must be
implemented as a primary key.
2. The unique identifier must have true business meaning where possible, and not
simply be a generated sequence number. A surrogate key may be used where
performance considerations would warrant it.
3. If relationships exist between tables then they should be enforced using a foreign
key constraint (unless triggers are being used to enforce advanced RI)
4. Check constraints must be used to restrict the allowable values of columns for
appropriate business rules where the list of values is relatively small.
Guidelines
Data            Suffix
CODES           Code
IDENTIFIER      Id
INDICATOR/FLAG  Flag
AMOUNT          Amount
RATES           Rate
KEY             Key
NUMBER          Number
NAME            Name
TYPE            Type
1. The table name should be the same as the logical entity name or should bear some
significance to the data stored therein.
2. The table must have a primary key that uniquely identifies each row.
3. The column names should be the same as the logical attribute names.
4. Columns must be defined as NOT NULL wherever possible.
5. Table names should be no more than 30 alphanumeric characters.
1. Keep table names to around 20 characters. This will prevent the truncation of
table name in the naming of other objects that contain the table name as part of
the object name. e.g. Indexes
2. Columns may only be denormalized to improve performance, but this should be
very carefully balanced against the additional update that may be required.
A default installation of Sybase ASE has a small tempdb located on the master device.
Almost all ASE implementations need a much larger temporary database to handle sorts
and worktables, and therefore DBAs need to increase tempdb.
This document gives some recommendations on how this can be done and describes
various techniques to guarantee maximum availability of tempdb.
Content
• About Segments
• Prevention of a full logsegment
• Prevention of a full segment for data
• Separation of data and log segments
• Using the dsync option
• Moving tempdb off the master device
• Summary of the recommendations
About Segments
Tempdb is basically just another database within the server and has three segments:
'system' for system tables like sysobjects and syscolumns, 'default' to store objects such
as tables, and 'logsegment' for the transaction log (the syslogs table). This type of
segmentation, no matter the size of the database, has an undefined space for the
transaction log; the only limitation is the available size within the database. The
following script illustrates that this can lead to nasty problems.
Running the script populates table #a and the transaction log at the same time, until
tempdb is full. Then the log gets automatically truncated by ASE, allowing for more rows
to be inserted in the table until tempdb is full again. This cycle repeats itself a number of
times until tempdb is filled up to the point that even the transaction log cannot be
truncated anymore. At that point the ASE errorlog will show messages like “1 task(s) are
sleeping waiting for space to become available in the log segment for database tempdb”.
When you log on to ASE to resolve this problem and you run an sp_who, you will get
“Failed to allocate disk space for a work table in database ‘tempdb’. You may be able to
free up space by using the DUMP TRANsaction command, or you may want to extend
the size of the database by using the ALTER DATABASE command.”.
Your first task is to kill off the process that causes the problem, but how can you know
which process to kill if you cannot even run sp_who? This problem can be solved with
the lct_admin function: in the form lct_admin("abort", 0, <dbid>) it aborts sessions that
are waiting on a log suspend. The script mentioned above, which fills up tempdb, looks
like this:

create table #a (msg varchar(20))
go
declare @a int
select @a = 1
while @a > 0
begin
    insert into #a values ("get full")
end
go

To abort the log-suspended session:

select lct_admin("abort", 0, db_id("tempdb"))
go
When you execute the lct_admin function the session is killed but tempdb is still full. In
fact it’s so full that the table #a cannot be dropped because this action must also be
logged in the transaction log of tempdb. Besides a reboot of the server you would have no
other option than to increase tempdb with just a bit more space for the logsegment.
This extends tempdb and makes it possible to drop table #a and to truncate the transaction
log.
In a real-life situation this scenario could cause significant problems for users.
When the database option 'abort tran on log full' is set to true with sp_dboption
(for pre-12.5.1: followed by a checkpoint in tempdb), the transaction that fills up the
transaction log in tempdb is automatically aborted by the server.
This message can be caused by a query that creates a large table in tempdb, or an internal
worktable created by ASE used for sorts, etc.
Potentially, this problem is much worse than a full transaction log since the transaction is
cancelled. A full log segment leads to “sleeping” processes until the problem is resolved.
However, a full data segment leads to aborted transactions.
After a reboot of the server (12.5.1 as well) you can use resource limits, for example
(the limit type shown here is illustrative):

sp_add_resource_limit petersap, null, 'at all times', tempdb_space, 200
go
This limit means that the user petersap is allowed to use 200 pages within tempdb. When
the limit is exceeded the session receives an error message (Msg 11056) and the query is
aborted. Different options for sp_add_resource_limit make it possible to kill the session
when the limit is exceeded.
Just how many pages a user should be allowed to use in tempdb depends on your
environment.
Things like the size of tempdb, the number of concurrent users, and the type of queries
should be taken into account when setting the resource limit. When a resource limit for
tempdb is crossed, it is logged in the Sybase errorlog. This makes it possible to trace
how often a limit is exceeded and by whom. With this information the resource limit
can be tuned.
When you use multiple temporary databases the limit is enforced on all of these.
The following example illustrates how tempdb could be configured with separate devices
for the logsegment and the data. The example is based on an initial setting of tempdb on
the master device.
First we increase tempdb for the system and data segments (the device name
tempdb_data is an example):

alter database tempdb on tempdb_data = 100
go

When you have done this and run an 'sp_helpdb tempdb' you will see that data and log
are still on the same segment. Submit the following to resolve this:

use tempdb
go
sp_dropsegment 'logsegment', tempdb, tempdb_data
go
Please note that tempdb should not be increased on the master device.
Remove the -m switch from the dataserver options and restart ASE. Your tempdb is now
available with the default allocation on the master device.