
1.5: Data Concurrency and Locking


Transactions -> one or more SQL statements treated as one unit
Rules (ACID):
o Atomicity -> the transaction is treated as a single unit
o Consistency -> only valid, consistent data is stored in the database
o Isolation -> transactions cannot interfere with each other
o Durability -> committed transactions have their changes persisted in the database

Commit -> makes changes to the database permanent
Rollback -> reverses changes back to the last commit (a small transaction sketch follows the lock types below)
Concurrency -> multiple users accessing the same resources at the same time
Locking -> mechanism to ensure data integrity and consistency -> acquired automatically based on isolation levels
Types of locks:
o Share locks (S locks) -> read operations such as SELECT
o Exclusive locks (X locks) -> write operations such as DELETE, UPDATE, INSERT
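A minimal sketch of commit and rollback, assuming a hypothetical accounts table; the script is run with autocommit disabled (db2 +c -tvf transfer.sql) so that the two updates form one transaction:

   connect to sample;
   update accounts set balance = balance - 100 where id = 1;
   update accounts set balance = balance + 100 where id = 2;
   commit;      -- makes both changes permanent
   -- rollback; -- would instead undo both changes back to the last commit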

There are two applications, A & B. If there is no concurrency control, the following problems can occur (a worked interleaving of the first one follows this list):
o Lost Update -> Application A updates a cell in the database and B then updates the same cell slightly later than A. B's value ends up in the cell, losing A's update.
o Uncommitted Read -> Also known as a dirty read. Application A updates a cell in the database and B then reads the current (updated but uncommitted) value of the cell. If A rolls back, B does not know that the value it read is incorrect: it has used data that was never committed.
o Non-repeatable Read -> Application A issues a SELECT statement. Application B issues an UPDATE. Application A then issues the same SELECT statement again; the values differ from the first SELECT and fewer rows are returned.
o Phantom Read -> Similar to a non-repeatable read, but extra rows appear. Assume application A queries for the rows of a table that hold NULL values. Application B then updates the table, removing a value by setting it to NULL. If application A runs the same query again, the result differs from the first query and more rows are returned.
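A hedged sketch of the lost update case, assuming the same hypothetical accounts table and two CLP sessions running with autocommit disabled:

   Session A: SELECT balance FROM accounts WHERE id = 1    -- reads 100
   Session B: SELECT balance FROM accounts WHERE id = 1    -- also reads 100
   Session A: UPDATE accounts SET balance = 110 WHERE id = 1; COMMIT
   Session B: UPDATE accounts SET balance = 120 WHERE id = 1; COMMIT
   The final value is 120, so A's update is silently lost.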

Isolation Levels -> policies to control when locks are taken

-> can be specified at different levels: session/application, connection, statement (commands for each scope are sketched after the list of isolation levels below)
   o Session/application level -> db2 change isolation to {RR, RS, CS, UR}
   o Statement level -> use WITH {RR, RS, CS, UR}; e.g. SELECT COUNT(*) FROM tab1 WITH UR
   o Embedded SQL -> level is fixed at bind time
   o Dynamic SQL -> level is taken at run time
DB2 Isolation Levels:
o Cursor Stability (CS) -> locks each row you fetch
   Cursor Stability with currently committed semantics
   -> default isolation level
   -> cur_commit db cfg parameter to enable or disable (db2 update db cfg for sample using cur_commit disabled)
   -> avoids timeouts/deadlocks -> readers retrieve the currently committed values instead of waiting

o Read Stability (RS) / Repeatable Read (RR) -> fetch row 1 and a lock is taken; fetch row 2 and another lock is taken while the lock on row 1 remains; fetch row 3 and another lock is taken while the locks on rows 1 and 2 remain. Provides the least concurrency, allows maximum accuracy.
o Uncommitted Read (UR) -> no row locks for read-only cursors. Allows maximum concurrency, provides the least accuracy in terms of results.
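A hedged illustration of setting the level at each scope (tab1 is the table from the notes above; myapp.bnd is a placeholder bind file):

   db2 change isolation to UR                   (CLP session level, issued before connecting)
   db2 connect to sample
   db2 "SELECT COUNT(*) FROM tab1 WITH UR"      (statement level override)
   db2 bind myapp.bnd isolation RS              (embedded SQL: level fixed at bind time)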

Summary of concurrency problems resolved by isolation level:
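The standard behaviour of the four DB2 isolation levels (each level prevents everything the level above it in the table prevents):

   Isolation level     Dirty read    Non-repeatable read    Phantom read
   Uncommitted Read    possible      possible               possible
   Cursor Stability    prevented     possible               possible
   Read Stability      prevented     prevented              possible
   Repeatable Read     prevented     prevented              prevented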

Which isolation level to use depends on the application type and on how stable and accurate the data needs to be.

Lock Wait -> by default an application waits indefinitely to obtain the lock it needs
LOCKTIMEOUT (db cfg) -> specifies the number of seconds to wait for a lock; -1 (infinite wait) is the default
Deadlock -> two or more applications wait indefinitely for each other's resources
DB2 deadlock detector:
o DLCHKTIME (db cfg) -> sets the time interval for checking for deadlocks (a tuning sketch of both parameters follows this list)
o When a deadlock is detected, DB2 uses an internal algorithm to pick which transaction to roll back and which one to continue.
o The transaction that is forced to roll back gets an SQL error. The rollback causes all of its locks to be released.
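A sketch of tuning these parameters for the sample database (the values are arbitrary examples):

   db2 update db cfg for sample using LOCKTIMEOUT 30     (wait at most 30 seconds for a lock)
   db2 update db cfg for sample using DLCHKTIME 10000    (check for deadlocks every 10 seconds; the value is in milliseconds)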

2.0: Database Security


DB2 security involves 2 steps:
o Authentication (external to DB2) -> checks user name and password -> usually done by the OS or a security plug-in
o Authorization (performed by DB2) -> checks if the authenticated user may perform the requested operation -> done by DB2 using information stored in the DB2 catalog and the DBM configuration file

AUTHENTICATION (dbm cfg) -> parameter which determines where/how authentication is performed -> db2 GET DBM CFG -> db2 UPDATE DBM CFG USING AUTHENTICATION CLIENT

AUTHORIZATION -> checks whether an authorization ID has sufficient privileges to perform a desired database operation -> can be assigned using privileges (granular), authorities (built-in), roles (user-defined)

3 ways to assign a privilege to a user:
o Explicit -> using the GRANT and REVOKE statements explicitly for a user or group
   -> grant select on table db2inst1.employee to user mary
   -> revoke select on table db2inst1.employee from user mary
o Implicit -> DB2 may grant privileges automatically when certain commands are issued
   -> create table mytable (the user automatically gets full access to the table)
o Indirect -> uses packages in which specific SQL statements are placed; only the EXECUTE privilege on the package is required to run them (see the example below)
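A hedged example of the indirect case, assuming a package db2inst1.mypkg that was created when an application was bound; mary can then run the SQL inside the package without holding privileges on the underlying tables:

   grant execute on package db2inst1.mypkg to user mary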

Authorities:
o Instance-level -> SYSADM, SYSCTRL, SYSMAINT, SYSMON
   -> update dbm cfg using SYSADM_GROUP <group_name>
   SYSADM -> highest level of administrative authority at the instance level
   -> upgrades and restores databases, changes the dbm configuration file
   -> does not implicitly get DBADM authority (data access)
o Database-level -> DBADM, SECADM, SQLADM, WLMADM, EXPLAIN, ACCESSCTRL, DATAACCESS (example GRANT statements follow below)
   SECADM -> security administrator for a given database
   -> grants/revokes all security at the database level; no authority at the instance level
   -> cannot access data stored in user tables
   -> can grant the SECADM authority to the SYSADM
   DBADM -> super user for a given database (database-level privileges, not instance-level)
   -> gets the DATAACCESS and ACCESSCTRL authorities by default
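A sketch of granting these authorities (user and group names are placeholders; in recent DB2 versions the database-level GRANTs below are issued by a user holding SECADM):

   grant dbadm on database to user mary
   grant secadm on database to user bob
   db2 update dbm cfg using SYSADM_GROUP dbadmgrp     (instance level; dbadmgrp is an OS group)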

Roles -> user-defined database-level authorities
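A minimal sketch of a role, assuming the employee table from the earlier examples and placeholder role/user names:

   create role developer
   grant select on table db2inst1.employee to role developer
   grant role developer to user mary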

Public group -> special group in DB2 to which every operating system user belongs
RESTRICTIVE -> revoke/withhold privileges from the public group (the RESTRICTIVE option of CREATE DATABASE does not grant any privileges to PUBLIC)
Granular access through views -> different presentations of the same data for different users (see the sketch below)
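A hedged sketch of view-based access, using columns from the EMPLOYEE table of the SAMPLE database ('D11' is just an example department):

   create view db2inst1.emp_d11 as
     select empno, firstnme, lastname from db2inst1.employee where workdept = 'D11'
   grant select on db2inst1.emp_d11 to user mary

   mary can now query emp_d11 but has no access to the full employee table.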

Label-based access control (LBAC) -> granular security at the row and column level (set up by the SECADM) -> not available in DB2 Express-C

2.1: Backup and Recovery


Common problems which can be encountered:
o System outage (power failure, hardware failure)
o Transaction failure (users may inadvertently corrupt the database)
o Media failure (a disk drive becomes unusable)
o Disaster (the database facility is damaged by fire, flooding, or other casualties)

Recovery timeline: T1 -> database backup operation; T2 -> damage to the database; T3 -> all committed data is recovered

Database logging -> logs keep track of the changes made to database objects and their data
-> logging is always ON for regular tables in DB2 (it cannot be turned off)
-> upon commit, DB2 only guarantees that the data has been written to the logs
-> during an update operation, the changes go to the buffer pool; when the transaction commits, the old and new values go to the log files

CHNGPGS_THRESH (db cfg) -> changed pages threshold; the percentage of the buffer pool that may hold dirty pages (pages containing changes)
-> when reached, the pages in the buffer pool are externalized, i.e. written to the table space on disk
Types of logs based on log file allocation (a configuration sketch follows this list):
o Primary logs -> pre-allocated (LOGPRIMARY [db cfg] -> number of primary logs available)
o Secondary logs -> allocated as needed (LOGSECOND [db cfg] -> max number of secondary logs) -> costly; deleted when all the connections to the database are terminated
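A sketch of adjusting these parameters for the sample database (the values are arbitrary examples):

   db2 update db cfg for sample using LOGPRIMARY 10 LOGSECOND 20
   db2 update db cfg for sample using CHNGPGS_THRESH 80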

Types of logs based on the information stored in them:
o Active logs -> information that has not been externalized (not yet on the table space disk) -> transactions that have not been committed or rolled back
o Archive logs -> committed and externalized logs -> online (in the active log directory) or offline (in a separate repository)

Types of logging:
o Circular logging (non-production systems) -> default type of logging
   -> logs that are no longer active can be overwritten; nothing is archived
   -> LOGARCHMETH1 = OFF and LOGARCHMETH2 = OFF [db cfg]
   -> BLK_LOG_DSK_FUL = YES makes DB2 keep retrying the log writes every 5 minutes instead of failing when the log disk is full
   -> does not allow roll forward recovery
o Archival logging (production systems) -> no logs are deleted and logs are never overwritten; they are kept either offline or online
   -> also called log retain logging
   -> enabled by setting LOGARCHMETH1 (or LOGARCHMETH2) to a value other than OFF, e.g. LOGRETAIN or DISK (see the example after this list)
   -> required for roll forward recovery and online backup
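A sketch of switching the sample database to archival logging (the archive path is a placeholder and must already exist); a full backup is required afterwards because the database is placed in backup pending state:

   db2 update db cfg for sample using LOGARCHMETH1 DISK:/db2/archlogs
   db2 backup database sample to /backups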

Infinite logging -> provides infinite log space (set LOGSECOND to -1)
Database backups -> a copy of a database or table space (user data, DB2 catalog, control files)
Backup modes:
o Offline backup -> does not allow other applications to access the database while backing up
   -> backup database sample to C:\backups
o Online backup -> allows other applications to access the database while backing up
   -> backup database sample online to C:\backups

Incremental backups -> for large databases (see the commands after this list)
o Cumulative/Incremental backup -> all database data that has changed since the most recent successful full backup
   -> TRACKMOD (db cfg) = ON
   -> SUN = full backup; MON = SUN-MON; TUES = SUN-TUES; WED = SUN-WED
o Delta backup -> all database data that has changed since the last successful backup
   -> SUN = full backup; MON = MON; TUES = TUES; if WED fails, restoration = SUN + MON + TUES
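A sketch of the corresponding commands (the backup path is a placeholder):

   db2 update db cfg for sample using TRACKMOD ON
   db2 backup database sample to /backups                       (full backup, e.g. Sunday)
   db2 backup database sample incremental to /backups           (cumulative backup)
   db2 backup database sample incremental delta to /backups     (delta backup)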

Database recovery -> recreate the database or tablespace from backups or logs -> use the restore and rollforward commands

Types of database recovery:
o Crash recovery -> protects the database from being left in an inconsistent state (e.g. after a power failure)
o Version recovery -> restores the database from a backup image
o Rollforward recovery -> needs archival logging to be enabled
   -> RESTORE from a backup image
   -> ROLLFORWARD command to apply the logs on top of the backup
   -> minimizes data loss

RESTORE -> recover a database from a backup image
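A hedged sketch of a rollforward recovery (the path and the timestamp are placeholders; the timestamp comes from the backup image):

   db2 restore database sample from /backups taken at 20240101120000
   db2 rollforward database sample to end of logs and stop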

Table space backups & restores -> enable the user to back up a subset of the database (archival logging must be on)
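A sketch of backing up and recovering a single table space (USERSPACE1 is the default user table space; the backup path is a placeholder):

   db2 backup database sample tablespace (userspace1) online to /backups
   db2 restore database sample tablespace (userspace1) online from /backups
   db2 rollforward database sample to end of logs and stop tablespace (userspace1) online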
