• TYPES OF LOCKS
• SHARED LOCKS
• EXCLUSIVE LOCKS
Transactions use locks to deny access to other
transactions and so prevent incorrect updates.
• Locking is the most widely used approach to ensuring serializability.
• Generally, a transaction must claim a shared (read) or
exclusive (write) lock on a data item before reading or
writing it.
• A lock prevents another transaction from modifying the item,
or even reading it in the case of a write (exclusive) lock.
• Locking - Basic Rules
• If a transaction has a shared lock on an item, it can read but not
update the item.
• If a transaction has an exclusive lock on an item, it can both read
and update the item.
• Reads cannot conflict, so more than one transaction can
hold shared locks simultaneously on the same item.
• An exclusive lock gives a transaction exclusive access to that
item.
• Some systems allow a transaction to upgrade a shared (read) lock to
an exclusive lock, or downgrade an exclusive lock to a shared
lock.
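The compatibility rules above can be sketched as a toy lock table. This is an illustrative Python sketch, not how any real DBMS implements its lock manager; all class and method names are made up:

```python
# Toy lock table: S locks are compatible with S locks; X conflicts with all.
class LockTable:
    def __init__(self):
        # item -> (mode, set of holders); mode is 'S' or 'X'
        self.locks = {}

    def can_grant(self, txn, item, mode):
        if item not in self.locks:
            return True
        held_mode, holders = self.locks[item]
        if holders == {txn}:
            return True          # sole holder may upgrade/downgrade
        # Only S is compatible with S; X conflicts with everything.
        return mode == 'S' and held_mode == 'S'

    def acquire(self, txn, item, mode):
        if not self.can_grant(txn, item, mode):
            return False         # caller must wait (or abort)
        if item in self.locks and self.locks[item][1] != {txn}:
            self.locks[item][1].add(txn)   # join existing shared holders
        else:
            self.locks[item] = (mode, {txn})
        return True

lt = LockTable()
assert lt.acquire('T1', 'A', 'S')      # first shared lock granted
assert lt.acquire('T2', 'A', 'S')      # S is compatible with S
assert not lt.acquire('T3', 'A', 'X')  # X conflicts with held S locks
```

Note how the sole-holder case models the upgrade/downgrade behaviour mentioned above: a transaction that is the only holder of a shared lock may convert it to exclusive.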
• Two-Phase Locking (2PL)
Strict 2PL:
– If T wants to read an object, it first obtains an S lock.
– If T wants to modify an object, it first obtains an X lock.
– All locks are held until the end of the transaction.
– Guarantees serializability and a recoverable schedule;
it also avoids write-write (WW) problems.
2PL:
– A slight variant of strict 2PL.
– Transactions can release locks before the end (commit
or abort).
– But after releasing any lock, a transaction can acquire no new locks.
– Guarantees serializability.
A two-phase locking (2PL) scheme is a locking scheme in
which a transaction cannot request a new lock after
releasing a lock. Two-phase locking therefore involves two
phases:
– Growing Phase (Locking Phase) - when locks are
acquired and none are released.
– Shrinking Phase (Unlocking Phase) - when locks are
released and none are acquired.
The attraction of the two-phase algorithm derives from a
theorem which proves that the two-phase locking
algorithm always leads to serializable schedules. This is a
sufficient condition for serializability, although it is not a
necessary one.
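The two-phase restriction can be sketched as a small guard on a transaction's lock calls. This is an illustrative Python sketch with hypothetical names, not a real implementation:

```python
# 2PL rule: once a transaction has released any lock (shrinking phase),
# it may not acquire new ones.
class TwoPhaseTxn:
    def __init__(self, name):
        self.name = name
        self.held = set()
        self.shrinking = False   # flips to True on the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError(f"{self.name}: lock after unlock violates 2PL")
        self.held.add(item)

    def unlock(self, item):
        self.held.discard(item)
        self.shrinking = True    # growing phase is over

t = TwoPhaseTxn('T1')
t.lock('A')
t.lock('B')      # growing phase: acquiring is fine
t.unlock('A')    # shrinking phase begins
try:
    t.lock('C')  # illegal under 2PL
except RuntimeError as e:
    print(e)     # T1: lock after unlock violates 2PL
```

Strict 2PL corresponds to never calling `unlock` until commit or abort, so the shrinking phase collapses into a single release at the end.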
Strict two-phase locking (Strict 2PL) is the most widely
used locking protocol and has the following two rules:
– If a transaction wants to read (respectively, modify) an
object, it first requests a shared (respectively,
exclusive) lock on the object.
– All locks held by a transaction are released when the
transaction completes (commits or aborts).
In effect the locking protocol allows only ‘safe’
interleavings of transactions.
Q) Three transactions A, B and C arrive in the time
sequence A, then B and then C. The transactions are run
concurrently on the database. Can we predict what the
result would be if 2PL is used?
– No, we cannot, since we are not able to predict
which serial schedule the 2PL schedule will be
equivalent to. The 2PL schedule could be equivalent to any
of the following six serial schedules: ABC, ACB, BAC,
BCA, CAB, CBA.
• Two-Phase Locking (2PL)
A transaction follows the 2PL protocol if all of its locking
operations precede its first unlock operation.
• Two phases for a transaction:
– Growing phase - acquires all locks but cannot release
any locks.
– Shrinking phase - releases locks but cannot acquire any
new locks.
• Preventing the Lost Update Problem using 2PL (example in slides).
• Preventing the Uncommitted Dependency Problem using 2PL.
• Locking Granularity
A database item which can be locked could be
– a database record
– a field value of a database record
– the whole database
• Trade-offs
– coarse granularity - the larger the data item size, the
lower the degree of concurrency.
– fine granularity - the smaller the data item size, the
more locks to be managed and stored, and the more
lock/unlock operations needed.
Deadlock
An impasse that may result when two (or more)
transactions are each waiting for locks held by the other
to be released (example in slides).
• Conditions For Deadlock
• Mutual Exclusion
• Hold And Wait
• No Preemption
• Circular Wait
Recovery
• Occurs in case of transaction failures.
• Database (DB) is restored to the most recent
consistent state just before the time of failure.
• To do this, the DB system needs information about the
changes applied by the various transactions. This
information is kept in the system log.
Contents of System Log:
• [start_transaction, T]: Indicates that transaction T has
started execution.
• [write_item, T, X, old_value, new_value]: Indicates that
transaction T has changed the value of DB item X from
old_value to new_value.
• [read_item, T, X]: Indicates that transaction T has
read the value of DB item X.
• [commit, T]: Indicates that transaction T has completed
successfully, and affirms that its effects can be
committed (recorded permanently) to the database.
• [abort, T]: Indicates that transaction T has been
aborted.
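As a sketch of how the old_value/new_value fields in write_item records support recovery, here is a toy log and an undo pass over it. The transaction names and values are made up for illustration:

```python
# Toy system log: each write_item record carries old and new values.
log = [
    ('start_transaction', 'T1'),
    ('write_item', 'T1', 'X', 100, 150),   # old_value=100, new_value=150
    ('write_item', 'T1', 'Y', 20, 35),
    ('commit', 'T1'),
    ('start_transaction', 'T2'),
    ('write_item', 'T2', 'X', 150, 90),
    # crash: T2 never reached its commit point
]

db = {'X': 90, 'Y': 35}   # state on disk at the time of the crash

# Undo uncommitted transactions by restoring old_value, scanning backwards.
committed = {rec[1] for rec in log if rec[0] == 'commit'}
for rec in reversed(log):
    if rec[0] == 'write_item' and rec[1] not in committed:
        _, txn, item, old, new = rec
        db[item] = old          # restore the before-image

print(db)   # {'X': 150, 'Y': 35}: T2's update undone, T1's kept
```

The backward scan matters: if a loser transaction wrote the same item twice, the earliest old_value must end up in the database.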
Deadlock
• Only one way to break deadlock: abort one or more of
the transactions.
• Deadlock should be transparent to user, so DBMS
should restart transaction(s).
• Three general techniques for handling deadlock:
– Timeouts.
– Deadlock prevention.
– Deadlock detection and recovery.
Timeouts
• Transaction that requests lock will only wait for a
system-defined period of time.
• If lock has not been granted within this period, lock
request times out.
• In this case, DBMS assumes transaction may be
deadlocked, even though it may not be, and it aborts and
automatically restarts the transaction.
Deadlock Prevention
• DBMS looks ahead to see if transaction would cause
deadlock and never allows deadlock to occur.
• Could order transactions using transaction timestamps:
– Wait-Die - only an older transaction can wait for a
younger one; otherwise the transaction is aborted (dies) and
restarted with the same timestamp.
– Wound-Wait - only a younger transaction can wait for
an older one. If an older transaction requests a lock held by a
younger one, the younger one is aborted (wounded).
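The two timestamp schemes can be sketched as small decision functions (illustrative Python; smaller timestamp means older):

```python
# Wait-Die: older transactions may wait; younger requesters die (abort).
def wait_die(requester_ts, holder_ts):
    return 'wait' if requester_ts < holder_ts else 'abort requester'

# Wound-Wait: older requesters wound (abort) the younger holder;
# younger requesters wait.
def wound_wait(requester_ts, holder_ts):
    return 'abort holder' if requester_ts < holder_ts else 'wait'

# T1 (ts=1, older) requests a lock held by T2 (ts=2, younger):
print(wait_die(1, 2))    # wait
print(wound_wait(1, 2))  # abort holder
# T2 (younger) requests a lock held by T1 (older):
print(wait_die(2, 1))    # abort requester
print(wound_wait(2, 1))  # wait
```

In both schemes every wait edge points in a single timestamp direction, so a cycle of waits (and hence deadlock) cannot form.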
Deadlock Detection and Recovery
• DBMS allows deadlock to occur but recognizes it and
breaks it.
• Usually handled by construction of wait-for graph
(WFG) showing transaction dependencies:
– Create a node for each transaction.
– Create edge Ti -> Tj, if Ti is waiting to lock an item locked by
Tj.
• Deadlock exists if and only if WFG contains cycle.
• WFG is created at regular intervals.
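The "deadlock iff cycle" test above can be sketched with a standard depth-first cycle check on the wait-for graph (illustrative Python):

```python
# Edge Ti -> Tj means Ti is waiting for a lock held by Tj.
def has_cycle(wfg):
    """Depth-first search for a cycle in a dict-of-lists adjacency graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in wfg}

    def visit(n):
        color[n] = GRAY
        for m in wfg.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True               # back edge: cycle found
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in wfg)

# T1 waits for T2 and T2 waits for T1: deadlock.
print(has_cycle({'T1': ['T2'], 'T2': ['T1']}))                # True
# A simple waiting chain with no cycle: no deadlock.
print(has_cycle({'T1': ['T2'], 'T2': ['T3'], 'T3': []}))      # False
```

Running this check each time the WFG is rebuilt (at the regular intervals mentioned above) detects any deadlock that has formed since the last check.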
• Recovery Outline
• Restore to most recent “consistent” state just before
time of failure
– Use data in the log file
• Catastrophic Failure
– Restore database from backup
– Replay transactions from log file
• Database becomes inconsistent (non-catastrophic
errors)
– Undo or Redo last transactions until consistent state is
restored
Recovery Algorithms for Non-catastrophic Errors:
Deferred Update (NO-UNDO/REDO):
– Data written to buffers
– Not physically updated until after commit point reached
and logs have been updated
– No undo is ever necessary
– Redo might be necessary on transactions that have
been logged but not physically updated
– Known as the NO-UNDO/REDO algorithm
Immediate Update (UNDO/REDO):
– Database being updated as transaction occurs
– However, log always force written first
– Partially completed transactions will have to be undone
– Committed transactions might have to be redone
– Known as the UNDO/REDO algorithm
– Variation on the scheme:
• Data is physically updated before commit
• Only requires UNDO
• Known as the UNDO/NO-REDO algorithm
• Logging
• Record REDO and UNDO information, for every update,
in a log.
– Sequential writes to log (put it on a separate disk).
– Minimal info (diff) written to log, so multiple updates
fit in a single log page.
Log: An ordered list of REDO/UNDO actions
– Log record contains the update's before/after information
(e.g. transaction ID, item/page affected, old and new data)
and additional control info
Caching of Disk Blocks:
– Disk blocks typically cached to main memory
– Changes made to cache block which is then written back
at some later time
– Many DBMSs even handle the low-level I/O themselves
rather than relying on the operating system
DBMS Caching:
– All database accesses check to see if required item is
in the cache first. If not, item is loaded into cache
– Dirty bit: Determines if cache block has been updated
and needs to be written back to disk
– Pin/Unpin bit: Is it OK to write block back to disk yet?
– In-place updating: Block is written back out to same
location. Overwrite original
– Shadowing: Block is written to new location
? Old copy is kept
? Before Image (BFIM) & After Image (AFIM)
• The Write-Ahead Logging (WAL) Protocol:
– Rule #1: Must force the log record for an update before the
corresponding data page gets to disk.
– Rule #2: Must write all log records for a transaction before
commit.
• The rule that all transactions follow in the WAL
protocol is "Write the log before you write the data".
When a transaction wants to update a record, it pins the
page containing the record in the main-memory buffer
pool, modifies the page in memory, generates an
undo/redo record, forces the undo/redo record to the
log, and unpins the page in the buffer pool. At some later
time, the page replacement algorithm or a checkpoint will
write the page back to the database.
• WAL protocol:
• Rule #1 (log record before data page) guarantees Atomicity.
• Rule #2 (all log records before commit) guarantees Durability.
• Each log record has a unique Log Sequence Number
(LSN). LSNs always increasing.
• Each data page contains a pageLSN. The LSN of the
most recent log record for an update to that page.
• System keeps track of flushedLSN. The max LSN
flushed so far.
• WAL: Before a page is written, pageLSN <= flushedLSN
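The pageLSN/flushedLSN check can be sketched as follows. This is an illustrative Python sketch with made-up names, not any real buffer manager:

```python
# Toy log manager tracking which LSNs have reached stable storage.
class LogManager:
    def __init__(self):
        self.next_lsn = 1
        self.flushed_lsn = 0     # max LSN known to be on disk

    def append(self, record):
        lsn = self.next_lsn
        self.next_lsn += 1
        return lsn               # record is buffered, not yet flushed

    def flush(self):
        self.flushed_lsn = self.next_lsn - 1   # everything so far is on disk

def can_write_page(page_lsn, log):
    # WAL rule: pageLSN <= flushedLSN before the page may go to disk.
    return page_lsn <= log.flushed_lsn

log = LogManager()
page_lsn = log.append(('update', 'T1', 'P7'))   # page now carries this LSN
print(can_write_page(page_lsn, log))  # False: log record not flushed yet
log.flush()
print(can_write_page(page_lsn, log))  # True: safe to write the page
```

When the check fails, the buffer manager first forces the log up to pageLSN and only then writes the data page.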
• Normal Execution of a transaction
• Series of reads & writes, followed by commit or abort.
We will assume that write is atomic on disk.
• In practice, additional details deal with non-atomic
writes.
• Strict 2PL; STEAL, NO-FORCE buffer management,
with Write-Ahead Logging.
Checkpoints in the System Log
• Checkpoint record written in log when all updated DB
buffers written out to disk
• Any committed transaction occurring before checkpoint
in log can be considered permanent (won’t have to be
redone after crash)
• Actions
– suspend execution of all transactions
– force-write all modified buffers to disk
– write checkpoint entry in log and force write log
– resume transactions
• Checkpointing: Periodically, the DBMS creates a
checkpoint, in order to minimize the time taken to
recover in the event of a system crash. It quiesces the
system (makes all currently executing transactions
pause), writes all dirty buffers to disk, and then allows
transactions to resume normal processing.
• The problem with this simple checkpoint is that it
makes the data unavailable for too long, possibly several
minutes, while the checkpoint is being done. There is
another technique called Fuzzy checkpoint, which does
not have this problem, because it does not quiesce the
system. Instead, for each buffer, the fuzzy checkpoint
procedure latches the buffer (gets an exclusive
semaphore on it), writes it to disk if it is dirty, and then
unlatches the buffer. In addition, fuzzy checkpoint
writes the ID’s of the currently active transactions to
the log. Fuzzy checkpoint just locks buffers one at a time
and releases them.
• How does the system recover from a crash? The crash
recovery algorithm reads the most recent checkpoint
information from the log, which yields a set of
transaction ID’s that were active at the time of the
checkpoint. Then it scans the log forward from the
checkpoint, reapplying every undo/redo record to the
database (this is called REDO ALL). During the forward
pass, it analyzes the log to determine which transactions
did not commit or abort before the crash. These
transactions are called the "losers." Then, the recovery
algorithm scans the log in reverse, undoing log records
for all the losers (this is called UNDO LOSERS).
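The forward REDO ALL pass and backward UNDO LOSERS pass can be sketched on a toy log (illustrative Python; record layout and names are made up):

```python
# Toy log starting at the most recent checkpoint.
log = [
    ('checkpoint', {'T1'}),                  # T1 active at checkpoint time
    ('update', 'T1', 'A', 'old_A', 'new_A'), # (txn, item, before, after)
    ('commit', 'T1'),
    ('update', 'T2', 'B', 'old_B', 'new_B'),
    # crash here: T2 neither committed nor aborted, so T2 is a loser
]

db = {'A': 'old_A', 'B': 'old_B'}

# REDO ALL: forward pass reapplies every after-image and notes
# which transactions finished (committed or aborted).
finished = set()
for rec in log:
    if rec[0] == 'update':
        db[rec[2]] = rec[4]
    elif rec[0] in ('commit', 'abort'):
        finished.add(rec[1])

losers = {rec[1] for rec in log if rec[0] == 'update'} - finished

# UNDO LOSERS: backward pass restores before-images of loser updates.
for rec in reversed(log):
    if rec[0] == 'update' and rec[1] in losers:
        db[rec[2]] = rec[3]

print(db)   # {'A': 'new_A', 'B': 'old_B'}: T1 redone, T2 undone
```

Here the analysis of who committed is folded into the forward pass, matching the description above of identifying the losers while scanning forward.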
• Additional Crash Issues: What happens if system
crashes during Analysis? During REDO? How do you limit
the amount of work in REDO ?
– Flush asynchronously in the background.
– Watch “hot spots”!
• How do you limit the amount of work in UNDO ?
– Avoid long- running transactions.
Introduction
PL/SQL Fundamentals
SQL> BEGIN
2 dbms_output.put_line('Welcome to PL/SQL');
3 END;
4 /
Debugging.
Executing PL/SQL
A named block (stored procedure) has the same shape; a sketch with
illustrative names:
SQL> CREATE OR REPLACE PROCEDURE greet (user_name IN VARCHAR2)
  2  IS
  3  BEGIN  -- `BEGIN' starts the executable section
  4    dbms_output.put_line('Welcome, ' || user_name || '!');
  5  END;
  6  /
What Is PL/SQL?
PL/SQL is a modern, block-structured programming
language. It provides several features that make
developing powerful database applications very
convenient. For example, PL/SQL provides procedural
constructs, such as loops and conditional statements,
that are not available in standard SQL.
PL/SQL code runs on the server, so using PL/SQL lets
you centralize significant parts of your database
applications for increased maintainability and security.
It also enables you to achieve a significant reduction
of network overhead in client/server applications.
SQL> begin
2 dbms_output.put_line('Hello world!');
3 end;
4 /
Hello world!
SQL> begin
2 dbms_output.put_line('This is a PL/SQL FAQ.');
3 end;
4 /
This is a PL/SQL FAQ.
Advantages of PL/SQL