
1. Event Ordering

■ A distributed system, however, has no common memory and no common clock.


Therefore, it is sometimes impossible to say which of two events occurred first.
■ Happened-before relation (denoted by →).
✦ If A and B are events in the same process, and A was executed before B, then A
→ B.
✦ If A is the event of sending a message by one process and B is the event of
receiving that message by another process, then A → B.
✦ If A → B and B → C then A → C.
✦ If two events, A and B, are not related by the → relation (that is, A did not
happen before B, and B did not happen before A), then we say that these two
events were executed concurrently.
✦ A space-time diagram can best illustrate the definitions of concurrency and
happened-before.
✦ The horizontal direction represents different processes and the vertical direction
represents time.
✦ The labeled vertical lines denote processes.
✦ The labeled dots denote events.
✦ A wavy line denotes a message sent from one process to another.
✦ Events are concurrent if and only if no path exists between them.

Relative Time for Three Concurrent Processes


Implementation

■ Associate a timestamp with each system event. Require that for every pair of
events A and B, if A → B, then the timestamp of A is less than the timestamp of B.
■ Within each process Pi a logical clock, LCi is associated. The logical clock can be
implemented as a simple counter that is incremented between any two successive
events executed within a process.
■ A process advances its logical clock when it receives a message whose timestamp
is greater than the current value of its logical clock: it sets the clock to one more
than the received timestamp.
■ If the timestamps of two events A and B are the same, then the events are
concurrent. We may use the process identity numbers to break ties and to create a
total ordering.
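The clock rules above can be sketched as a small class; the class and method names here are illustrative assumptions, not part of the notes.

```python
class LamportClock:
    """Logical clock LCi for one process Pi, kept as a simple counter."""

    def __init__(self, pid):
        self.pid = pid    # process identity number, used to break ties
        self.time = 0

    def tick(self):
        # Increment between any two successive events within the process.
        self.time += 1
        return self.time

    def send(self):
        # A message carries the sender's current timestamp.
        return self.tick()

    def receive(self, msg_time):
        # Advance past the message's timestamp if it is ahead of us.
        self.time = max(self.time, msg_time) + 1
        return self.time

    def stamp(self):
        # (time, pid) pairs compare lexicographically, giving a total order.
        return (self.time, self.pid)
```

With this scheme, if A → B then A's timestamp is less than B's, and equal clock values (concurrent events) are totally ordered by the process identity number.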
2. Distributed Mutual Exclusion (DME)
■ Assumptions
✦ The system consists of n processes; each process Pi resides at a different
processor.
✦ Each process has a critical section that requires mutual exclusion.
■ Requirement
✦ If Pi is executing in its critical section, then no other process Pj is executing in
its critical section.
■ We present three approaches (centralized, fully distributed, and token-passing)
to ensure the mutually exclusive execution of processes in their critical sections.
DME: Centralized Approach
■ One of the processes in the system is chosen to coordinate the entry to the critical
section.
■ A process that wants to enter its critical section sends a request message to the
coordinator.
■ The coordinator decides which process can enter the critical section next, and it
sends that process a reply message.
■ When the process receives a reply message from the coordinator, it enters its
critical section.
■ After exiting its critical section, the process sends a release message to the
coordinator and proceeds with its execution.
■ This scheme requires three messages per critical-section entry:
✦ request
✦ reply
✦ release
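The coordinator's bookkeeping can be sketched as follows; the class name and the use of a FIFO queue for pending requests are illustrative assumptions.

```python
from collections import deque

class Coordinator:
    """Centralized DME coordinator: grants the critical section to one
    process at a time and queues the remaining requests."""

    def __init__(self):
        self.holder = None        # process currently in its critical section
        self.waiting = deque()    # pending request messages, FIFO

    def request(self, pid):
        # Returns True (a 'reply' message) if entry is granted immediately.
        if self.holder is None:
            self.holder = pid
            return True
        self.waiting.append(pid)
        return False

    def release(self, pid):
        # 'release' message: hand the critical section to the next waiter.
        assert self.holder == pid
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder        # the process now receiving a 'reply'
```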

DME: Fully Distributed Approach


■ When process Pi wants to enter its critical section, it generates a new timestamp,
TS, and sends the message request (Pi, TS) to all other processes in the system.
■ When process Pj receives a request message, it may reply immediately or it may
defer sending a reply back.
■ When process Pi receives a reply message from all other processes in the system,
it can enter its critical section.
■ After exiting its critical section, the process sends reply messages to all its
deferred requests.
■ The decision whether process Pj replies immediately to a request(Pi, TS) message
or defers its reply is based on three factors:
✦ If Pj is in its critical section, then it defers its reply to Pi.
✦ If Pj does not want to enter its critical section, then it sends a reply immediately
to Pi.
✦ If Pj wants to enter its critical section but has not yet entered it, then it compares
its own request timestamp with the timestamp TS.
✔ If its own request timestamp is greater than TS, then it sends a reply immediately
to Pi (Pi asked first).
✔ Otherwise, the reply is deferred.
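The three-factor decision above can be sketched as a single function; the parameter names are illustrative, and timestamps are assumed to be (logical time, process id) pairs so that ties are broken by process identity.

```python
def should_reply_immediately(pj_in_cs, pj_wants_cs, pj_ts, request_ts):
    """Decide whether Pj replies now to a request(Pi, TS) message.

    pj_ts is Pj's own pending request timestamp (None if Pj has none);
    request_ts is the TS carried by Pi's request.
    """
    if pj_in_cs:
        return False              # Pj is in its critical section: defer
    if not pj_wants_cs:
        return True               # Pj is not competing: reply immediately
    # Both want the critical section: the earlier timestamp wins.
    return pj_ts > request_ts     # Pi asked first -> reply immediately
```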
Desirable Behavior of Fully Distributed Approach
■ Freedom from Deadlock is ensured.
■ Freedom from starvation is ensured, since entry to the critical section is scheduled
according to the timestamp ordering. The timestamp ordering ensures that
processes are served in first-come, first-served order.
■ The number of messages per critical-section entry is 2 × (n – 1).
This is the minimum number of required messages per critical-section entry when
processes act independently and concurrently.
Three Undesirable Consequences
■ The processes need to know the identity of all other processes in the system,
which makes the dynamic addition and removal of processes more complex.

■ If one of the processes fails, then the entire scheme collapses. This can be dealt
with by continuously monitoring the state of all the processes in the system.

■ Processes that have not entered their critical section must pause frequently to
assure other processes that they intend to enter the critical section. This protocol
is therefore suited for small, stable sets of cooperating processes.

Token-Passing Approach
■ Another method of providing mutual exclusion is to circulate a token among the
processes in the system.
■ A token is a special type of message that is passed around the system.
■ Possession of the token entitles the holder to enter the critical section.
■ Since there is only a single token, only one process can be in its critical section
at a time.
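On a logical ring, the token-passing idea can be sketched as below; the ring topology and the names used are illustrative assumptions (the notes do not fix a topology).

```python
class TokenRing:
    """Token-passing mutual exclusion on a logical ring of n processes."""

    def __init__(self, n):
        self.n = n
        self.token_at = 0       # process 0 initially holds the token

    def pass_token(self):
        # The holder forwards the single token to its ring successor.
        self.token_at = (self.token_at + 1) % self.n
        return self.token_at

    def may_enter(self, pid):
        # Only the current token holder may enter its critical section.
        return pid == self.token_at
```

Because there is exactly one token, `may_enter` is true for at most one process at any time, which is precisely the mutual-exclusion guarantee.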

3. Concurrency Control
■ Modify the centralized concurrency schemes to accommodate the distribution of
transactions.
■ Transaction manager coordinates execution of transactions (or subtransactions)
that access data at local sites.
■ Local transaction only executes at that site.
■ Global transaction executes at several sites.

Locking Protocols

■ There are several possible schemes. The first deals with the case where no data
replication is allowed; the others deal with cases where data can be replicated at
several sites.
Nonreplicated Scheme
✦ Each site maintains a local lock manager whose function is to administer the lock
and unlock requests.
✦ When a transaction wishes to lock data item Q at site Si, it simply sends a
message to the lock manager at site Si requesting a lock.
✦ Once it has been determined that the lock request can be granted, the lock
manager sends a message back to the initiator indicating that the lock request has
been granted.
✦ Simple implementation involves two message transfers for handling lock
requests, and one message transfer for handling unlock requests.
✦ Deadlock handling is more complex.
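A per-site lock manager for the nonreplicated scheme can be sketched as a small table of held locks; the class name and the dictionary representation are illustrative assumptions.

```python
class LockManager:
    """Local lock manager at one site: administers lock/unlock requests
    for the data items stored at that site."""

    def __init__(self):
        self.locks = {}                  # data item -> holding transaction

    def lock(self, txn, item):
        # Grant the lock iff the item is free (or already held by txn).
        if self.locks.get(item, txn) == txn:
            self.locks[item] = txn
            return True                  # 'granted' message to the initiator
        return False                     # request must wait

    def unlock(self, txn, item):
        # Handle an unlock request from the holding transaction.
        if self.locks.get(item) == txn:
            del self.locks[item]
```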

Single-Coordinator Approach
✦ The system maintains a single lock manager that resides in a single chosen site, Si.
✦ All lock and unlock requests are made at site Si.
✦ When a transaction needs to lock a data item, it sends a lock request to Si.
✦ The lock manager determines whether the lock can be granted immediately. If so,
it sends a message to the site at which the lock request was initiated.
✦ Otherwise, the request is delayed until it can be granted.
✦ The transaction can read the data item from any one of the sites at which a replica
of the data item resides.
✦ In the case of a write operation, all the sites where a replica of the data item
resides must be involved in the writing.
✦ Advantages:
o Simple implementation. It requires two messages for handling lock requests and
one message for handling unlock requests.
o Simple deadlock handling. Since all lock and unlock requests are made at one
site, the deadlock-handling algorithms can be applied directly to this
environment.
✦ Disadvantages:
o Bottleneck. The site Si becomes a bottleneck, since all requests must be
processed there.
o Vulnerability. If the site Si fails, the concurrency controller is lost.
■ The multiple-coordinator approach distributes the lock-manager function over
several sites.

Majority Protocol
■ The system maintains a lock manager at each site.
■ Each manager controls the locks for all the data or replicas of data stored at that
site.
■ When a transaction wishes to lock a data item Q that is replicated in n different
sites, it must send a lock request to more than one-half of the n sites in which Q is
stored.
■ Each lock manager determines whether the lock can be granted immediately. If it
cannot, the response is delayed until the request can be granted.
■ The transaction does not operate on Q until it has successfully obtained a lock on
a majority of the replicas.
■ Disadvantages:
o Implementation. The protocol is more complicated to implement. It requires
2(n/2 + 1) messages for handling lock requests and (n/2 + 1) messages for
handling unlock requests.
o Deadlock handling. The deadlock-handling algorithms must be modified; it is
possible for deadlock to occur even when only one data item is being locked.
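The majority rule can be sketched as follows; each replica site is modelled here as a plain dictionary mapping data items to the holding transaction, which is an illustrative simplification.

```python
def majority_lock(replica_sites, txn, item):
    """Majority-protocol sketch: ask every site holding a replica of `item`
    and succeed only if more than half of them grant the lock."""
    grants = []
    for site in replica_sites:
        if item not in site:          # the local lock manager can grant now
            site[item] = txn
            grants.append(site)
    # The lock is held only if a majority of the n replica sites granted it.
    if len(grants) > len(replica_sites) // 2:
        return True
    for site in grants:               # no majority: release the partial grants
        del site[item]
    return False
```

Releasing partial grants when no majority is reached illustrates why deadlock handling must change: two transactions can each hold half the replicas of a single item and block each other.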

Biased Protocol
■ Similar to the majority protocol, but requests for shared locks are given priority
over requests for exclusive locks.
■ The system maintains a lock manager at each site. Each manager manages the locks for
all the data items stored at that site.
■ Shared and exclusive locks are handled differently.
■ Shared locks. When a transaction needs to lock data item Q, it simply requests a lock on
Q from the lock manager at one site containing a replica of Q.
■ Exclusive locks. When a transaction needs to lock data item Q, it requests a lock on Q
from the lock manager at each site containing a replica of Q.
■ Less overhead on read operations than in the majority protocol, but additional
overhead on writes.
■ Like the majority protocol, deadlock handling is complex.
Primary Copy
■ One of the sites at which a replica resides is designated as the primary site. A
request to lock a data item is made at the primary site of that data item.
■ Concurrency control for replicated data is handled in a manner similar to that of
unreplicated data.
■ Simple implementation, but if the primary site fails, the data item is unavailable,
even though other sites may have a replica.

Timestamping
■ Two primary methods are used to generate unique timestamps; one is centralized,
and one is distributed.
■ Generate unique timestamps in distributed scheme:
✦ Each site generates a unique local timestamp.
✦ The global unique timestamp is obtained by concatenating the unique local
timestamp with the unique site identifier.
✦ Use a logical clock defined within each site to ensure the fair generation of
timestamps.
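The distributed generation scheme can be sketched as a generator per site; representing the concatenation as a (local time, site id) pair that compares lexicographically is an illustrative choice.

```python
import itertools

def timestamp_generator(site_id):
    """Yield globally unique timestamps for one site: the unique local
    timestamp concatenated with the unique site identifier."""
    clock = itertools.count(1)       # local logical clock at this site
    while True:
        yield (next(clock), site_id)
```

For fairness, the local logical clock would additionally be advanced past any larger timestamp seen in an incoming message, as in the event-ordering section; that step is omitted from this sketch.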

■ Timestamp-ordering scheme – combine the centralized concurrency-control
timestamp scheme with the 2PC protocol to obtain a protocol that ensures
serializability with no cascading rollbacks.
