
Operating Systems:
Internals and Design Principles
Eighth Edition
By William Stallings

Chapter 5
Concurrency: Mutual Exclusion and Synchronization
 Operating system design is concerned
with the management of processes and
threads:
 Multiprogramming
 Multiprocessing
 Distributed Processing
Multiple Applications
 invented to allow processing time to be shared among active applications

Structured Applications
 extension of modular design and structured programming

Operating System Structure
 OS themselves implemented as a set of processes or threads
Table 5.1  Some Key Terms Related to Concurrency
 Interleaving and overlapping
  can be viewed as examples of concurrent processing
  both present the same problems
 Uniprocessor – the relative speed of execution of processes cannot be predicted
  depends on activities of other processes
  the way the OS handles interrupts
  scheduling policies of the OS
 Sharing of global resources
 Difficult for the OS to manage the allocation of resources optimally
 Difficult to locate programming errors as results are not deterministic and reproducible
char chin, chout;   /* shared (global) variables */

void echo()
{
    chin = getchar();
    chout = chin;
    putchar(chout);
}
Process P1                 Process P2
.                          .
chin = getchar();          .
.                          chin = getchar();
chout = chin;              .
.                          chout = chin;
putchar(chout);            .
.                          putchar(chout);
.                          .
Race Condition
 Occurs when multiple processes or threads read and write shared data items
 The final result depends on the order of execution
  the “loser” of the race is the process that updates last and will determine the final value of the variable
Two processes want to access shared memory at the same time.
The final result depends on who runs precisely when.
Operating System Concerns
 Design and management issues raised by the existence of concurrency:
 The OS must:
  be able to keep track of the various processes
  allocate and de-allocate resources for each active process
  protect the data and physical resources of each process against interference by other processes
  ensure that processes and their outputs are independent of relative processing speed
Table 5.2  Process Interaction

Processes unaware of each other
 Relationship: competition
 Influence that one process has on the other:
  • results of one process independent of the action of others
  • timing of process may be affected
 Potential control problems:
  • mutual exclusion
  • deadlock (renewable resource)
  • starvation

Processes indirectly aware of each other (e.g., shared object)
 Relationship: cooperation by sharing
 Influence that one process has on the other:
  • results of one process may depend on information obtained from others
  • timing of process may be affected
 Potential control problems:
  • mutual exclusion
  • deadlock (renewable resource)
  • starvation
  • data coherence

Processes directly aware of each other (have communication primitives available to them)
 Relationship: cooperation by communication
 Influence that one process has on the other:
  • results of one process may depend on information obtained from others
  • timing of process may be affected
 Potential control problems:
  • deadlock (consumable resource)
  • starvation
Resource Competition
 Concurrent processes come into conflict when they
are competing for use of the same resource
 for example: I/O devices, memory, processor time, clock

In the case of competing processes, three control problems must be faced:
 • the need for mutual exclusion
 • deadlock
 • starvation
Illustration of Mutual Exclusion
Cooperation among processes by sharing
• e.g., shared variables/files/databases
• Data items may be accessed in reading and writing mode, and only the writing mode must be mutually exclusive
• Requirement: data coherence
Cooperation among processes by communication
• Communication provides a way to synchronize or coordinate the various activities
• This is done by messaging
• Mutual exclusion is therefore not a control requirement for this sort of cooperation
• Deadlock and starvation problems remain
Requirements for Mutual Exclusion
 Must be enforced
 A process that halts must do so without interfering with other processes
 No deadlock or starvation
 A process must not be denied access to a critical section when there is no other process using it
 No assumptions are made about relative process speeds or number of processes
 A process remains inside its critical section for a finite time only
Ways to arrive at mutual exclusion
 Software approaches
  Leave the responsibility to the processes that wish to execute concurrently
  Disadvantages: high processing overhead and risk of bugs
 Hardware approaches
  Special-purpose machine instructions
  Advantage: reduced overhead
 Some level of support within the OS or programming language
  Semaphores
  Monitors
First Attempt
 Busy waiting: the waiting process repeatedly reads the value of turn (a global memory location) until it is allowed to enter its critical section
 Disadvantages:
  Pace of execution is dictated by the slower of the two processes
  If one process fails, the other process is permanently blocked
Second attempt
 If one process fails outside the critical section, the other process is not blocked. But if it fails within the critical section, or after setting its flag, it blocks the other process
Third attempt
 Eliminates the problem in the second attempt. This guarantees mutual exclusion but creates deadlock, because each process can insist on its right to enter its critical section
Livelock
• The processes keep setting and resetting their flags alternately, and neither process can enter its critical section
• Any alteration in the relative speed of the two processes will break this cycle and allow one to enter the critical section
• This is called livelock
Peterson’s Solution
• Two process solution
• The two processes share two variables:
– int turn;
– Boolean flag[2]
• The variable turn indicates whose turn it is to
enter the critical section.
• The flag array is used to indicate if a process is
ready to enter the critical section. flag[i] = true
implies that process Pi is ready!
Algorithm for Process Pi
 Interrupt Disabling
  uniprocessor system
  disabling interrupts guarantees mutual exclusion
  Disadvantages:
   the efficiency of execution could be noticeably degraded
   this approach will not work in a multiprocessor architecture

 Special Machine Instructions
  Processor designers have proposed several machine instructions that carry out two actions atomically (read-write or read-test) in a single instruction fetch cycle
   Compare and Swap
   Exchange
 In a uniprocessor system concurrent processes can have only interleaved execution
 A process runs until it is interrupted
 To guarantee mutual exclusion it is enough to prevent a process from being interrupted
While (true) {
/* disable interrupts */;
/* critical section */;
/* enable interrupts */;
/* remainder */;
}
 Compare&Swap Instruction
  also called a “compare and exchange instruction”
  a compare is made between a memory value and a test value
  if the values are the same, a swap occurs
  carried out atomically
Compare and swap
 1. Checks a memory location (*word) against a test value (testval)
 2. If the memory location's current value equals testval, it is replaced by newval
 3. Otherwise it is left unchanged
Compare and swap
 Shared variable bolt is initialized to 0
 The only process that may enter its critical section is one that finds bolt equal to 0
 All other processes go into busy waiting mode
Exchange
Shared variable bolt is initialized to 0
Each process has a local variable key that is initialized to 1
Only a process that finds bolt equal to 0 enters its critical section
It excludes all other processes by setting bolt to 1
On exiting, it resets bolt to 0
Special Machine Instruction: Advantages
 Applicable to any number of processes on either a single processor or multiple processors sharing main memory
 Simple and easy to verify
 It can be used to support multiple critical sections; each critical section can be defined by its own variable
Special Machine Instruction:
Disadvantages
 Busy-waiting is employed, thus while a
process is waiting for access to a critical
section it continues to consume processor
time
 Starvation is possible when a process
leaves a critical section and more than
one process is waiting
 Deadlock is possible
Table 5.3  Common Concurrency Mechanisms

Semaphore: An integer value used for signaling among processes. Only three operations may be performed on a semaphore, all of which are atomic: initialize, decrement, and increment. The decrement operation may result in the blocking of a process, and the increment operation may result in the unblocking of a process. Also known as a counting semaphore or a general semaphore.

Binary Semaphore: A semaphore that takes on only the values 0 and 1.

Mutex: Similar to a binary semaphore. A key difference between the two is that the process that locks the mutex (sets the value to zero) must be the one to unlock it (sets the value to 1).

Condition Variable: A data type that is used to block a process or thread until a particular condition is true.

Monitor: A programming language construct that encapsulates variables, access procedures, and initialization code within an abstract data type. The monitor's variables may only be accessed via its access procedures, and only one process may be actively accessing the monitor at any one time. The access procedures are critical sections. A monitor may have a queue of processes that are waiting to access it.

Event Flags: A memory word used as a synchronization mechanism. Application code may associate a different event with each bit in a flag. A thread can wait for either a single event or a combination of events by checking one or multiple bits in the corresponding flag. The thread is blocked until all of the required bits are set (AND) or until at least one of the bits is set (OR).

Mailboxes/Messages: A means for two processes to exchange information that may also be used for synchronization.

Spinlocks: Mutual exclusion mechanism in which a process executes in an infinite loop waiting for the value of a lock variable to indicate availability.
Semaphore
 A variable used for signaling that has an integer value, upon which only three operations are defined:
  1) May be initialized to a nonnegative integer value
  2) The semWait operation decrements the value
  3) The semSignal operation increments the value
 There is no way to inspect or manipulate semaphores other than these three operations
 If a process is waiting for a signal, it is blocked until that signal is sent
Consequences
 There is no way to know before a process decrements a semaphore whether it will block or not
 There is no way to know which process will continue immediately on a uniprocessor system when two processes are running concurrently
 You don't know whether another process is waiting, so the number of unblocked processes may be zero or one
Semaphore Primitives
 A queue is used to hold processes waiting on the semaphore

Strong Semaphores
• the process that has been blocked the longest is released from the queue first (FIFO)

Weak Semaphores
• the order in which processes are removed from the queue is not specified
Binary Semaphore Primitives
Example of Semaphore Mechanism
Example of Semaphore Mechanism
Mutual Exclusion Using Semaphores
Processes Using Semaphore
Producer/Consumer Problem
 General statement:
  one or more producers are generating data and placing these in a buffer
  a single consumer is taking items out of the buffer one at a time
  only one producer or consumer may access the buffer at any one time
 The problem:
  ensure that the producer can't add data into a full buffer and the consumer can't remove data from an empty buffer
Producer
• producer:
while (true) {
/* produce item v */
b[in] = v;
in++;
}
Consumer
• consumer:
while (true) {
while (in <= out)
/*do nothing */;
w = b[out];
out++;
/* consume item w */
}
[Figure 5.8  Infinite Buffer for the Producer/Consumer Problem — slots b[1] through b[5] shown, with the occupied portion of the buffer (shaded) lying between the out and in pointers]


Figure 5.9  An Incorrect Solution to the Infinite-Buffer Producer/Consumer Problem Using Binary Semaphores

/* program producerconsumer */
int n;
binary_semaphore s = 1, delay = 0;
void producer()
{
    while (true) {
        produce();
        semWaitB(s);
        append();
        n++;
        if (n==1) semSignalB(delay);
        semSignalB(s);
    }
}
void consumer()
{
    semWaitB(delay);
    while (true) {
        semWaitB(s);
        take();
        n--;
        semSignalB(s);
        consume();
        if (n==0) semWaitB(delay);
    }
}
void main()
{
    n = 0;
    parbegin (producer, consumer);
}
Table 5.4  Possible Scenario for the Program of Figure 5.9

     Producer                        Consumer                        s    n    Delay
 1                                                                   1    0    0
 2   semWaitB(s)                                                     0    0    0
 3   n++                                                             0    1    0
 4   if (n==1) (semSignalB(delay))                                   0    1    1
 5   semSignalB(s)                                                   1    1    1
 6                                   semWaitB(delay)                 1    1    0
 7                                   semWaitB(s)                     0    1    0
 8                                   n--                             0    0    0
 9                                   semSignalB(s)                   1    0    0
10   semWaitB(s)                                                     0    0    0
11   n++                                                             0    1    0
12   if (n==1) (semSignalB(delay))                                   0    1    1
13   semSignalB(s)                                                   1    1    1
14                                   if (n==0) (semWaitB(delay))     1    1    1
15                                   semWaitB(s)                     0    1    1
16                                   n--                             0    0    1
17                                   semSignalB(s)                   1    0    1
18                                   if (n==0) (semWaitB(delay))     1    0    0
19                                   semWaitB(s)                     0    0    0
20                                   n--                             0   -1    0
21                                   semSignalB(s)                   1   -1    0

Note: White areas represent the critical section controlled by semaphores.
Figure 5.10  A Correct Solution to the Infinite-Buffer Producer/Consumer Problem Using Binary Semaphores
Figure 5.11  A Solution to the Infinite-Buffer Producer/Consumer Problem Using Semaphores

/* program producerconsumer */
semaphore n = 0, s = 1;
void producer()
{
    while (true) {
        produce();
        semWait(s);
        append();
        semSignal(s);
        semSignal(n);
    }
}
void consumer()
{
    while (true) {
        semWait(n);
        semWait(s);
        take();
        semSignal(s);
        consume();
    }
}
void main()
{
    parbegin (producer, consumer);
}
Circular Buffer

Figure 5.13  A Solution to the Bounded-Buffer Producer/Consumer Problem Using Semaphores
Implementation of Semaphores
 Imperative that the semWait and semSignal operations be implemented as atomic primitives
 Can be implemented in hardware or firmware
 Software schemes such as Dekker's or Peterson's algorithms can be used
 Use one of the hardware-supported schemes for mutual exclusion
Figure 5.14  Two Possible Implementations of Semaphores

(a) Compare and Swap Instruction

semWait(s)
{
    while (compare_and_swap(s.flag, 0, 1) == 1)
        /* do nothing */;
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */;
        /* block this process (must also set s.flag to 0) */;
    }
    s.flag = 0;
}

semSignal(s)
{
    while (compare_and_swap(s.flag, 0, 1) == 1)
        /* do nothing */;
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue */;
        /* place process P on ready list */;
    }
    s.flag = 0;
}

(b) Interrupts

semWait(s)
{
    inhibit interrupts;
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */;
        /* block this process and allow interrupts */;
    }
    else
        allow interrupts;
}

semSignal(s)
{
    inhibit interrupts;
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue */;
        /* place process P on ready list */;
    }
    allow interrupts;
}


Monitors
 Programming language construct that provides equivalent
functionality to that of semaphores and is easier to control
 Implemented in a number of programming languages
 including Concurrent Pascal, Pascal-Plus, Modula-2,
Modula-3, and Java
 Has also been implemented as a program library
 Software module consisting of one or more procedures, an
initialization sequence, and local data
Monitor Characteristics
 Local data variables are accessible only by the monitor's procedures and not by any external procedure
 A process enters the monitor by invoking one of its procedures
 Only one process may be executing in the monitor at a time
Synchronization
 Achieved by the use of condition variables that are
contained within the monitor and accessible only
within the monitor
 Condition variables are operated on by two
functions:
 cwait(c): suspend execution of the calling process on
condition c
 csignal(c): resume execution of some process blocked
after a cwait on the same condition
[Figure 5.15  Structure of a Monitor — entering processes queue at the Entrance of the monitor waiting area; the monitor contains local data, condition variables c1 through cn (each with a queue of processes blocked on cwait), procedures 1 through k, an urgent queue for processes suspended by csignal, and initialization code; processes leave via the Exit]


Figure 5.16  A Solution to the Bounded-Buffer Producer/Consumer Problem Using a Monitor
 When processes interact with one another, two fundamental requirements must be satisfied:
  synchronization: to enforce mutual exclusion
  communication: to exchange information
 Message passing is one approach to providing both of these functions
  works with distributed systems and with shared-memory multiprocessor and uniprocessor systems
Message Passing
 The actual function is normally provided in the form
of a pair of primitives:
send (destination, message)
receive (source, message)
 A process sends information in the form of a message
to another process designated by a destination
 A process receives information by executing the
receive primitive, indicating the source and the
message
Table 5.5  Design Characteristics of Message Systems for Interprocess Communication and Synchronization

Synchronization
 Send: blocking, nonblocking
 Receive: blocking, nonblocking, test for arrival
Addressing
 Direct: send; receive (explicit, implicit)
 Indirect: static, dynamic, ownership
Format
 Content
 Length: fixed, variable
Queueing Discipline
 FIFO
 Priority
Blocking send, blocking receive
 Both sender and receiver are blocked until the message is delivered
 Sometimes referred to as a rendezvous
 Allows for tight synchronization between processes
Nonblocking Send
Nonblocking send, blocking receive
• sender continues on, but receiver is blocked until the requested message arrives
• most useful combination
• sender can send one or more messages to a variety of destinations as quickly as possible
• example: a service process that exists to provide a service or resource to other processes

Nonblocking send, nonblocking receive
• neither party is required to wait
 Schemes for specifying processes in send and receive primitives fall into two categories:
  Direct addressing
  Indirect addressing
Direct Addressing
 Send primitive includes a specific identifier
of the destination process
 Receive primitive can be handled in one of
two ways:
 require that the process explicitly
designate a sending process
 effective for cooperating concurrent processes
 implicit addressing
 source parameter of the receive primitive possesses a
value returned when the receive operation has been
performed
Indirect Addressing
 Messages are sent to a shared data structure consisting of queues that can temporarily hold messages
 Queues are referred to as mailboxes
 One process sends a message to the mailbox and the other process picks up the message from the mailbox
 Allows for greater flexibility in the use of messages
[Figure 5.18  Indirect Process Communication — (a) one to one: S1 -> Mailbox -> R1; (b) many to one: S1..Sn -> Port -> R1; (c) one to many: S1 -> Mailbox -> R1..Rm; (d) many to many: S1..Sn -> Mailbox -> R1..Rm]


Figure 5.19  General Message Format

Header:
 Message Type
 Destination ID
 Source ID
 Message Length
 Control Information
Body:
 Message Contents


Figure 5.20  Mutual Exclusion Using Messages
Figure 5.21
A Solution to the
Bounded-Buffer
Producer/Consum
er Problem Using
Messages
Readers/Writers Problem
 A data area is shared among many processes
  some processes only read the data area (readers) and some only write to it (writers)
 Conditions that must be satisfied:
  1. any number of readers may simultaneously read the file
  2. only one writer at a time may write to the file
  3. if a writer is writing to the file, no reader may read it
Figure 5.22  A Solution to the Readers/Writers Problem Using Semaphores: Readers Have Priority
Table 5.6  State of the Process Queues for Program of Figure 5.23

Readers only in the system:
 • wsem set
 • no queues
Writers only in the system:
 • wsem and rsem set
 • writers queue on wsem
Both readers and writers with read first:
 • wsem set by reader
 • rsem set by writer
 • all writers queue on wsem
 • one reader queues on rsem
 • other readers queue on z
Both readers and writers with write first:
 • wsem set by writer
 • rsem set by writer
 • writers queue on wsem
 • one reader queues on rsem
 • other readers queue on z
Figure 5.23  A Solution to the Readers/Writers Problem Using Semaphores: Writers Have Priority
void reader(int i)
{
    message rmsg;
    while (true) {
        rmsg = i;
        send (readrequest, rmsg);
        receive (mbox[i], rmsg);
        READUNIT ();
        rmsg = i;
        send (finished, rmsg);
    }
}
void writer(int j)
{
    message rmsg;
    while (true) {
        rmsg = j;
        send (writerequest, rmsg);
        receive (mbox[j], rmsg);
        WRITEUNIT ();
        rmsg = j;
        send (finished, rmsg);
    }
}
void controller()
{
    while (true)
    {
        if (count > 0) {
            if (!empty (finished)) {
                receive (finished, msg);
                count++;
            }
            else if (!empty (writerequest)) {
                receive (writerequest, msg);
                writer_id = msg.id;
                count = count - 100;
            }
            else if (!empty (readrequest)) {
                receive (readrequest, msg);
                count--;
                send (msg.id, "OK");
            }
        }
        if (count == 0) {
            send (writer_id, "OK");
            receive (finished, msg);
            count = 100;
        }
        while (count < 0) {
            receive (finished, msg);
            count++;
        }
    }
}

Figure 5.24  A Solution to the Readers/Writers Problem Using Message Passing


Summary
 Principles of concurrency
  Race condition
  OS concerns
  Process interaction
  Requirements for mutual exclusion
 Mutual exclusion: hardware support
  Interrupt disabling
  Special machine instructions
 Semaphores
  Mutual exclusion
  Producer/consumer problem
  Implementation of semaphores
 Monitors
  Monitor with signal
  Alternate model of monitors with notify and broadcast
 Message passing
  Synchronization
  Addressing
  Message format
  Queueing discipline
  Mutual exclusion
 Readers/writers problem
  Readers have priority
  Writers have priority
