
Process Scheduling

• Maximize CPU use
• Quickly switch processes onto CPU for time
sharing
• Process gives up the CPU under two
conditions:
• I/O request
• After N units of time have elapsed (need a timer)
• Once a process gives up the CPU it is added
to the “ready queue”
• Process scheduler selects among available
processes in the ready queue for next
execution on CPU
Scheduling Queues
• OS Maintains scheduling queues of
processes
• Job queue – set of all processes in the
system
• Ready queue – set of all processes residing
in main memory, ready and waiting to
execute
• Device queues – set of processes waiting for
an I/O device
• Processes migrate among the various
queues
Ready Queue And Various I/O Device Queues
Representation of Process Scheduling

• Queuing diagram represents queues, resources, flows


CPU Switch From Process to Process
Types of Scheduling
Long-Term Scheduling
• Determines which programs are admitted to the system for
processing
• May be first-come-first-served
• Or according to criteria such as priority, I/O requirements or expected
execution time
• Controls the degree of multiprogramming
• More processes, smaller percentage of time each process is executed
Medium-Term
Scheduling
• Part of the swapping function
• Swapping-in decisions are based on the need to manage the degree
of multiprogramming
Short-Term Scheduling
• Known as the dispatcher
• Executes most frequently
• Invoked when an event occurs
• Clock interrupts
• I/O interrupts
• Operating system calls
• Signals
Schedulers
• Short-term scheduler (or CPU scheduler) – selects which process should be
executed next and allocates the CPU
• Sometimes the only scheduler in a system
• Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be fast)
• Long-term scheduler (or job scheduler) – selects which processes should be
brought into the ready queue
• Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ (may be
slow)
• The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
• I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts
• CPU-bound process – spends more time doing computations; few very long
CPU bursts
• Long-term scheduler strives for good process mix
Nonpreemptive vs.
Preemptive
• Non-preemptive
• Once a process is in the running state, it will continue until it terminates or
blocks itself for I/O
• Preemptive
• Currently running process may be interrupted and moved to ready state by
the OS
• Preemption may occur when a new process arrives,
on a clock-based interrupt, or periodically
Scheduling Criteria
• CPU utilization – keep the CPU as busy as
possible
• Throughput – number of processes that
complete their execution per time unit
(e.g., 5 per second)
• Turnaround time – amount of time to
execute a particular process
• Waiting time – total amount of time a
process has been waiting in the ready
queue
• Response time – amount of time it takes
from when a request was submitted until
the first response is produced, not output
(for time-sharing environment)
Optimization Criteria for Scheduling

• Max CPU utilization


• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
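These criteria can be computed directly from a completed schedule. A minimal sketch (the helper name `metrics` is illustrative, not from the slides):

```python
def metrics(arrival, burst, completion):
    """Per-process turnaround and waiting time:
    turnaround = completion - arrival
    waiting    = turnaround - burst   (time spent in the ready queue)"""
    turnaround = [c - a for a, c in zip(arrival, completion)]
    waiting = [t - b for t, b in zip(turnaround, burst)]
    return turnaround, waiting

# An FCFS run of three processes with bursts 24, 3, 3, all arriving at
# time 0 (the example used on the FCFS slides below):
t, w = metrics(arrival=[0, 0, 0], burst=[24, 3, 3], completion=[24, 27, 30])
avg_wait = sum(w) / len(w)   # (0 + 24 + 27) / 3 = 17
```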
Scheduling Algorithm

• First-Come, First-Served (FCFS)


• Shortest-Job-First Scheduling (SJF)
• Round-Robin Scheduling (RR)
• Priority Scheduling
• Multilevel Queue Scheduling
First-Come, First-Served (FCFS) Scheduling

• Consider the following three processes and their burst time


Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
• We use Gantt Chart to illustrate a particular schedule
P1 P2 P3
0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27


• Average waiting time: (0 + 24 + 27)/3 = 17
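The waiting times follow mechanically from the queue order: each process waits for the total burst time of everything ahead of it. A short sketch (function name illustrative):

```python
def fcfs_waiting_times(bursts):
    """FCFS: processes run to completion in queue order, so each
    process waits for the sum of the bursts ahead of it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3):
w1 = fcfs_waiting_times([24, 3, 3])   # [0, 24, 27], average 17
# Order P2, P3, P1 (the long job last):
w2 = fcfs_waiting_times([3, 3, 24])   # [0, 3, 6], average 3
```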
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

Waiting time for P1 = 6; P2 = 0; P3 = 3


Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect - short process behind long process
 Consider one CPU-bound and many I/O-bound processes
Shortest-Job-First (SJF)
• Associate with each process the length of its
next CPU burst
• Use these lengths to schedule the process with
the shortest time
• SJF is optimal – gives minimum average
waiting time for a given set of processes
• How do we know the length of the next
CPU burst?
• Could ask the user
• what if the user lies?
Example of SJF
 Consider the following four processes and their burst time

Process Arrival Time Burst Time


P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
• SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
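Note that the Gantt chart above treats all four processes as available at time 0. Under that assumption, SJF is just "run in order of increasing burst", which a short sketch can reproduce (function name illustrative):

```python
def sjf_waiting_times(bursts):
    """Nonpreemptive SJF with all processes available at time 0
    (as in the Gantt chart above): run in order of increasing burst."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

# P1=6, P2=8, P3=7, P4=3  ->  runs P4, P1, P3, P2
w = sjf_waiting_times([6, 8, 7, 3])   # [3, 16, 9, 0], average 7
```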


Shortest-remaining-time-first
Preemptive version of SJF is called shortest-remaining-time-
first
Example illustrating the concepts of varying arrival times and
preemption.
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Preemptive SJF Gantt Chart
P1 P2 P4 P1 P3
0 1 5 10 17 26

Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 =
6.5 msec
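The chart above can be reproduced with a unit-time simulation that, at every tick, runs the arrived process with the least remaining burst. A sketch (names illustrative; ties broken by lowest process index):

```python
def srtf_waiting_times(arrival, burst):
    """Shortest-remaining-time-first (preemptive SJF): each time unit,
    run the arrived, unfinished process with the least remaining burst."""
    n, t = len(arrival), 0
    remaining = list(burst)
    completion = [0] * n
    while any(remaining):
        ready = [i for i in range(n) if arrival[i] <= t and remaining[i] > 0]
        if not ready:          # CPU idle until the next arrival
            t += 1
            continue
        j = min(ready, key=lambda i: remaining[i])
        remaining[j] -= 1
        t += 1
        if remaining[j] == 0:
            completion[j] = t
    return [completion[i] - arrival[i] - burst[i] for i in range(n)]

# P1..P4 with arrivals 0,1,2,3 and bursts 8,4,9,5 (the example above):
w = srtf_waiting_times([0, 1, 2, 3], [8, 4, 9, 5])   # [9, 0, 15, 2]
avg = sum(w) / len(w)                                # 26/4 = 6.5
```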
Round Robin (RR)
• Each process gets a small unit of CPU time (time
quantum q). After this time has elapsed, the process
is preempted and added to the end of the ready
queue.
• If there are N processes in the ready queue and the
time quantum is q, then each process gets 1/N of
the CPU time in chunks of at most q time units at
once. No process waits more than (N-1)q time units.
• Timer interrupts every quantum to schedule next
process
• Performance
• q large ⇒ FIFO
• q small ⇒ q must be large with respect to context switch,
otherwise overhead is too high
Example of RR with Time Quantum = 4

 Consider the following three processes and their burst time

Process Burst Time


P1 24
P2 3
P3 3
• The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

• The average waiting time under the RR policy is often longer


• Typically, higher average turnaround than SJF, but better response
• q should be large compared to context switch time
• q is usually 10ms to 100ms, context switch < 10 usec
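The RR Gantt chart above is easy to regenerate: pop the front of the ready queue, run for at most one quantum, and requeue the process if it is unfinished. A sketch (names illustrative; all processes assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, q):
    """Round robin with quantum q, all processes arriving at time 0.
    Returns the schedule as (pid, start, end) time slices."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    schedule, t = [], 0
    while queue:
        pid, rem = queue.popleft()
        run = min(q, rem)
        schedule.append((pid, t, t + run))
        t += run
        if rem > run:                    # unfinished: back of the queue
            queue.append((pid, rem - run))
    return schedule

sched = round_robin([24, 3, 3], q=4)
# First slices: P1 0-4, P2 4-7, P3 7-10; P1 then runs alone to time 30
```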
Time Quantum and Context Switch Time
• The performance of the RR algorithm depends on the
size of the time quantum. If the time quantum is
extremely small (say, 1 millisecond), RR can result in a
large number of context switches.
Turnaround Time Varies With The Time Quantum
• The average turnaround time of a set of processes does
not necessarily improve as the time-quantum size
increases. In general, the average turnaround time can be
improved if most processes finish their next CPU burst in a
single time quantum.
Priority Scheduling
• A priority number (integer) is associated with each
process
• The CPU is allocated to the process with the highest
priority (smallest integer ≡ highest priority)
• Preemptive
• Nonpreemptive

• SJF is priority scheduling where priority is the inverse of the
predicted next CPU burst time
• Problem ⇒ Starvation – low-priority processes may
never execute
• Solution ⇒ Aging – as time progresses, increase the
priority of the process
Example of Priority Scheduling
 Consider the following five processes and their burst time

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

• Priority scheduling Gantt Chart


P2 P5 P1 P3 P4
0 1 6 16 18 19

• Average waiting time = 8.2 msec
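With all processes available at time 0, nonpreemptive priority scheduling is just "run in order of increasing priority number". A sketch (function name illustrative) reproducing the 8.2 msec average:

```python
def priority_waiting_times(bursts, priorities):
    """Nonpreemptive priority scheduling, all processes available at
    time 0; smallest integer = highest priority."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

# P1..P5, bursts 10,1,2,1,5, priorities 3,1,4,5,2 -> runs P2,P5,P1,P3,P4
w = priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])
avg = sum(w) / len(w)      # (6 + 0 + 16 + 18 + 1) / 5 = 8.2
```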


Combining Priority Scheduling and RR

 System executes the highest priority process; processes with the same
priority will be run using round-robin.

 Consider the following five processes and their burst time

Process Burst Time Priority


P1 4 3
P2 5 2
P3 8 2
P4 7 1
P5 3 3
• Priority-with-RR Gantt chart (figure not shown): P4, the highest-priority
process, runs first; P2 and P3 then share the CPU round-robin; finally P1
and P5 share it round-robin


Multilevel Queue
• Ready queue is partitioned into separate queues, e.g.:
• foreground (interactive)
• background (batch)
• Process permanently in a given queue
• Each queue has its own scheduling algorithm:
• foreground – RR
• background – FCFS
• Scheduling must be done between the queues:
• Fixed priority scheduling; (i.e., serve all from foreground
then from background). Possibility of starvation.
• Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes; i.e., 80% to
foreground in RR
• 20% to background in FCFS
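The fixed-priority variant can be sketched as a single dispatch rule: serve the foreground (RR) queue whenever it is nonempty, and let the background (FCFS) queue run only otherwise. A minimal sketch (names and queue layout are illustrative, not from the slides):

```python
from collections import deque

def multilevel_step(foreground, background, q):
    """One dispatch decision under fixed-priority multilevel queues.
    Queues hold (pid, remaining_burst); returns (pid, run_length)."""
    if foreground:                       # RR queue always served first
        pid, rem = foreground.popleft()
        run = min(q, rem)
        if rem > run:
            foreground.append((pid, rem - run))   # RR: requeue unfinished
        return pid, run
    if background:                       # FCFS: run to completion
        pid, rem = background.popleft()
        return pid, rem
    return None, 0

fg = deque([("interactive", 6)])
bg = deque([("batch", 20)])
first = multilevel_step(fg, bg, q=4)     # the interactive job runs first
```

Note the starvation risk from the slide: the batch job never runs while any interactive work remains queued.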
Separate Queue For Each Priority
Multilevel Queue Scheduling
Process Scheduling
Example
• Example set of
processes, consider
each a batch job

– Service time represents total execution time


First-Come-
First-Served
• Each process joins the Ready queue
• When the current process ceases to execute, the longest process in
the Ready queue is selected
First-Come-
First-Served
• A short process may have to wait a very long time before it can
execute
• Favors CPU-bound processes
• I/O processes have to wait until CPU-bound process completes
Round Robin
• Uses preemption based on a clock
• also known as time slicing, because each process is given a slice of time
before being preempted.
Round Robin
• Clock interrupt is generated at periodic intervals
• When an interrupt occurs, the currently running process is placed in
the ready queue
• Next ready job is selected
Effect of Size of
Preemption Time Quantum
Effect of Size of
Preemption Time Quantum
‘Virtual Round Robin’
Shortest Process Next
• Nonpreemptive policy
• Process with shortest expected processing time is selected next
• Short process jumps ahead of longer processes
Shortest Process Next
• Predictability of longer processes is reduced
• If the estimated time for a process is not correct, the operating system may
abort it
• Possibility of starvation for longer processes
Shortest Remaining
Time
• Preemptive version of shortest process next policy
• Must estimate processing time and choose the shortest
Threads
Motivation
• Most modern applications are multithreaded
• Threads run within application
• Multiple tasks with the application can be
implemented by separate threads
• Update display
• Fetch data
• Spell checking
• Answer a network request
• Process creation is heavy-weight while thread
creation is light-weight
• Can simplify code, increase efficiency
• Kernels are generally multithreaded
Benefits of Threads
• Takes less time to create a new thread than a process
• Less time to terminate a thread than a process
• Switching between two threads takes less time than switching between
processes
• Threads can communicate with each other
• without invoking the kernel
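Because all threads of a process share its address space, two threads can communicate through an ordinary shared variable, with only a lock for synchronization. A Python sketch (illustrative; the lock protects the shared counter from lost updates):

```python
import threading

# Threads share their process's memory, so communication is just a
# shared variable plus a lock -- no kernel IPC (pipes, messages) needed.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:                 # synchronize access to shared state
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 4000: all four threads updated the same memory
```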
Thread Execution States
• Operations associated with a change in thread state:
• Spawn (another thread)
• Block
• Issue: will blocking a thread block other, or all, threads?
• Unblock
• Finish (thread)
• Deallocate register context and stacks
User and Kernel Threads
• Support for threads may be provided at two different levels:
• User threads - are supported above the kernel and are managed without kernel support,
primarily by a user-level threads library.
• Kernel threads - are supported by and managed directly by the operating system.

• Virtually all contemporary systems support kernel threads:


• Windows, Linux, and Mac OS X
Categories of
Thread Implementation
• User Level Thread (ULT)

• Kernel-Level Thread (KLT), also called:


• kernel-supported threads
• lightweight processes.
User-Level Threads
• All thread management is
done by the application
• The kernel is not aware of
the existence of threads
Kernel-Level Threads
• Kernel maintains context
information for the process and
the threads
• No thread management done by
application
• Scheduling is done on a thread
basis
• Windows is an example of this
approach
Advantages of KLT
• The kernel can simultaneously schedule multiple threads from the
same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another
thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantage of KLT
• The transfer of control from one thread to another within the same
process requires a mode switch to the kernel
Combined Approaches
• Thread creation done in the
user space
• Bulk of scheduling and
synchronization of threads by
the application

• Example is Solaris
Multithreading
• The ability of an OS to
support multiple,
concurrent paths of
execution within a
single process.
Single Thread
Approaches
• MS-DOS supports a single
user process and a single
thread.
• Some UNIX systems support
multiple user processes but
only one thread per process
Multithreading
• Java run-time environment
is a single process with
multiple threads
• Multiple processes and
threads are found in
Windows, Solaris, and many
modern versions of UNIX
One or More Threads in Process
• Each thread has
• An execution state (running, ready, etc.)
• Saved thread context when not running
• An execution stack
• Some per-thread static storage for local variables
• Access to the memory and resources of its process (all threads of a process
share this)
One view…

• One way to view a thread is as an independent program counter
operating within a process.
Threads vs. processes
Single and Multithreaded Processes
Thread use in a
Single-User System
• Foreground and background work
• Asynchronous processing
• Speed of execution
• Modular program structure
Threads
• Several actions that affect all of the threads in a process
• The OS must manage these at the process level.
• Examples:
• Suspending a process involves suspending all threads of the process
• Termination of a process, terminates all threads within the process
Activities similar
to Processes
• Threads have execution states and may synchronize with one another.
• Similar to processes
• We look at these two aspects of thread functionality in turn.
• States
• Synchronisation
