
Embedded and Real Time Systems Process and Operating Systems

Unit III

Introduction:

Embedded computing applications exist in a spectacular range of size and complexity, in areas ranging from home automation and cell phones to automobiles and industrial controllers. Most of these applications demand such functionality, performance and reliability from the software that simple and direct assembly language programming of the processors is clearly ruled out. Moreover, as a distinguishing feature from general-purpose computing, a large part of the computation is “real-time” or time constrained, and also reactive or “external event driven”, since such systems generally interface strongly with the external environment through a variety of devices. Thus, an operating system is generally used.

An operating system facilitates the development of application programs by making available a number of services which would otherwise have to be coded by the application programmer. The application programs “interface” with the hardware through operating system services and functions. In this chapter we present the fundamental concepts of a real-time operating system in a generic context. Computations for industrial automation are typified by the following characteristics:

 The computation is reactive in nature, that is, it is carried out with respect to real-world physical signals and events. Therefore industrial computers must have extensive input-output subsystems that interface them with physical signals.
 In general, the required speed of computation is not very high, because the dynamics of variables such as temperature are very slow. For electromechanical systems the speed requirement is higher; however, their processing is often done by dedicated coprocessors.
 The algorithmic complexity of the computation is generally not very high, although it is increasing due to the deployment of sophisticated control and signal processing algorithms. Therefore general-purpose processors are sufficient in almost all cases.
 A large part of the computational tasks are repetitive or cyclic, that is, their execution is invoked periodically in time. Many of these tasks are also real-time, in the sense that there are deadlines associated with their execution. When there are a large number of such tasks to be executed on a single processor, appropriate scheduling strategies are required. Since the number of tasks is mostly fixed, simple static scheduling policies are generally adequate.
 Although computational demands are not very high, the computation is critical in nature. Indeed, the cost ratios of industrial controllers and the plant equipment they control can well be in excess of one thousand. This requirement is coupled with the fact that such controllers are often destined to work in harsher industrial environments. Thus, the reliability requirements of industrial control computers are very high. In fact, this is one of the main reasons behind the high price of such systems compared to commercial computers of comparable or even higher data processing capability.

S.SENTHIL KUMAR, Associate Professor, Dept.of ECE, GRTIET, Tiruttani.



Operating Systems (OS) Basics:

An Operating System is a collection of programs that provides an interface between application programs and the computer system (hardware). Its primary function is to provide application programmers with an abstraction of the system resources, such as memory, input-output and the processor, which enhances the convenience, efficiency and correctness of their use. These programs or functions within the OS provide various kinds of services to the application programs. The application programs, in turn, call these programs to avail of such services. Thus the application programs can view the computer resources as abstract entities (for example, a block of memory can be used as a named sequential file with the abstract Open, Close, Read and Write operations) without needing to know the low-level hardware details (such as the addresses of the memory blocks). To get a better idea of such services provided by the OS, the reader may like to refer to the DOS services for the IBM PC compatibles.
The natural way to view computation in a typical modern computer system is in the form of a
number of different programs, all of which, apparently, run in parallel. However, very often, all the
programs in the system are executed on a single physical CPU or processor. Each program runs for a
small continuous duration at a time, before it is stopped and another program begins to execute. If
this is done rapidly enough, it appears as if all programs are running simultaneously. These programs
very often perform independent computations, such as the programs executing in different windows
on a PC. In real time systems, most often, such programs cooperate with each other, by exchanging
data and synchronizing each other’s execution, to achieve the overall functionality and performance
of the system.

Types of Operating Systems:

 Stand-Alone Operating Systems
 Network Operating Systems
 Embedded Operating Systems

Stand-Alone Operating System: It is a complete operating system that works on a desktop or notebook computer. Examples of stand-alone operating systems are:
 DOS
 Windows 2000 Professional
 Mac OS X

Network Operating System: It is an operating system that provides extensive support for
computer networks. A network operating system typically resides on a server. Examples of a network
operating system are:

 Windows 2000 Server
 Unix
 Linux
 Solaris


Embedded Operating System: We can find this operating system on handheld computers and small devices; it resides on a ROM chip. Examples of embedded operating systems are:

 Windows CE
 Pocket PC 2002
 Palm OS
The above classification is based on the computing hardware environment towards which the OS is targeted. All three types can be either of a real-time or a non-real-time type. For example, VxWorks is an RTOS of the first category, RT-Linux and ARTS are of the second category, and Windows CE is of the third category.

Real-Time Operating Systems:

A Real-Time OS (RTOS) is an OS with special features that make it suitable for building real-
time computing applications also referred to as Real-Time Systems (RTS). An RTS is a (computing)
system where correctness of computing depends not only on the correctness of the logical result of
the computation, but also on the result delivery time. An RTS is expected to respond in a timely,
predictable way to unpredictable external stimuli.
Real-time systems can be categorized as Hard or Soft. For a Hard RTS, the system is taken to
have failed if a computing deadline is not met. In a Soft RTS, a limited extent of failure in meeting
deadlines results in degraded performance of the system and not catastrophic failure. The correctness
and performance of an RTS is therefore not measured in terms of parameters such as, average
number of transactions per second as in transactional systems such as databases.
A good RTOS is one that enables bounded (predictable) behaviour under all system load scenarios. Note, however, that the RTOS by itself cannot guarantee system correctness; it is only an enabling technology. That is, it provides the application programmer with facilities using which a correct application can be built. Speed, although important for meeting the overall requirements, does not by itself meet the requirements for an RTOS.

The following are important requirements that an OS must meet to be considered an RTOS in the
contemporary sense.
A. The operating system must be multithreaded and preemptive, i.e., it must handle multiple threads and be able to preempt tasks if necessary.
B. The OS must support priority of tasks and threads.
C. A system of priority inheritance must exist. Priority inheritance is a mechanism to ensure
that lower priority tasks cannot obstruct the execution of higher priority tasks.
D. The OS must support various types of thread/task synchronization mechanisms.
E. For predictable response:
a. The time for every system function call to execute should be predictable and
independent of the number of objects in the system.
b. Non-preemptable portions of kernel functions necessary for interprocess
synchronization and communication are highly optimized, short and deterministic.


c. Non-preemptable portions of the interrupt handler routines are kept small and
deterministic
d. Interrupt handlers are scheduled and executed at appropriate priority
e. The maximum time during which interrupts are masked by the OS and by device
drivers must be known.
f. The maximum time that device drivers use to process an interrupt, and specific IRQ
information relating to those device drivers, must be known.
g. The interrupt latency (the time from interrupt to task run) must be predictable and
compatible with application requirements
F. For fast response:
a. Run-time overhead is decreased by reducing unnecessary context switches.
b. Important timings such as context switch time, interrupt latency and semaphore get/release
latency must be kept to a minimum.

Programs, Processes, Tasks and Threads:


The above four terms are often found in the literature on operating systems in similar contexts. All of them refer to a unit of computation.
A program is a general term for a unit of computation and is typically used in the context of programming. A program is a static, algorithmic description consisting of instructions, for example:
int main()
{
    int i, prod = 1;
    for (i = 1; i <= 10; ++i)   /* start at 1: multiplying by 0 would zero prod */
        prod = prod * i;
    return 0;
}
A process refers to a program in execution. A process is an independently executable unit handled by an operating system. Sometimes, to ensure better utilisation of computational resources, a process is further broken up into threads. Several copies of a program may run simultaneously or at different times. A process has its own state:
o registers;
o memory.
All multiprogramming operating systems are built around the concept of processes. The OS interleaves the execution of several processes to maximize CPU usage, allocates resources to processes, and also supports inter-process communication. A process can thus also be defined as a dynamic group of instructions under execution.


Threads are sometimes referred to as lightweight processes because many threads can run within a single process, sharing its address space, so that switching between them incurs little additional overhead.
A task is a generic term which refers to an independently schedulable unit of computation, and is used typically in the context of scheduling of computation on the processor. It may refer to a single process or to a collection of processes. A task is a functional description of a connected set of operations.

Multitasking:
A multitasking environment allows applications to be constructed as a set of independent tasks, each
with a separate thread of execution and its own set of system resources. The inter-task
communication facilities allow these tasks to synchronize and coordinate their activity. Multitasking
provides the fundamental mechanism for an application to control and react to multiple, discrete real-
world events and is therefore essential for many real-time applications. Multitasking creates the
appearance of many threads of execution running concurrently when, in fact, the kernel interleaves
their execution on the basis of a scheduling algorithm. This also leads to efficient utilisation of the
CPU time and is essential for many embedded applications where processors are limited in
computing speed due to cost, power, silicon area and other constraints. In a multi-tasking operating
system it is assumed that the various tasks are to cooperate to serve the requirements of the overall
system. Co-operation will require that the tasks communicate with each other and share common
data in an orderly and disciplined manner, without creating undue contention and deadlocks. The
way in which tasks communicate and share data is to be regulated such that communication or shared data access errors are prevented and data which is private to a task is protected. Further, tasks may be dynamically created and terminated by other tasks, as and when needed. Tasks may be synchronous or asynchronous. Synchronous tasks may recur at different rates. Processes run at different rates based on the computational needs of the tasks.

Example: engine control


Tasks:

 spark control

 crankshaft sensing

 fuel/air mixture

 oxygen sensor

 Kalman filter

To realise such a system, the following major functions are to be carried out.

A. Process Management

• interrupt handling

• task scheduling and dispatch

• create/delete, suspend/resume task

• manage scheduling information

– priority, scheduling policy, etc

B. Interprocess Communication and Synchronization

• Code, data and device sharing

• Synchronization, coordination and data exchange mechanisms

• Deadlock and Livelock detection

C. Memory Management

• dynamic memory allocation

• memory locking

• Services for file creation, deletion, reposition and protection

D. Input/Output Management

• Handles request and release functions and read, write functions for a variety of peripherals.


Task graphs:
 Tasks may have data dependencies---they must execute in a certain order.
 A task graph shows data/control dependencies between processes.
 Task: a connected set of processes.
 Task set: one or more tasks.
 The task graph assumes that all processes in each task run at the same rate and that tasks do not communicate. In reality, some amount of inter-task communication is necessary.
 It is hard to guarantee immediate response for multi-rate communication (for example, a media player handling audio and video at different rates).
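The dependency idea above can be sketched in C. The task graph, the process names and the readiness check below are illustrative assumptions, not part of any real RTOS API: a process is released only when all of its predecessors in the graph have completed.

```c
#include <assert.h>

/* Hypothetical task graph over four processes: dep[p][q] = 1 means
   process p depends on (must run after) process q. */
#define NPROC 4

static const int dep[NPROC][NPROC] = {
    /* P0 */ {0, 0, 0, 0},   /* no predecessors         */
    /* P1 */ {1, 0, 0, 0},   /* P1 depends on P0        */
    /* P2 */ {1, 0, 0, 0},   /* P2 depends on P0        */
    /* P3 */ {0, 1, 1, 0},   /* P3 depends on P1 and P2 */
};

static int done[NPROC];      /* 1 once a process has finished */

/* A process may start only when every predecessor has finished. */
int is_ready(int p)
{
    for (int q = 0; q < NPROC; ++q)
        if (dep[p][q] && !done[q])
            return 0;
    return 1;
}
```

A scheduler built on such a graph would repeatedly pick some ready, not-yet-done process and run it, which yields a valid execution order for the dependencies.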

Process Management:
On a computer system with only one processor, only one task can run at any given time,
hence the other tasks must be in some state other than running. The number of other states, the names
given to those states and the transition paths between the different states vary with the operating
system. A typical state diagram is given in Figure 4 and the various states are described below.

Task States
♦ Running: This is the task which has control of the CPU. It will normally be the task which has
the highest current priority among the tasks which are ready to run.
♦ Ready: There may be several tasks in this state. The attributes of the task and the resources
required to run it must be available for it to be placed in the 'ready' state.
♦ Waiting: The execution of tasks placed in this state has been suspended because the task
requires some resource which is not available, or because the task is waiting for some signal
from the plant, e.g., input from the analog-to-digital converter, or the task is waiting for the
elapse of time.
♦ New: The operating system is aware of the existence of this task, but the task has not been
allocated a priority and a context and has not been included in the list of schedulable tasks.
♦ Terminated: The task has completed (or has been aborted) and is no longer considered by the
operating system for scheduling, although its code may still be resident in the memory of the computer.
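The states listed above can be sketched as a small state machine in C. The state names, event strings and transition function here are illustrative assumptions chosen to match the description, not a real RTOS interface:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the task life-cycle states described above. */
typedef enum { TASK_NEW, TASK_READY, TASK_RUNNING,
               TASK_WAITING, TASK_TERMINATED } task_state_t;

/* Legal transitions of the state diagram: admit (new -> ready),
   dispatch, preempt, block, wake and exit.
   Returns the new state, or -1 for an illegal transition. */
int transition(task_state_t s, const char *event)
{
    if (s == TASK_NEW     && !strcmp(event, "admit"))    return TASK_READY;
    if (s == TASK_READY   && !strcmp(event, "dispatch")) return TASK_RUNNING;
    if (s == TASK_RUNNING && !strcmp(event, "preempt"))  return TASK_READY;
    if (s == TASK_RUNNING && !strcmp(event, "block"))    return TASK_WAITING;
    if (s == TASK_WAITING && !strcmp(event, "wake"))     return TASK_READY;
    if (s == TASK_RUNNING && !strcmp(event, "exit"))     return TASK_TERMINATED;
    return -1;   /* e.g. a waiting task cannot be dispatched directly */
}
```

Note that a waiting task must first be woken to ready before it can be dispatched, exactly as the state diagram requires.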


Figure: The various states a task can be in during its execution life cycle under an RTOS

Task State Transitions:

When a task is “spawned”, either by the operating system or by another task, it has to be created, which involves loading it into memory and creating and updating certain OS data structures, such as the Task Control Block, necessary for running the task within the multi-tasking environment. During this time the task is in the new state. Once these steps are over, it enters the ready state, where it waits. At this time it is within the view of the scheduler and is considered for execution according to the scheduling policy. A task is made to enter the running state from the ready state by the operating system dispatcher when the scheduler determines the task to be the one to be run according to its scheduling policy. While the task is running, it may execute a normal or abnormal exit according to the program logic, in which case it enters the terminated state and is then removed from the view of the OS. Software or hardware interrupts may also occur while the task is running. In such a case, depending on the priority of the interrupt, the current task may be transferred to the ready state to wait for its next time allocation by the scheduler. Finally, a task may need to wait at times during its course of execution, either due to requirements of synchronization with other tasks or for completion of some service, such as I/O, that it has requested. During such a time it is in the waiting state. Once the synchronization requirement is fulfilled, or the requested service is completed, it is returned to the ready state to again wait its turn to be scheduled.


Figure: Processes as stored in memory


Figure: Trace of process

Timing specifications on processes:

 Release time: the time at which a process becomes ready.

 Deadline: the time by which a process must finish.

Rate requirements on processes:

 Period: the interval between process activations.

 Rate: the reciprocal of the period.

 A process may be initiated again before its previous execution completes, so several copies of the process may run at once.

Timing violations: what happens if a process does not finish by its deadline?
 Hard deadline: the system fails if the deadline is missed.

 Soft deadline: the user may notice, but the system does not necessarily fail.
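The hard/soft distinction can be illustrated with a small sketch; the struct fields and return codes below are hypothetical, chosen only to mirror the definitions above:

```c
#include <assert.h>

/* Timing specification of one process (times in arbitrary ticks). */
typedef struct {
    int release;     /* time at which the process becomes ready */
    int deadline;    /* time by which it must finish            */
    int hard;        /* 1 = hard deadline, 0 = soft deadline    */
} timing_t;

/* Classify a completed execution:
   0 = deadline met, 1 = soft miss (degraded), 2 = hard failure. */
int check_deadline(const timing_t *t, int finish_time)
{
    if (finish_time <= t->deadline)
        return 0;
    return t->hard ? 2 : 1;
}
```

A monitoring layer in a real system might log soft misses but raise a fault (or reset) on a hard failure.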

Process execution characteristics:

The process execution time Ti is the execution time in the absence of preemption, measured in time units such as seconds or clock cycles. The worst-case and best-case execution times may be useful in some cases. The execution time may vary with data dependencies between processes, the memory system and the CPU pipeline.


CPU utilization:
It is the fraction of the CPU time that is spent doing useful work, often calculated assuming no scheduling overhead:

U = (CPU time for useful work) / (total available CPU time)

  = [ Σ over t1 ≤ t ≤ t2 of T(t) ] / (t2 – t1)

  = T / t

where T(t) is the useful computation performed at time t, T is the total useful computation time, and t = t2 – t1 is the length of the observation window.
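As a sketch, the utilization over a window can be computed by summing the CPU's busy intervals inside that window; the function and its interface are illustrative assumptions:

```c
#include <assert.h>

/* Each entry of busy[] is one interval [start, end] during which the
   CPU did useful work inside the observation window [t1, t2]. */
double cpu_utilization(double busy[][2], int n, double t1, double t2)
{
    double useful = 0.0;
    for (int i = 0; i < n; ++i)
        useful += busy[i][1] - busy[i][0];   /* sum of useful time */
    return useful / (t2 - t1);               /* U = T / (t2 - t1)  */
}
```

For example, a CPU busy for 4 ticks out of an 8-tick window has U = 0.5.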

Task Control Functions:


RTOSs provide functions to spawn, initialise and activate new tasks. They provide functions to gather information on existing tasks in the system, for task naming, for checking the state of a given task, and for setting options for task execution, such as use of a co-processor or specific memory models, as well as for task deletion. Deletion often requires special precautions, especially with respect to semaphores, for tasks sharing memory.

Task Context:
Whenever a task is switched its execution context, represented by the contents of the program
counter, stack and registers, is saved by the operating system into a special data structure called a
task control block so that the task can be resumed the next time it is scheduled. Similarly the context
has to be restored from the task control block when the task state is set to running. The information
related to a task stored in the TCB is shown below.

Task Control Block :

o Task ID: the unique identifier for a task

o Address Space : the address ranges of the data and codeblocks of the task loaded in
memory including statically and dynamically allocated blocks

o Task Context: includes the task’s program counter(PC) , the CPU registers and
(optionally) floating-point registers, a stack for dynamic variables and function calls, the
stack pointer (SP), I/O device assignments, a delay timer, a time-slice timer and kernel
control structures

o Task Parameters : includes task type, event list

o Scheduling Information : priority level, relative deadline, period, state

o Synchronization Information: semaphores, pipes, mailboxes, message queues, file handles, etc.

o Parent and Child Tasks
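The fields above can be collected into a struct as a rough sketch. The field names and types are assumptions for illustration only; real RTOS TCBs differ in layout and detail:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative Task Control Block; not the layout of any real RTOS. */
typedef struct tcb {
    int         task_id;      /* unique identifier                     */
    void       *code_base;    /* address space: code block             */
    void       *data_base;    /* address space: data block             */
    uint32_t    pc;           /* saved program counter (task context)  */
    uint32_t    regs[16];     /* saved CPU registers                   */
    uint32_t   *sp;           /* saved stack pointer                   */
    int         priority;     /* scheduling information                */
    int         period;       /* period for cyclic tasks (ticks)       */
    int         deadline;     /* relative deadline (ticks)             */
    int         state;        /* NEW / READY / RUNNING / WAITING / ... */
    struct tcb *parent;       /* parent task, if any                   */
} tcb_t;
```

On a context switch the kernel saves `pc`, `regs` and `sp` into the outgoing task's TCB and restores them from the incoming task's TCB.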


Task Scheduling and Dispatch:


The basic purpose of task scheduling and dispatch in a real-time multi-tasking OS is to
ensure that each task gets access to the CPU and other system resources in a manner that is necessary
for successful and timely completion of all computation in the system. Secondly, it is desired that this
is done efficiently from the point of view of resource utilisation as well as with correct
synchronisation and protection of data and code for individual tasks against incorrect interference.
Various task scheduling and dispatch models are in use to achieve the above. The appropriateness of a particular model depends on the application features. The main task scheduling and dispatch models are described below.

Cyclic Executive:
This is the simplest of the models in which all the computing tasks are required to be run
periodically in cycles. The computing sequence is static and therefore, a monolithic program called
the Cyclic Executive runs the tasks in the required order in a loop. At times the execution sequence
may also be determined in accordance with models such as an FSM. The execution sequences and
the times allocated to each task are determined a priori and are not expected to vary significantly at
run time. Such systems are often employed for controllers of industrial machines such as
Programmable Logic Controllers that perform fixed set of tasks in fixed orders defined by the user.
These systems have the advantage of being simple to develop and configure as well as faster than
some of the more complex systems since task context switching is faster and less frequent. They are
however suitable for static computing environments only and are custom developed using low level
programming languages for specific hardware platforms.
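A cyclic executive can be sketched as a fixed table of task functions run in order, forever. The task names and the single-cycle driver below are illustrative assumptions; a real executive would also wait for a timer tick at the start of each cycle:

```c
#include <assert.h>

typedef void (*task_fn)(void);

static int samples;                            /* work done by task 1  */
static void read_sensors(void)    { ++samples; }
static void compute_control(void) { /* control law would go here */ }
static void update_outputs(void)  { /* write actuators here      */ }

/* The static, a-priori execution sequence of the cyclic executive. */
static task_fn schedule[] = { read_sensors, compute_control, update_outputs };
#define NTASKS (sizeof schedule / sizeof schedule[0])

/* One pass of the loop; the real executive runs this forever. */
void run_one_cycle(void)
{
    for (unsigned i = 0; i < NTASKS; ++i)
        schedule[i]();
}
```

Because the sequence is fixed at build time, there is no scheduler state to maintain and context switching reduces to an ordinary function call.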

Coroutines:
In this model of cooperative multitasking the set of tasks are distributed over a set of
processes, called coroutines. These tasks mutually exchange program control rather than relinquish it
to the operating system. Thus each task transfers control to one of the others after saving its data and
control state. Note that the responsibility of scheduling, that is deciding which task gets control of the
processor at a given time is left to the programmer, rather than the operating system. The task which
is transferring control is often left in the waiting or blocked state. This model has now been adapted
to a different form in Multithreading.
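The explicit transfer of control can be approximated in plain C by letting each task return the index of the task it hands control to. This is a simplification of true coroutines, which also save and restore their own execution context; the task names are illustrative:

```c
#include <assert.h>

static int log_count;

/* Each "coroutine" does a slice of work, then names its successor.
   A negative return value ends the run.  Note that scheduling is the
   programmer's responsibility, not the operating system's. */
static int task_a(void) { ++log_count; return 1; }   /* hand over to B */
static int task_b(void) { ++log_count; return -1; }  /* stop           */

typedef int (*coroutine_t)(void);
static coroutine_t tasks[] = { task_a, task_b };

void run_coroutines(int first)
{
    int next = first;
    while (next >= 0)
        next = tasks[next]();   /* explicit, cooperative transfer */
}
```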

Interrupts:
In many cases task scheduling and dispatch needs to be made responsive to external signals or timing
signals. In other cases running tasks may not be assumed to be transferring control to the dispatcher
on their own. In such cases the facility of interrupts provided on all processors can be used for task
switching. The various tasks in the system can be switched either by hardware or software interrupts.
The interrupt handling routine would then transfer control to the task dispatcher. Interrupts through
hardware may occur periodically, such as from a clock, or asynchronously by external devices.
Interrupts can also occur by execution of special software instructions written in the code, or due to


processing exceptions such as divide-by-zero errors. Interrupt-only systems are a special case of foreground/background systems, described below, which are widely used in embedded systems.

Foreground / Background:
Typically, embedded and real-time applications involve a set of tasks, some of which are
periodic and must be finished within deadlines. Others may be sporadic and may not have such
deadlines associated with them. Foreground/background systems are common and simple solutions
for such applications. Such systems involve a set of interrupt driven or real-time tasks called the
foreground and a collection of non-interrupt driven tasks called the background. The foreground
tasks run according to some real-time priority scheduling policy. The background tasks are
preemptable by any foreground task.
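A polled approximation of a foreground/background structure is sketched below. In a real system the foreground work runs inside the ISR itself (or in a task it triggers), genuinely preempting the background; here an "ISR" is simulated by a direct call and a flag, and all names are illustrative:

```c
#include <assert.h>

static volatile int fg_pending;   /* set by the interrupt            */
static int fg_runs, bg_runs;

void timer_isr(void)      { fg_pending = 1; }   /* foreground trigger */
static void fg_task(void) { ++fg_runs; }        /* real-time work     */
static void bg_task(void) { ++bg_runs; }        /* non-critical work  */

/* One iteration of the background loop: foreground work is always
   served before any background work is done. */
void background_step(void)
{
    if (fg_pending) {
        fg_pending = 0;
        fg_task();
    } else {
        bg_task();
    }
}
```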

Priority Levels in a typical Real-Time Operating System:

To be able to ensure that response to every event is generated by executing tasks within
specific deadlines, it is essential to allocate the CPU and other computational resources to various
tasks in accordance with their priorities. The priority of a process may be fixed or static. It may be
calculated based on their computing time requirements, frequency of invocations or deadlines.
However, it has been established that policies that allow task priorities to be adjusted dynamically are
more efficient. The priority will depend on how quickly a task will have to respond to a particular
event. An event may be some activity of the process or may be the elapsing of a specified amount of
time. The set of tasks can often be categorised into three broad levels of priority as shown below.
Tasks belonging to the same category are typically scheduled and dispatched using similar
mechanisms.

Interrupt Level: Tasks at this level require very fast response measured in milliseconds and occur
very frequently. There is no scheduler at this level, since immediate execution follows an interrupt.
Examples of this task include the real-time clock task. Obviously, to meet the deadlines of the other
tasks in the system, the context switching and processing time requirements for these tasks are to be
kept at the bare minimum level and must be highly predictable to make the whole system behaviour
predictable. To ensure the former, often, all Interrupt Service Routines (ISRs) run in special common
and fixed contexts, such as common stacks. To ensure the latter the interrupt service routines are
sometimes divided into two parts. The first part executes immediately, spawns a new task for the
remaining processing and returns to the kernel. This new task gets executed as per the scheduling policy
in due course. The price to pay for fast execution of ISRs is often a set of constraints on the
programming of ISRs, which lacks many of the flexibilities available when programming ordinary tasks.
Priorities among tasks at this level are generally ensured through the hardware and software interrupt
priority management system of the processor. There may also exist interrupt controllers that mask
interrupts of lower priority in the presence of a higher-priority one. The system clock and watchdog
timers associated with them are tasks that execute at interrupt level. The dispatcher for the next level
of priority is also a task at this level. The frequency of execution of this task depends on the
frequency or period of the highest priority clock-level task.
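The two-part ISR scheme described above can be sketched as follows. The queue, the function names and the "processing" are illustrative assumptions: the first part runs at interrupt level and only records what happened, while the second part runs later under the scheduler:

```c
#include <assert.h>

#define QLEN 8
static int deferred[QLEN];   /* queue of pending "bottom halves" */
static int head, tail;
static int work_done;

/* First part: short and deterministic, runs at interrupt level. */
void isr_top_half(int device_id)
{
    deferred[tail] = device_id;        /* just record the event */
    tail = (tail + 1) % QLEN;
}

/* Second part: spawned as a task, runs as per the scheduling policy. */
void bottom_half_task(void)
{
    while (head != tail) {
        int dev = deferred[head];
        head = (head + 1) % QLEN;
        work_done += dev;              /* the lengthy processing */
    }
}
```

Keeping the top half minimal bounds the time interrupts are masked, which is exactly the predictability requirement stated earlier.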

Hard Real-Time Level: At this level are the tasks which are periodic, such as the sampling and
control tasks, and tasks which require accurate timing. The scheduling of these tasks is carried out


based on the real-time system clock. A system clock device consists of a counter, a timer queue and
an interrupt handler. The content of the counter gives the current time, while the timer queue holds pending timers
associated with the clock device. Each system clock interrupt is known as a tick and represents the
smallest time interval known to the system. Since most programs in real-time applications make use
of time, virtual software clocks and delays can also be created by tasks and associated with the
system clock device. The system clock device raises interrupts periodically and the kernel updates
the software clock according to current time. Also every few clock cycles a new task gets dispatched
according to the scheduling policy adopted. The lowest priority task at this level is the base level
scheduler. Thus if at a clock level interrupt, the clock level scheduler finds no request for higher
priority clock level tasks pending, the base level scheduler is dispatched.
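The tick processing described above can be sketched as follows, with a fixed-size table of software timers (the table and names are illustrative): on each tick, active timers are decremented, and a task whose timer expires would be made ready.

```c
#include <assert.h>

#define NTIMERS 4
static int timers[NTIMERS];    /* ticks remaining; 0 = inactive */
static int expired[NTIMERS];   /* set when a timer fires        */

/* Called on every system clock interrupt (one tick). */
void clock_tick(void)
{
    for (int i = 0; i < NTIMERS; ++i)
        if (timers[i] > 0 && --timers[i] == 0)
            expired[i] = 1;    /* task i would be made ready here */
}
```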

Soft/Non-Real Time Level: Tasks at this level are soft or non-real-time in that they either have no
deadlines to meet or are allowed a wide margin of error in their timing. These are therefore taken to
be of low priority and executed only when no request for a higher priority task is pending. Tasks at
this level may be allocated priorities or may all run at a single priority level - that of the base level scheduler - in a round-robin fashion. These tasks are typically initiated on demand rather than at some predetermined time interval. The demand may be a user input from a keypad, some process event such as the arrival of a packet, or some particular requirement of the data being processed. Note that,
since the base level scheduler is the lowest priority clock level task, the priorities of all base level
tasks are lower than those at clock levels.
Each of the above priority levels can support multiple task priorities. For an RTOS to be compliant with the RT-POSIX standard, the number of priority levels supported must be at least 32. Among commercial RTOSs, the number of priority levels supported ranges from as low as 8 for Windows CE to 256 for VxWorks. For scheduling of equal-priority threads, a FIFO or round-robin policy is generally applied. Thread priorities may be changed at run-time.

Task Scheduling Management:


Advanced multi-tasking RTOSs mostly use preemptive priority scheduling. These support more than one scheduling policy and often allow the user to set parameters associated with such policies, such as the time-slice in round-robin scheduling, where each task in the task queue is scheduled for up to a maximum time, set by the time-slice parameter, in a round-robin manner. Task priorities can also be set. Hundreds of priority levels are commonly available for scheduling. Specific tasks can also be indicated to be non-preemptive.

Types of multitasking:

– Cooperative multitasking
o Multiple tasks execute by voluntarily ceding control to other tasks.
o One defines a series of tasks and each task gets its own subroutine stack.
o Idle task calls an idle routine
o Another architecture used is an event queue, removing events and calling
subroutines based on their values.
o Pros & cons: similar to a control loop, but a more modular approach. However, because
task timing is non-deterministic, it is poorly suited to systems with strict real-time deadlines.


– Pre-emptive multitasking (Let OS Be the Boss!)


 Used in most embedded systems.
 Time-slices are allotted to processes.
 A process is context-switched with the next process in the scheduling queue for any of
the following reasons:
 the process has consumed its time slice, and the system clock interrupt
pre-empts the process
 the process goes into a wait state (often due to planned sleeping, waiting
for an I/O event to happen, or mutual exclusion)
 a higher-priority process becomes ready for execution, which causes a
pre-emption
 the process gives away its time slice voluntarily
 the process terminates itself
Multiprocessing:
– Each task has a distinct code and data space.
– Boundaries are enforced by hardware memory management.
– Increased overhead due to memory translation and context switching.
– Essentially this is desktop computing done on embedded systems, but it is necessary for
applications with high reliability requirements.

Priority based Scheduling:


The operating system’s fundamental job is to allocate resources in the computing system among
programs that request them. Naturally, the CPU is the scarcest resource, so scheduling the CPU
is the operating system’s most important job.
A common scheduling algorithm in general-purpose operating systems is round-robin. All the
processes are kept on a list and scheduled one after the other. This is generally combined with
preemption so that one process does not grab all the CPU time. Round-robin scheduling provides
a form of fairness in that all processes get a chance to execute. However, it does not guarantee
the completion time of any task; as the number of processes increases, the response time of all
the processes increases. Real-time systems, in contrast, require their notion of fairness to include
timeliness and satisfaction of deadlines.
A common way to choose the next executing process in an RTOS is based on process priorities.
Each process is assigned a priority, an integer-valued number. The next process to be chosen to
execute is the process in the set of ready processes that has the highest-valued priority.
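The selection rule described above can be sketched in C. The process structure and the convention that a larger number means higher priority are assumptions for illustration; a real RTOS typically keeps a ready queue per priority level rather than scanning a flat array.

```c
#include <stddef.h>

/* Hypothetical process descriptor for illustration. */
struct process {
    int priority; /* assumption: larger value = higher priority */
    int ready;    /* nonzero if the process is runnable */
};

/* Return the index of the highest-priority ready process, or -1 if none. */
int pick_next(const struct process *p, size_t n)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (p[i].ready && (best < 0 || p[i].priority > p[best].priority))
            best = (int)i;
    }
    return best;
}
```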

Rate-monotonic scheduling:
Rate-monotonic scheduling (RMS), introduced by Liu and Layland, was one of the first
scheduling policies developed for real-time systems and is still very widely used. RMS is a static
scheduling policy because it assigns fixed priorities to processes. It turns out that these fixed
priorities are sufficient to efficiently schedule the processes in many situations. The theory
underlying RMS is known as rate-monotonic analysis (RMA). This theory, as summarized
below, uses a relatively simple model of the system.
• All processes run periodically on a single CPU.
• Context switching time is ignored.
• There are no data dependencies between processes.
• The execution time for a process is constant.


• All deadlines are at the ends of their periods.
• The highest-priority ready process is always selected for execution.
The major result of rate-monotonic analysis is that a relatively simple scheduling policy is
optimal. Priorities are assigned by rank order of period, with the process with the shortest period
being assigned the highest priority. This fixed-priority scheduling policy is the optimum
assignment of static priorities to processes, in that it provides the highest CPU utilization while
ensuring that all processes meet their deadlines.
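This priority assignment amounts to a sort by period. A small sketch; the struct and the convention that priority 0 is highest are illustrative assumptions.

```c
#include <stdlib.h>

struct rms_task {
    double period;
    int priority; /* assumption: 0 = highest priority */
};

static int by_period(const void *a, const void *b)
{
    double pa = ((const struct rms_task *)a)->period;
    double pb = ((const struct rms_task *)b)->period;
    return (pa > pb) - (pa < pb);
}

/* Assign rate-monotonic priorities: the shortest period gets the highest priority. */
void rms_assign(struct rms_task *t, int n)
{
    qsort(t, (size_t)n, sizeof *t, by_period);
    for (int i = 0; i < n; i++)
        t[i].priority = i;
}
```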
Example:


The optimality of RMA priority assignment can be shown using critical-instant analysis. We define the response time
of a process as the time at which the process finishes. The critical instant for a process is defined
as the instant during execution at which the task has the largest response time. It is easy to prove
that the critical instant for any process P, under the RMA model, occurs when it is ready and all
higher-priority processes are also ready—if we change any higher-priority process to waiting,
then P’s response time can only go down. We can use critical-instant analysis to determine
whether there is any feasible schedule for the system. In the case of the second set of execution
times in the earlier example, there was no feasible schedule. Critical-instant analysis also implies
that priorities should be assigned in order of periods. Let the periods and computation times of
two processes P1 and P2 be τ1, τ2 and T1, T2, with τ1 < τ2.
In the first case, let P1 have the higher priority. In the worst case we then execute P2 once during
its period and as many iterations of P1 as fit in the same interval. Because there are ⌈τ2/τ1⌉
iterations of P1 during a single period of P2, the required constraint on CPU time, ignoring
context-switching overhead, is

⌈τ2/τ1⌉ T1 + T2 ≤ τ2

If, on the other hand, we give higher priority to P2, then critical-instant analysis tells us that we
must execute all of P2 and all of P1 in one of P1's periods in the worst case:

T1 + T2 ≤ τ1

There are cases where the first relationship can be satisfied and the second cannot, but there are
no cases where the second relationship can be satisfied and the first cannot.

Although RMS is the optimal static-priority schedule, it does not allow the system to use 100%
of the available CPU cycles. The total CPU utilization for a set of n tasks, each with computation
time Ti and period τi, is

U = Σ (i = 1 to n) Ti / τi

It is possible to show that for a set of two tasks under RMS scheduling, the CPU utilization U
will be no greater than 2(2^(1/2) − 1) ≈ 0.83. In other words, the CPU will be idle at least 17% of
the time. This idle time is due to the fact that priorities are assigned statically.


Earliest-deadline-first scheduling:
Earliest deadline first (EDF) is another well-known scheduling policy used for real time systems.
It is a dynamic priority scheme—it changes process priorities during execution based on
initiation times. As a result, it can achieve higher CPU utilizations than RMS.
The EDF policy is also very simple: It assigns priorities in order of deadline. The highest-priority
process is the one whose deadline is nearest in time, and the lowest priority process is the one
whose deadline is farthest away. Clearly, priorities must be recalculated at every completion of a
process. However, the final step of the OS during the scheduling procedure is the same as for
RMS—the highest-priority ready process is chosen for execution.
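A minimal sketch of the EDF selection step, under the assumption that each task carries its current absolute deadline:

```c
struct edf_task {
    long deadline; /* absolute deadline, e.g., in timer ticks */
    int ready;
};

/* Return the index of the ready task with the nearest deadline, or -1 if none. */
int edf_pick(const struct edf_task *t, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (t[i].ready && (best < 0 || t[i].deadline < t[best].deadline))
            best = i;
    }
    return best;
}
```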


A feasible schedule exists if the CPU utilization is less than or equal to 1. When an EDF system
is overloaded and about to miss a deadline, it will run at 100% capacity for a time before the
deadline is actually missed. Distinguishing between a system that is running just at capacity and
one that is about to miss a deadline is difficult, making it risky to operate at very high utilizations. The
implementation of EDF is more complex than the RMS code. The major problem is keeping the
processes sorted by time to deadline—because the times to deadlines for the processes change
during execution, we cannot presort the processes into an array, as we could for RMS.
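The utilization test mentioned above is easy to sketch; T holds the computation times and tau the periods (names chosen for this illustration only):

```c
/* Total CPU utilization U = sum(Ti / taui); under the ideal model, EDF can
   schedule the task set on one CPU if and only if U <= 1. */
double edf_utilization(const double *T, const double *tau, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += T[i] / tau[i];
    return u;
}
```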


RMS versus EDF


Which scheduling policy is better: RMS or EDF? That depends on our criteria. EDF can extract
higher utilization out of the CPU, but it may be difficult to diagnose the possibility of an
imminent overload. Because the scheduler does take some overhead to make scheduling
decisions, a factor that is ignored in the schedulability analysis of both EDF and RMS, running a
scheduler at very high utilizations is somewhat problematic. RMS achieves lower CPU
utilization but is easier to ensure that all deadlines will be satisfied. In some applications, it may
be acceptable for some processes to occasionally miss deadlines. For example, a set-top box for
video decoding is not a safety-critical application, and the occasional display artifacts caused by
missing deadlines may be acceptable in some markets.
What if our set of processes is unschedulable and we need to guarantee that they complete their
deadlines? There are several possible ways to solve this problem:
• Get a faster CPU. That will reduce execution times without changing the periods, giving us
lower utilization. This will require us to redesign the hardware, but this is often feasible because
we are rarely using the fastest CPU available.
• Redesign the processes to take less execution time. This requires knowledge of the code and
may or may not be possible.
• Rewrite the specification to change the deadlines. This is unlikely to be feasible, but may be in
a few cases where some of the deadlines were initially made tighter than necessary.

Interprocess Communication Mechanisms:


Processes often need to communicate with each other. Interprocess communication mechanisms
are provided by the operating system as part of the process abstraction. In general, a process can
send a communication in one of two ways: blocking or nonblocking. After sending a blocking
communication, the process goes into the waiting state until it receives a response. Nonblocking
communication allows the process to continue execution after sending the communication. Both
types of communication are useful.
There are two major styles of interprocess communication: shared memory and message passing.
The two are logically equivalent—given one, we can build an interface that implements the
other. However, some programs may be easier to write using one rather than the other. In
addition, the hardware platform may make one easier to implement or more efficient than the
other.

Shared memory communication


Figure below illustrates how shared memory communication works in a bus-based system. Two
components, such as a CPU and an I/O device, communicate through a shared memory location.
The software on the CPU has been designed to know the address of the shared location; the
shared location has also been loaded into the proper register of the I/O device. If, as in the figure,
the CPU wants to send data to the device, it writes to the shared location. The I/O device then
reads the data from that location. The read and write operations are standard and can be
encapsulated in a procedural interface.
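That procedural interface might look like the following sketch. The fixed device address is platform-specific, so the location is passed in at initialization here; volatile keeps the compiler from caching a value the other side may change. A real design would also need whatever memory barriers or atomic operations the platform requires.

```c
/* Pointer to the shared location; 'volatile' because the other party
   (CPU or I/O device) can change the value at any time. */
static volatile unsigned *shared_loc;

void shm_init(volatile unsigned *loc) { shared_loc = loc; }

/* Encapsulated read and write operations on the shared location. */
void shm_write(unsigned v) { *shared_loc = v; }
unsigned shm_read(void)    { return *shared_loc; }
```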


Message passing
Message passing communication complements the shared memory model. As shown in Figure
below, each communicating entity has its own message send/receive unit. The message is not
stored on the communications link, but rather at the senders/receivers at the endpoints. In
contrast, shared memory communication can be seen as a memory block used as a
communication device, in which all the data are stored in the communication link/memory.


Applications in which units operate relatively autonomously are natural candidates for message
passing communication. For example, a home control system has one microcontroller per
household device—lamp, thermostat, faucet, appliance, and so on.
The devices must communicate relatively infrequently; furthermore, their physical separation is
large enough that we would not naturally think of them as sharing a central pool of memory.
Passing communication packets among the devices is a natural way to describe coordination
between these devices. Message passing is the natural implementation of communication in
many 8-bit microcontrollers that do not normally operate with external memory. A queue is a
common form of message passing. The queue uses a FIFO discipline and holds records that
represent messages.
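A fixed-size FIFO message queue of the kind described can be sketched as a ring buffer (the message record is simplified to an int for illustration):

```c
#define QSIZE 8

struct msg_queue {
    int buf[QSIZE];
    int head, tail, count;
};

/* Append a message; returns -1 if the queue is full. */
int mq_send(struct msg_queue *q, int msg)
{
    if (q->count == QSIZE)
        return -1;
    q->buf[q->tail] = msg;
    q->tail = (q->tail + 1) % QSIZE;
    q->count++;
    return 0;
}

/* Remove the oldest message; returns -1 if the queue is empty. */
int mq_receive(struct msg_queue *q, int *msg)
{
    if (q->count == 0)
        return -1;
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->count--;
    return 0;
}
```

In a real RTOS the send and receive operations would be protected against concurrent access, and receive would typically block the caller until a message arrives.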

Signals:
Another form of interprocess communication commonly used in Unix is the signal. A signal is
simple because it does not pass data beyond the existence of the signal itself. A signal is
analogous to an interrupt, but it is entirely a software creation. A signal is generated by a process
and transmitted to another process by the operating system. A UML signal is actually a
generalization of the Unix signal. While a Unix signal carries no parameters other than a
condition code, a UML signal is an object. As such, it can carry parameters as object attributes.

Mailboxes
The mailbox is a simple mechanism for asynchronous communication. Some architectures define
mailbox registers. These mailboxes have a fixed number of bits and can be used for small
messages. We can also implement a mailbox using P() and V() using main memory for the
mailbox storage. A very simple version of a mailbox, one that holds only one message at a time,
illustrates some important principles in interprocess communication.
In order for the mailbox to be most useful, we want it to contain two items: the message itself
and a mail ready flag. The flag is true when a message has been put into the mailbox and cleared
when the message is removed.
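The one-message mailbox described above can be sketched as follows. In a real implementation the flag test and update would be guarded by P() and V() on a semaphore so that a post and a pickup cannot interleave; that protection is omitted here for clarity.

```c
struct mailbox {
    int message;
    int mail_ready; /* true when a message is waiting */
};

/* Post a message; fails if an unread message would be overwritten. */
int mbx_post(struct mailbox *mb, int msg)
{
    if (mb->mail_ready)
        return -1;
    mb->message = msg;
    mb->mail_ready = 1;
    return 0;
}

/* Pick up the message and clear the flag; fails if the mailbox is empty. */
int mbx_pickup(struct mailbox *mb, int *msg)
{
    if (!mb->mail_ready)
        return -1;
    *msg = mb->message;
    mb->mail_ready = 0;
    return 0;
}
```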


Evaluating operating system performance:


The scheduling policy does not tell us all that we would like to know about the performance of a
real system running processes. Our analysis of scheduling policies makes some simplifying
assumptions:
• We have assumed that context switches require zero time. Although it is often reasonable to
neglect context switch time when it is much smaller than the process execution time, context
switching can add significant delay in some cases.
• We have largely ignored interrupts. The latency from when an interrupt is requested to when
the device’s service is complete is a critical parameter of realtime performance.
• We have assumed that we know the execution time of the processes. But the program execution
time is not a single number; instead, it is bounded by worst-case and best-case execution times.
• We probably determined worst-case or best-case times for the processes in isolation. But, in
fact, they interact with each other in the cache. Cache conflicts among processes can drastically
degrade process execution time.
We need to examine the validity of all these assumptions.
Context switching time depends on several factors:
• the amount of CPU context that must be saved;
• scheduler execution time.
The execution time of the scheduler can of course be affected by coding practices. However, the
choice of scheduling policy also affects the time required by the schedule to determine the next
process to run. We can classify scheduling complexity as a function of the number of tasks to be
scheduled. For example, round-robin scheduling, although it does not guarantee deadline
satisfaction, is a constant-time algorithm whose execution time is independent of the number of
processes. Round-robin is therefore often referred to as an O(1) scheduling algorithm. Earliest-
deadline-first scheduling, in contrast, requires sorting deadlines, which is an O(n log n) activity.
Interrupt latency for an RTOS is the duration of time from the assertion of a device interrupt to
the completion of the device’s requested operation. In contrast, when we discussed CPU interrupt
latency, we were concerned only with the time the hardware took to start execution of the
interrupt handler. Interrupt latency is critical because data may be lost when an interrupt is not
serviced in a timely fashion. Figure below shows a sequence diagram for RTOS interrupt latency.


A task is interrupted by a device. The interrupt goes to the kernel, which may need to finish a
protected operation. Once the kernel can process the interrupt, it calls the interrupt service
routine (ISR), which performs the required operations on the device. Once the ISR is done, the
task can resume execution. Several factors in both hardware and software affect interrupt
latency:
• the processor interrupt latency;
• the execution time of the interrupt handler;
• delays due to RTOS scheduling.
The processor interrupt latency was chosen when the hardware platform was selected; this is
often not the dominant factor in overall latency. The execution time of the handler depends on
the device operation required, assuming that the interrupt handler code is not poorly designed.
This leaves RTOS scheduling delays, which can be the dominant component of RTOS interrupt
latency, particularly if the operating system was not designed for low interrupt latency.

The RTOS can delay the execution of an interrupt handler in two ways. First, critical sections in
the kernel will prevent the RTOS from taking interrupts. A critical section may not be
interrupted, so the semaphore code must turn off interrupts. Some operating systems have very
long critical sections that disable interrupt handling for very long periods. Linux is an example of
this phenomenon—Linux was not originally designed for real-time operation and interrupt
latency was not a major concern. Longer critical sections can improve performance for some
types of workloads because they reduce the number of context switches. However, long critical
sections cause major problems for interrupts. Figure below shows the effect of critical sections
on interrupt latency.


If a device interrupts during a critical section, that critical section must finish before the kernel
can handle the interrupt. The longer the critical section, the greater the potential delay. Critical
sections are one important source of scheduling jitter because a device may interrupt at different
points in the execution of processes and hit critical sections at different points.
Second, a higher-priority interrupt may delay a lower-priority interrupt. A hardware interrupt
handler runs as part of the kernel, not as a user thread. The priorities for interrupts are determined
by hardware, not the RTOS. Furthermore, any interrupt handler preempts all user threads because
interrupts are part of the CPU’s fundamental operation. We can reduce the effects of hardware
preemption by dividing interrupt handling into two different pieces of code. First, a very simple
piece of code usually called an interrupt service handler (ISH) performs the minimal operations
required to respond to the device. The rest of the required processing, which may include
updating user buffers or other more complex operations, is performed by a user-mode thread
known as an interrupt service routine (ISR). Because the ISR runs as a thread, the RTOS can use
its standard policies to ensure that all the tasks in the system receive their required resources.
Some RTOSs provide simulators or other tools that allow you to view the operation of the
processes in the system. These tools will show not only abstract events such as processes but also
context switching time, interrupt response time, and other overheads. This sort of view can be
helpful in both functional and performance debugging.
Windows CE provides several performance analysis tools: ILTiming, an instrumentation routine
in the kernel that measures both interrupt service routine and interrupt service thread latency;
OS Bench, which measures the timing of operating system tasks such as critical section access,
signals, and so on; and Kernel Tracker, which provides a graphical user interface for RTOS events.
Many real-time systems have been designed based on the assumption that there is no cache
present, even though one actually exists. This grossly conservative assumption is made because
the system architects lack tools that permit them to analyze the effect of caching. Because they
do not know where caching will cause problems, they are forced to retreat to the simplifying
assumption that there is no cache. The result is extremely overdesigned hardware, which has
much more computational power than is necessary. However, just as experience tells us that a
well designed cache provides significant performance benefits for a single program, a properly
sized cache can allow a microprocessor to run a set of processes much more quickly. By
analyzing the effects of the cache, we can make much better use of the available hardware.
Li and Wolf developed a model for estimating the performance of multiple processes sharing a
cache. In the model, some processes can be given reservations in the cache, such that only a
particular process can inhabit a reserved section of the cache; other processes are left to share the
cache. We generally want to use cache partitions only for performance-critical processes because
cache reservations are wasteful of limited cache space. Performance is estimated by constructing
a schedule, taking into account not just execution time of the processes but also the state of the
cache. Each process in the shared section of the cache is modeled by a binary variable: 1 if
present in the cache and 0 if not. Each process is also characterized by three total execution
times: assuming no caching, with typical caching, and with all code always resident in the cache.
The always-resident time is unrealistically optimistic, but it can be used to find a lower bound on
the required schedule time. During construction of the schedule, we can look at the current cache
state to see whether the no-cache or typical-caching execution time should be used at this point
in the schedule. We can also update the cache state if the cache is needed for another process.
Although this model is simple, it provides much more realistic performance estimates than


assuming the cache either is nonexistent or is perfect. Example below shows how cache
management can improve CPU utilization.


Power optimization strategies for processes:


The RTOS and system architecture can use static and dynamic power management mechanisms
to help manage the system’s power consumption. A power management policy is a strategy for
determining when to perform certain power management operations. A power management
policy in general examines the state of the system to determine when to take actions. However,
the overall strategy embodied in the policy should be designed based on the characteristics of the
static and dynamic power management mechanisms.
Going into a low-power mode takes time; generally, the more that is shut off, the longer the delay
incurred during restart. Because power-down and power-up are not free, modes should be
changed carefully. Determining when to switch into and out of a power-up mode requires an
analysis of the overall system activity.
• Avoiding a power-down mode can cost unnecessary power.
• Powering down too soon can cause severe performance penalties.
Reentering run mode typically costs a considerable amount of time. A straightforward method is
to power up the system when a request is received. This works as long as the delay in handling
the request is acceptable. A more sophisticated technique is predictive shutdown. The goal is to
predict when the next request will be made and to start the system just before that time, saving
the requestor the start-up time. In general, predictive shutdown techniques are probabilistic—
they make guesses about activity patterns based on a probabilistic model of expected behavior.
Because they rely on statistics, they may not always correctly guess the time of the next activity.
This can cause two types of problems:
• The requestor may have to wait for an activity period. In the worst case, the requestor may not
make a deadline due to the delay incurred by system start-up.
• The system may restart itself when no activity is imminent. As a result, the system will waste
power.
Clearly, the choice of a good probabilistic model of service requests is important. The policy
mechanism should also not be too complex, because the power it consumes to make decisions is
part of the total system power budget. Several predictive techniques are possible. A very simple
technique is to use fixed times. For instance, if the system does not receive inputs during an
interval of length Ton, it shuts down; a powered-down system waits for a period Toff before
returning to the power-on mode. The choice of Toff and Ton must be determined by
experimentation.
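The fixed-time policy can be sketched as a small state machine driven by a periodic tick; the tick-based interface and the exact wake rule are illustrative assumptions.

```c
enum pm_state { PM_RUN, PM_SLEEP };

struct pm {
    enum pm_state state;
    long idle_ticks;  /* ticks since the last request while running */
    long sleep_ticks; /* ticks spent powered down */
    long t_on, t_off; /* Ton and Toff, tuned by experiment */
};

/* Advance the policy by one tick; 'request' is nonzero if input arrived. */
void pm_tick(struct pm *p, int request)
{
    if (p->state == PM_RUN) {
        p->idle_ticks = request ? 0 : p->idle_ticks + 1;
        if (p->idle_ticks >= p->t_on) { /* no input for Ton: shut down */
            p->state = PM_SLEEP;
            p->sleep_ticks = 0;
        }
    } else {
        p->sleep_ticks++;
        /* stay down for at least Toff before honoring a wake request */
        if (request && p->sleep_ticks >= p->t_off) {
            p->state = PM_RUN;
            p->idle_ticks = 0;
        }
    }
}
```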


Example real-time operating systems:


POSIX:
POSIX is a version of the Unix operating system created by a standards organization. POSIX-
compliant operating systems are source-code compatible—an application can be compiled and
run without modification on a new POSIX platform assuming that the application uses only
POSIX-standard functions. While Unix was not originally designed as a real-time operating
system, POSIX has been extended to support real-time requirements. Many RTOSs are POSIX-
compliant, and POSIX serves as a good model for basic RTOS techniques. The POSIX standard has
many options; particular implementations do not have to support all options. The existence of
features is determined by C preprocessor variables; for example, the FOO option would be
available if the _POSIX_FOO preprocessor variable were defined. All these options are defined
in the system include file unistd.h.
The Linux operating system has become increasingly popular as a platform for embedded
computing. Linux is a POSIX-compliant operating system that is available as open source.
However, Linux was not originally designed for real-time operation. Some versions of Linux
may exhibit long interrupt latencies, primarily due to large critical sections in the kernel that
delay interrupt processing. Two methods have been proposed to improve interrupt latency. A
dual-kernel approach uses a specialized kernel, the co-kernel, for real-time processes and the
standard kernel for non-real-time processes. All interrupts must go through the co-kernel to
ensure that real-time operations are predictable. The other method is a kernel patch that provides
priority inheritance to reduce the latency of many kernel operations. These features are enabled
using the PREEMPT_RT mode.
Creating a process in POSIX requires somewhat cumbersome code although the underlying
principle is elegant. In POSIX, a new process is created by making a copy of an existing process.
The copying process creates two different processes both running the same code. The
complication comes in ensuring that one process runs the code intended for the new process
while the other process continues the work of the old process.
A process makes a copy of itself by calling the fork() function. That function causes the
operating system to create a new process (the child process) which is a nearly exact copy of the
process that called fork() (the parent process). They both share the same code and the same data
values with one exception, the return value of fork(): the parent process is returned the process
ID number of the child process, while the child process gets a return value of 0. We can therefore
test the return value of fork() to determine which process is the child:
childid = fork();
if (childid == 0) { /* must be the child */
/* do child process here */
}
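Completing the pattern, the parent typically waits for the child and collects its exit status. A minimal POSIX sketch; the status value 42 is arbitrary:

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child, let it do its work and exit, then collect its status. */
int run_child_task(void)
{
    pid_t childid = fork();
    if (childid == 0) {
        /* child process: do the child's work here */
        exit(42);                     /* report a status code to the parent */
    } else if (childid > 0) {
        int status;
        waitpid(childid, &status, 0); /* parent: wait for the child to finish */
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }
    return -1; /* fork() failed */
}
```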
POSIX does not implement lightweight processes. Each POSIX process runs in its own address
space and cannot directly access the data or code of other processes. POSIX supports real-time
scheduling in the _POSIX_PRIORITY_SCHEDULING resource. Not all processes have to run
under the same scheduling policy. The sched_setscheduler() function is used to set a
process's scheduling policy and other parameters:
#include <sched.h>
int i, my_process_id;
struct sched_param my_sched_params;


i = sched_setscheduler(my_process_id, SCHED_FIFO, &my_sched_params);
This call tells POSIX to use the SCHED_FIFO policy for this process, along with some other
scheduling parameters.
POSIX supports rate-monotonic scheduling in the SCHED_FIFO scheduling policy. The name of
this policy is unfortunately misleading. It is a strict priority-based scheduling scheme in which a
process runs until it is preempted or terminates. The term FIFO simply refers to the fact that,
within a priority, processes run in first-come first-served order. Two other useful functions allow a
process to determine the minimum and maximum priority values in the system:
minval = sched_get_priority_min(SCHED_RR);
maxval = sched_get_priority_max(SCHED_RR);

Windows CE:
Windows CE supports devices such as smartphones, electronic instruments, etc. Windows CE is
designed to run on multiple hardware platforms and instruction set architectures. Some aspects of
Windows CE, such as details of the interrupt structure, are determined by the hardware
architecture and not by the operating system itself. Figure below shows a layer diagram for
Windows CE.

Applications run under the shell and its user interface. The Win32 APIs manage access to the
operating system. A variety of services and drivers provide the core functionality. The OEM
Adaptation Layer (OAL) provides an interface to the hardware in much the same way that a HAL
does in other software architectures. The architecture of the OAL is shown in the figure below:


The hardware provides certain primitives such as a real-time clock and an external connection.
The OAL itself provides services such as a real-time clock, power management, interrupts, and a
debugging interface. A Board Support Package (BSP) for a particular hardware platform includes
the OAL and drivers.
Windows CE provides support for virtual memory with a flat 32-bit virtual address space. A
virtual address can be statically mapped into main memory for key kernel- mode code; an
address can also be dynamically mapped, which is used for all user-mode and some kernel-mode
code. Flash as well as magnetic disk can be used as a backing store.
WinCE supports two kernel-level units of execution: the thread and the driver. Threads are
defined by executable files while drivers are defined by dynamically-linked libraries (DLLs). A
process can run multiple threads. All the threads of a process share the same execution
environment. Threads in different processes run in different execution environments. Threads are
scheduled directly by the operating system. Threads may be launched by a process or a device
driver. A driver may be loaded into the operating system or a process. Drivers can create threads
to handle interrupts. Each thread is assigned an integer priority. Lower-valued priorities signify
higher priority: 0 is the highest priority and 255 is the lowest possible priority. Priorities 248
through 255 are used for non-real-time threads while the higher priorities are used for various
categories of real-time execution.
The operating system maintains a queue of ready processes at each priority level. Execution is
divided into time quanta. Each thread can have a separate quantum, which can be changed using
the API. If the running process does not go into the waiting state by the end of its time quantum,
it is suspended and put back into the queue. Execution of a thread can also be blocked by a
higher-priority thread. Tasks may be scheduled using either of two policies: a thread runs until
the end of its quantum; or a thread runs until a higher-priority thread is ready to run. Within each
priority level, round-robin scheduling is used.
WinCE supports priority inheritance. When priorities become inverted, the kernel temporarily
boosts the priority of the lower-priority thread to ensure that it can complete and release its
resources. However, the kernel will apply priority inheritance to only one level. If a thread that
suffers from priority inversion in turn causes priority inversion for another thread, the kernel will
not apply priority inheritance to solve the nested priority inversion.

