Outline: Process Concept, Process Scheduling, Operations on Processes, Independent and Cooperating Processes, Interprocess Communication, Communication in Client-Server Systems
Process Concept
A process is a program in execution (a unit of work); it is an active entity. An operating system executes a variety of programs: system processes execute system code, and user processes execute user code. Process execution must progress in sequential fashion. A process includes:
- the program counter
- a stack
- a data section
Process State
As a process executes, it changes state:
- New: The process is being created.
- Running: Instructions are being executed.
- Waiting: The process is waiting for some event to occur.
- Ready: The process is waiting to be assigned to a CPU.
- Terminated: The process has finished execution.
Only one process can be running on any processor at any instant.
Process Scheduling Queues
- Ready queue: the set of all processes residing in main memory, ready and waiting to execute.
- Device queues: the set of processes waiting for an I/O device.
A process migrates between the various queues throughout its lifetime. A linked-list structure is used to build a queue, and a queuing diagram represents process scheduling. Events that move a process between queues are:
- an I/O request
- creation of a subprocess
- an interrupt that removes the process from the CPU
- time-slice expiry
Schedulers
The job of a scheduler is to select a process from the various queues.
- Long-term scheduler (or job scheduler): selects which processes should be brought into main memory.
- Short-term scheduler (or CPU scheduler): selects which process should be executed next and allocates the CPU.
CPU Scheduler:
- Is invoked very frequently, so it must be very fast; process scheduling wastes some CPU time.
Long-Term Scheduler:
- Is invoked very infrequently.
- Controls the degree of multiprogramming.
- Can afford to take more time to select a process for execution.
- Must select a good process mix of I/O-bound and CPU-bound processes.
- May be absent or minimal on some systems, such as time-sharing systems.
Medium-Term Scheduler:
- Is present in time-sharing systems.
- Removes processes from main memory, thus reducing the degree of multiprogramming.
- Carries out swapping-in and swapping-out to improve the process mix or to respond to changes in memory requirements.
Context Switch
When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process; this is a context switch. Context-switch time is pure overhead: the system does no useful work while switching. Context-switch speed varies from machine to machine, depending on memory speed, the number of registers to be copied, and the existence of special instructions, so context-switch times are highly dependent on hardware support. Advanced memory-management techniques are used to shift data to and from memory, and threads were introduced to reduce the performance bottleneck caused by context switching.
Operations on Processes
Process Creation
A parent process creates child processes, which, in turn, create other processes, forming a tree of processes. A process requires certain resources (e.g., CPU time, memory, files, I/O devices) and initialization data (for input) to accomplish its task.
Resource-sharing possibilities:
- Parent and children share all resources.
- Children share a subset of the parent's resources.
- Parent and children share no resources.
Execution possibilities of the new process:
- Parent and children execute concurrently.
- Parent waits until the children terminate.
Address-space possibilities of the new process:
- Child is a duplicate of the parent.
- Child has a new program loaded into it.
UNIX examples: the fork system call creates a new process; the exec system call, used after a fork, replaces the process's memory space with a new program, loading the specified program into the address space of the new process. The MS Windows NT OS supports both models.
Process Termination
A process executes its last statement and asks the operating system to delete it (exit); this is normal termination. A parent may instead terminate the execution of its children (abort); this is abnormal termination, and may occur for the following reasons:
- The child has exceeded its allocated resources.
- The task assigned to the child is no longer required.
- The parent is exiting, and the operating system does not allow a child to continue if its parent terminates (cascading termination).
Cooperating Processes
Concurrently executing processes may be independent or cooperating. A cooperating process can affect or be affected by other processes:
- Its execution result is non-deterministic and cannot be predicted.
- Stopping and restarting its execution can cause ill effects.
Reasons for having cooperating processes:
- Information sharing
- Computation speed-up
- Modularity
- Convenience
Producer-Consumer Problem
A producer process produces information that is consumed by a consumer process. Examples of producer and consumer pairs:
- A print program produces characters that are consumed by the printer driver.
- A compiler may produce assembly code, which is consumed by an assembler. The assembler, in turn, may produce object modules, which are consumed by the loader.
An unbounded buffer places no practical limit on the size of the buffer: the consumer may have to wait for new items, but the producer can always produce new items. A bounded buffer assumes a fixed buffer size: the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
#define BUFFER_SIZE 10

typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

The shared buffer is implemented by the programmer as a circular array with two pointers, in and out. Synchronization of the producer and consumer processes is required. The solution is correct, but can only use BUFFER_SIZE-1 elements.
Cooperating processes communicate either through use of an interprocess-communication (IPC) facility or through shared memory explicitly coded by the application programmer.
[Figure: a circular buffer of n = 10 slots with IN and OUT pointers, illustrating the full and empty conditions]
Bounded-Buffer Producer Process

item nextProduced;
while (1) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;  /* do nothing while the buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
Bounded-Buffer Consumer Process

item nextConsumed;
while (1) {
    while (in == out)
        ;  /* do nothing while the buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}
Interprocess communication (IPC) provides a mechanism for processes to communicate and to synchronize their actions. IPC is useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. IPC is best provided by a message-passing system, in which processes communicate with each other without resorting to shared variables. An IPC facility provides two operations:
- send(message)
- receive(message)
If processes P and Q wish to communicate, they need to:
- establish a communication link between them;
- exchange messages via send/receive operations.
The communication link may be implemented physically (e.g., shared memory, a hardware bus, or a network) or logically (e.g., by logical properties).
Implementation Questions
- How are links established?
- How many links can there be between every pair of communicating processes?
- What is the capacity of a link?
- Is the size of a message that the link can accommodate fixed or variable?
- Is a link unidirectional or bidirectional?
Design options include:
- direct or indirect communication
- symmetric or asymmetric communication
- automatic or explicit buffering
- send by copy or send by reference
- fixed-sized or variable-sized messages
Direct Communication
Processes must name each other explicitly:
- send(P, message): send a message to process P.
- receive(Q, message): receive a message from process Q.
- Symmetric communication: both the sender and the receiver processes have to name each other to communicate.
- Asymmetric communication: only the sender names the recipient; the recipient is not required to name the sender.
  - send(P, message): send a message to process P.
  - receive(id, message): receive a message from any process.
The disadvantage in both of these schemes (symmetric and asymmetric) is the limited modularity of the resulting process definitions.
Indirect Communication
Messages are sent to and received from mailboxes (also referred to as ports). Each mailbox has a unique ID, and processes can communicate only if they share a mailbox.
Operations:
- create a new mailbox
- send and receive messages through the mailbox
- destroy a mailbox
Mailbox sharing: P1, P2, and P3 share mailbox A; P1 sends, while P2 and P3 receive. Who gets the message? Solutions:
- Allow a link to be associated with at most two processes.
- Allow only one process at a time to execute a receive operation.
- Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was.
Mailbox owned by a process: the owner can only receive messages through this mailbox, and a user of the mailbox can only send messages to it. When the owner terminates, the mailbox disappears. There is no confusion regarding the reception of any message, as only the owner receives.
Mailbox owned by the OS: it is independent and not attached to any particular process. The OS allows a process to:
- create a new mailbox (becoming its owner);
- send and receive messages through the mailbox;
- delete a mailbox.
The ownership and receive privileges may be passed to other processes through system calls, so processes can share a mailbox. The OS reclaims the mailbox when it is no longer required by any process.
Synchronization
Message passing may be either blocking or non-blocking: blocking is considered synchronous, non-blocking asynchronous. The send and receive primitives may each be either blocking or non-blocking:
- Blocking send: the sending process is blocked until the message is received by the receiving process or by the mailbox.
- Non-blocking send: the sending process sends the message and resumes operation.
- Blocking receive: the receiver blocks until a message is available.
- Non-blocking receive: the receiver retrieves either a valid message or a null.
When both the sender and the receiver are blocking, we have a rendezvous.
Buffering
A link has some capacity that determines the number of messages that can reside in it temporarily.
Zero Capacity: the queue has length 0, so the link cannot hold any waiting messages. The sender must wait until the recipient receives the message; the two processes must be synchronized for each message transfer (rendezvous).
Bounded Capacity
Queue length = n, so at most n messages can reside in it. If the queue is not full, a new message is placed in the queue (either the message is copied or a pointer to it is kept), and the sender can continue execution without waiting. If the queue is full, the sender must wait until space is available in the queue.
Unbounded Capacity
Queue length is infinite, so any number of messages can wait in it; the sender is never delayed. Bounded and unbounded capacity both provide automatic buffering.
Special Cases
Sender is never delayed: a new message can overwrite the previous message.
- Advantage: messages need not be copied more than once.
- Disadvantage: the programming task becomes more difficult, because processes need to synchronize explicitly.
Sender is delayed until it receives a reply: in this scheme, messages are of fixed size; the receiver sends a reply after receiving the message, and the reply message overwrites the original message buffer. This scheme can be expanded into an RPC system.
Exception Conditions
An error-recovery mechanism (for exception conditions) must resolve various issues in the context of a message scheme.
Process termination: either a sender or a receiver terminates before a message is processed.
- A receiver process P may wait for a message from a process Q that has terminated; P would be blocked forever. The OS either terminates P or notifies P that Q has terminated.
- A process P may send a message to a process Q that has terminated. In the automatic-buffering case, no harm is done; in the no-buffering case, P would be blocked forever. Again, the OS either terminates P or notifies P that Q has terminated.
Lost Messages
The following methods can be used to deal with a lost message:
- The operating system is responsible for detecting the event and for retransmitting the message.
- The sending process is responsible for detecting the event and for retransmitting the message, if it wishes.
- The operating system detects the event and notifies the sending process that the message has been lost; the sending process can then proceed as it wants.
Timeouts are used to detect that a message is lost.
Scrambled Messages
A message may be delivered to its destination but arrive scrambled because of noise in the communication channel. A scrambled message is handled like a lost message; checksums are used to detect such errors.
Examples Mach OS
In the Mach OS, most interprocess communication is carried out by messages sent through mailboxes (ports); even system calls are made by messages. Two special mailboxes are created for each task: the Kernel mailbox and the Notify mailbox. The system calls for message transfer are msg_send, msg_receive, and msg_rpc. The port_allocate system call creates a new mailbox, and up to 8 messages can be placed in its message queue. Messages are copied into the mailbox and queued in FIFO order. A message consists of:
- a fixed-length header (the length of the message and two mailbox names, i.e., sender and receiver);
- a variable-length data portion (access rights, task states, memory segments), where each entry in the list has a type, size, and value.
If the mailbox is full, the sending process has the following options:
- wait indefinitely until there is room in the mailbox;
- wait for n milliseconds;
- do not wait at all;
- temporarily cache the message (hand it to the OS).
A port_status system call returns the number of messages in a mailbox. The receive operation can receive from any mailbox in a mailbox set or from a specific (named) mailbox. If no message is waiting to be received, the receiving thread may likewise wait indefinitely, wait for n milliseconds, or not wait at all.
The Mach OS was designed for distributed systems. It gives poor performance due to double-copy operations (sender to mailbox, then mailbox to receiver); better performance has been achieved for intra-system messages by mapping the address space containing the sender's message into the receiver's address space.
In Windows 2000, the local procedure call (LPC) facility uses ports to send messages or callbacks and to listen for replies. The message-passing techniques over a port are:
- Use the port's message queue as intermediate storage and copy the message from one process to the other; this is used for smaller messages (up to 256 bytes).
- The client passes larger messages through a section object (shared memory).
- The server passes larger messages (replies) through a section object (shared memory).
In the section-object cases, a small message containing a pointer into the section object is sent, to avoid copying large messages.
A socket is defined as an endpoint for communication; a pair of sockets is required for two processes, which use a client-server architecture for communication. A socket is made up of an IP address concatenated with a port number. Servers implementing specific services (such as telnet, ftp, and http) listen to well-known ports: a telnet server listens to port 23, an ftp server to port 21, and a web (http) server to port 80. For communication using sockets, if a client on host X with IP address 146.86.5.20 wishes to establish a connection with a web server (listening on port 80) at address 161.25.19.8, host X may be assigned a port such as 1625 (ports below 1024 are reserved for standard services). All connections consist of a unique pair of sockets.
Sockets can be implemented conveniently in Java, which provides an easy interface and a rich library of networking utilities. Java provides three different types of sockets:
- Connection-oriented (TCP) sockets, implemented with the Socket class.
- Connectionless (UDP) sockets, which use the DatagramSocket class.
- The MulticastSocket class (which allows data to be sent to multiple recipients), a subclass of DatagramSocket.
Sockets allow even processes on the same host to communicate using the TCP/IP protocol. Communication using sockets, although common and efficient, is considered a low-level form of communication between distributed processes, because sockets allow only an unstructured stream of bytes to be exchanged between the communicating threads.
Remote procedure calls (RPCs) abstract the procedure-call mechanism for use between systems with network connections: a high-level method to provide remote service. The messages exchanged for RPC communication are well structured, and thus no longer just packets of data. Messages are addressed to an RPC daemon listening to a port on the remote system, and contain an identifier of the function to execute and the parameters to pass to that function. The function is then executed as requested, and any output is sent back to the requester in a separate message.
A port number included in each message lets the system deliver messages to the proper port: one network address on a system can have many ports within that address to differentiate the many network services it supports. Any remote system could obtain needed information (e.g., the list of current users) by sending an RPC message to, say, port 3027 on the server; the data would be received in a reply message. The semantics of RPCs allow a client to invoke a procedure on a remote host as it would invoke a procedure locally. The RPC system hides the details that allow communication to take place by providing a stub (one for each separate remote procedure) on the client side.
The stubs marshal parameters into a machine-independent representation, such as external data representation (XDR), to resolve data-representation issues between client and server. The semantics of a call must also ensure that RPCs are not duplicated over unreliable communication links; a timestamp mechanism can be used for this. Approaches to the problem of binding the client to the server's port are:
- The binding information may be predetermined, in the form of fixed port addresses.
- Binding can be done dynamically by a rendezvous mechanism: the operating system provides a rendezvous (also called matchmaker) daemon on one fixed RPC port.
Execution of an RPC
Java's remote method invocation (RMI) is similar to RPCs: RMI allows a Java program on one machine to invoke a method on a remote object. The remote object may be in a different JVM on the same computer or on a remote host connected by a network. RMI and RPCs differ in two fundamental ways:
- RPCs support procedural programming, whereby only remote procedures or functions may be called; RMI is object-based, so it supports invocation of methods on remote objects.
- The parameters to remote procedures are ordinary data structures in RPC; with RMI it is possible to pass objects as parameters to remote methods.
RMI makes it possible to develop Java applications that are distributed across a network. To make remote methods transparent to both the client and the server, RMI implements the remote object using stubs and skeletons. The stub is a proxy for the remote object and resides with the client. When a client invokes a remote method, the stub for the remote object is called. This client-side stub is responsible for creating a parcel consisting of the name of the method to be invoked on the server and the marshalled parameters for the method; the stub then sends this parcel to the server. The skeleton, on the server side, is responsible for unmarshalling the parameters and invoking the desired method. The skeleton then marshals the return value (or exception, if any) into a parcel and returns this parcel to the client.
The client-side stub unmarshals the return value and passes it to the client. The rules for parameter passing are:
- If the marshalled parameters are local (non-remote) objects, they are passed by copy using a technique known as object serialization.
- If the parameters are themselves remote objects, they are passed by reference.
- Local objects to be passed as parameters to remote objects must implement the interface java.io.Serializable.
Marshalling Parameters