
Section-A

(i) What are the three main purposes of an operating system?


ANS. The operating system is the most important program that runs on a computer. Every general-purpose
computer must have an operating system to run other programs and applications. Operating systems perform basic
tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track
of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.

The three main purposes are:
1. To provide an environment for a computer user to execute programs on computer hardware in a convenient and efficient manner.
2. To allocate the separate resources of the computer as needed to solve the problem given. The allocation process should be as fair and efficient as possible.
3. To act as a control program serving two major functions: (1) supervision of the execution of user programs to prevent errors and improper use of the computer, and (2) management of the operation and control of I/O devices.

(ii) What are processes and threads?


Ans. A process is an executing instance of an application. What does that mean? Well, for example,
when you double-click the Microsoft Word icon, you start a process that runs Word. A thread is a path
of execution within a process, and a process can contain multiple threads. When you start Word, the
operating system creates a process and begins executing the primary thread of that process.
It's important to note that a thread can do anything a process can do. But since a process can consist
of multiple threads, a thread can be considered a lightweight process. Thus, the essential difference
between a thread and a process is the work that each one is used to accomplish. Threads are used for
small tasks, whereas processes are used for more heavyweight tasks, essentially the execution of
applications.
Another difference between a thread and a process is that threads within the same process share the
same address space, whereas different processes do not. This allows threads to read from and write to
the same data structures and variables, and it also facilitates communication between threads.
Communication between processes, known as IPC (inter-process communication), is considerably more
difficult and resource-intensive.
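
The shared address space can be shown concretely. Below is a minimal illustrative sketch (not part of the original answer) using POSIX threads, an assumed API choice: two threads update the same global variable, something two separate processes could only do through explicit IPC.

    #include <stdio.h>
    #include <pthread.h>

    static int counter = 0;   /* shared: every thread in the process sees it */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* protect the shared variable */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Prints 200000: both threads wrote to the same memory. */
        printf("counter = %d\n", counter);
        return 0;
    }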

(iii) Define the scheduler and its types.


Ans. Types of schedulers

Operating systems may feature up to three distinct types of schedulers: a long-term scheduler (also
known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and
a short-term scheduler. The names suggest the relative frequency with which these functions are
performed.
Long-term scheduler

The long-term, or admission, scheduler decides which jobs or processes are to be admitted to
the ready queue; that is, when an attempt is made to execute a program, its admission to the set of
currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this
scheduler dictates which processes are to run on a system and the degree of concurrency to be supported
at any one time, i.e. whether a high or low number of processes are to be executed concurrently, and
how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating
systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks.
Without proper real-time scheduling, modern GUI interfaces would seem sluggish.

Long-term scheduling is also important in large-scale systems such as batch processing
systems, computer clusters, supercomputers and render farms. In these cases, special-purpose job
scheduler software is typically used to assist these functions, in addition to any underlying admission
scheduling support in the operating system.
Mid-term scheduler

The mid-term scheduler temporarily removes processes from main memory and places them
on secondary memory (such as a disk drive), or vice versa. This is commonly referred to as "swapping
out" or "swapping in" (also, incorrectly, as "paging out" or "paging in"). The mid-term scheduler may
decide to swap out a process which has not been active for some time, which has a low priority, which
is page faulting frequently, or which is taking up a large amount of memory, in order to free up main
memory for other processes. The process is swapped back in later, when more memory is available or
when it has been unblocked and is no longer waiting for a resource.

In many systems today (those that support mapping virtual address space to secondary
storage other than the swap file), the mid-term scheduler may actually perform the role of the long-term
scheduler, by treating binaries as "swapped out processes" upon their execution. In this way, when a
segment of the binary is required it can be swapped in on demand, or "lazy loaded".
Short-term scheduler

The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory
processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt,
an operating system call or another form of signal. Thus the short-term scheduler makes scheduling
decisions much more frequently than the long-term or mid-term schedulers: a scheduling decision will at
a minimum have to be made after every time slice, and these are very short.
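
As an illustration of how often the short-term scheduler acts, the sketch below (illustrative only; the burst lengths and quantum are made up) simulates round-robin dispatching: after every time slice the CPU is handed to the next ready process.

    #include <stdio.h>

    #define NPROC   3
    #define QUANTUM 2   /* time slice, in arbitrary ticks */

    int main(void)
    {
        int remaining[NPROC] = {5, 3, 8};   /* hypothetical CPU bursts */
        int done = 0;

        /* Each dispatch below models one short-term scheduling decision. */
        while (done < NPROC) {
            for (int p = 0; p < NPROC; p++) {
                if (remaining[p] == 0)
                    continue;               /* process already finished */
                int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
                remaining[p] -= run;
                printf("dispatch P%d for %d tick(s), %d left\n",
                       p, run, remaining[p]);
                if (remaining[p] == 0)
                    done++;
            }
        }
        return 0;
    }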

(iv) What are the advantages and disadvantages of semaphores?


Ans.
Advantages

With semaphores there is no spinning, hence no waste of resources due to busy waiting: threads
intending to access the critical section are queued, and a thread enters the critical section only when
it is de-queued, which is done by the semaphore implementation itself. Hence, no unnecessary CPU time
is spent checking whether a condition is satisfied to allow the thread to access the critical section.
Semaphores permit more than one thread to access the critical section, in contrast to
alternative synchronization solutions such as monitors, which follow the mutual exclusion
principle strictly. Hence, semaphores allow flexible resource management.
Finally, semaphores are machine independent, as they are implemented in the machine-independent
code of the microkernel services.
Disadvantages
Problem 1: Programming with semaphores makes life harder, as utmost care must be taken
to ensure that P and V operations are inserted in matching pairs and in the correct order, so that
mutual exclusion is preserved and deadlocks are prevented. In addition, it is difficult to produce a
structured layout for a program, as the Ps and Vs are scattered all over the place, so modularity is
lost. Semaphores are quite impractical for large-scale use.
Problem 2: Semaphores involve a queue in their implementation. With a FIFO queue, there is a
high probability of a priority inversion, wherein a high-priority process which arrived a bit later
has to wait while a low-priority one is in the critical section. For example (from the cigarette
smokers problem), consider the case when a new smoker joins and is desperate to smoke. What if the
agent who handles the distribution of the ingredients follows a FIFO queue (wherein the
desperate smoker is last according to FIFO) and chooses the ingredients apt for another
smoker, who would rather wait some more time for the next puff?
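
In POSIX terms, P corresponds to sem_wait() and V to sem_post(). The following is a minimal sketch (the slot count and thread count are arbitrary choices for illustration) of the flexibility mentioned above: the semaphore is initialised to 2, so up to two threads may be in the critical section at once, and the rest block in a queue rather than spin.

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define SLOTS 2            /* two identical resources are available */

    static sem_t sem;

    static void *worker(void *arg)
    {
        sem_wait(&sem);        /* P: blocks (queued, no spinning) if no slot is free */
        printf("thread %ld in critical section\n", (long)arg);
        sem_post(&sem);        /* V: wakes one queued waiter, if any */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        sem_init(&sem, 0, SLOTS);   /* up to SLOTS threads may enter at once */
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&sem);
        return 0;
    }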

(v) Write short notes on System threats


Ans. System Threats
System threats refer to the misuse of system services and network connections to put users in trouble.
System threats can be used to launch program threats across a complete network, called a program
attack. System threats create an environment in which operating system resources and user files are
misused. The following is a list of some well-known system threats.

Worm - A worm is a process which can choke system performance by using system
resources to extreme levels. A worm process generates multiple copies of itself, where each copy
uses system resources and prevents all other processes from getting the resources they require.
Worm processes can even shut down an entire network.

Port Scanning - Port scanning is a mechanism or means by which a hacker can detect
system vulnerabilities in order to make an attack on the system.

Denial of Service - Denial of service attacks normally prevent users from making legitimate use
of the system. For example, a user may not be able to use the internet if a denial of service attack
targets the browser's content settings.

Section-B
Q.1. Explain threads in Linux.

A thread is a sequential flow of control through a program. Multi-threaded programming is, thus, a
form of parallel programming where several threads of control are executing concurrently in the
program. All threads execute in the same memory space, and can therefore work concurrently on
shared data.
Multi-threaded programming differs from using multiple Unix processes in that all
threads share the same memory space (and a few other system resources, such as file
descriptors), instead of running in their own memory space as is the case with Unix
processes. Threads are useful for several reasons. First, they allow a program to
exploit multi-processor machines: the threads can run in parallel on several
processors, allowing a single program to divide its work between several processors,
thus running faster than a single-threaded program, which runs on only one processor
at a time. Second, even on uniprocessor machines, threads allow overlapping I/O and
computations in a simple way. Last, some programs are best expressed as several
threads of control that communicate together, rather than as one big monolithic
sequential program. Examples include server programs, overlapping asynchronous
I/O, and graphical user interfaces.

The Linux Implementation of Threads

Threads are a popular modern programming abstraction. They provide multiple threads of execution within the
same program in a shared memory address space. They can also share open files and other resources.
Threads allow for concurrent programming and, on multiple processor systems, true parallelism.
Linux has a unique implementation of threads. To the Linux kernel, there is no concept of a thread. Linux
implements all threads as standard processes. The Linux kernel does not provide any special scheduling
semantics or data structures to represent threads. Instead, a thread is merely a process that shares certain
resources with other processes. Each thread has a unique task_struct and appears to the kernel as a
normal process (which just happens to share resources, such as an address space, with other processes).
This approach to threads contrasts greatly with operating systems such as Microsoft Windows or Sun Solaris,
which have explicit kernel support for threads (and sometimes call threads lightweight processes). The name
"lightweight process" sums up the difference in philosophies between Linux and other systems. To these other
operating systems, threads are an abstraction to provide a lighter, quicker execution unit than the heavy
process. To Linux, threads are simply a manner of sharing resources between processes (which are already
quite lightweight). For example, assume you have a process that consists of four threads. On systems with
explicit thread support, there might exist one process descriptor that in turn points to the four different threads.
The process descriptor describes the shared resources, such as an address space or open files. The threads
then describe the resources they alone possess. Conversely, in Linux, there are simply four processes and
thus four normal task_struct structures. The four processes are set up to share certain resources.
Threads are created like normal tasks, with the exception that the clone() system call is passed flags
corresponding to the specific resources to be shared:
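
For example (an illustrative sketch; the actual call sites in the kernel source take more arguments):

    /* Create a thread: share the address space, filesystem information,
     * open files and signal handlers with the parent. */
    clone(CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, 0);

    /* A normal fork(), by contrast, shares none of these resources: */
    clone(SIGCHLD, 0);

The flags tell clone() which resources the new task shares with its parent; with no sharing flags, the call amounts to ordinary process creation.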

Q.2. Explain deadlocks, deadlock prevention, avoidance and detection.


In concurrent programming, a deadlock is a situation in which two or more competing actions are
each waiting for the other to finish, and thus neither ever does.
In a transactional database, a deadlock happens when two processes, each within its own
transaction, update two rows of information but in the opposite order. For example, process A
updates row 1 then row 2 in the exact timeframe that process B updates row 2 then row 1. Process A
can't finish updating row 2 until process B is finished, but process B cannot finish updating row 1
until process A is finished. No matter how much time is allowed to pass, this situation will never
resolve itself, and because of this, database management systems will typically kill the transaction of
the process that has done the least amount of work.
In an operating system, a deadlock is a situation which occurs when a process or thread enters a
waiting state because a resource requested is being held by another waiting process, which in turn
is waiting for another resource held by another waiting process. If a process is unable to change its
state indefinitely because the resources requested by it are being used by another waiting process,
then the system is said to be in a deadlock.[1]
Deadlock is a common problem in multiprocessing systems, parallel computing and distributed
systems, where software and hardware locks are used to handle shared resources and
implement process synchronization.[2]
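
The opposite-order scenario described above maps directly onto locks. In the sketch below (illustrative, using POSIX threads; the sleep() merely makes the bad interleaving predictable), each thread takes the two locks in the opposite order, so once thread 1 holds lock_a and thread 2 holds lock_b, neither can ever proceed.

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *thread1(void *arg)
    {
        pthread_mutex_lock(&lock_a);   /* thread 1 holds A ... */
        sleep(1);                      /* let thread 2 grab B meanwhile */
        pthread_mutex_lock(&lock_b);   /* ... and waits forever for B */
        return NULL;                   /* never reached */
    }

    static void *thread2(void *arg)
    {
        pthread_mutex_lock(&lock_b);   /* thread 2 holds B ... */
        sleep(1);
        pthread_mutex_lock(&lock_a);   /* ... and waits forever for A */
        return NULL;                   /* never reached */
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, thread1, NULL);
        pthread_create(&b, NULL, thread2, NULL);
        pthread_join(a, NULL);         /* never returns: the threads deadlock */
        pthread_join(b, NULL);
        return 0;
    }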

Deadlock prevention
To prevent deadlock, at least one of the four necessary conditions for deadlock must be denied:
(1) mutual exclusion, (2) hold and wait, (3) no pre-emption, and (4) circular wait.
Condition 1 is difficult to deny, since some resources, for example printers, by nature
cannot be shared between processes. However, the use of spooling can
remove the deadlock potential of non-shared peripherals.
Condition 2 can be denied by stipulating that processes request all their resources at
once and that they cannot proceed until all the requests are granted. The disadvantage
of this approach is that resources which are used for only a short time are allocated, and
therefore inaccessible to others, for a long period.
Condition 3 is easily denied by imposing the rule that if a process is denied a request,
it must release all the resources that it currently holds and, if necessary, request
them again later together with the additional resources. This strategy can be inconvenient in
practice, since pre-empting a resource like a printer can result in the interleaving of
output from several jobs. Further, even if a resource can be conveniently pre-empted, the
overhead of storing and restoring its state at the time of pre-emption can be quite
high.
Condition 4 can be denied by imposing an order on resource types, so that if a process
has been allocated a resource of type A, it may only request resources of types
that fall later in that order, as the sketch below illustrates.
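
A minimal sketch of denying condition 4 (illustrative, using POSIX threads): every thread acquires the locks in the same globally agreed order, so a circular wait can never form, no matter how the threads interleave.

    #include <pthread.h>

    /* Global ordering: lock_a (rank 1) must always be taken before
     * lock_b (rank 2). */
    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        pthread_mutex_lock(&lock_a);   /* rank 1 first ... */
        pthread_mutex_lock(&lock_b);   /* ... then rank 2 */
        /* ... use both resources ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }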

Deadlock detection and recovery

How deadlock occurs and what causes it

This strategy allows the possibility of deadlock, but relies on detecting it when it occurs
and being able to stage a recovery. The value of this approach depends on the frequency
with which deadlock occurs and the kind of recovery that can be made. Detection
algorithms work by detecting the circular wait seen in condition 4. The state of the
system at any time can be represented by a state graph.

How to detect deadlock and recover from deadlock


A circular wait is represented as a closed loop in the state graph, for example a cycle through processes A, B and D.
The deadlock detection algorithm maintains a representation of the state graph and
inspects it at intervals for the existence of a circular loop.
The inspection may occur at every resource allocation; but since the cost of doing the
inspection may be very high, it may instead occur at fixed intervals of time and not at each
allocation. Detection of deadlock is only useful if an acceptable level of recovery can
be made.
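
Such an inspection can be sketched as follows (illustrative; the graph here is a simplified wait-for matrix, where wait_for[i][j] means process i waits for process j): a depth-first search that reaches a node already on the current search path has found a circular wait.

    #include <stdbool.h>
    #include <stdio.h>

    #define N 4   /* number of processes */

    static bool wait_for[N][N];   /* wait_for[i][j]: i waits for j */

    static bool dfs(int p, bool on_path[], bool visited[])
    {
        if (on_path[p]) return true;    /* back edge: circular wait found */
        if (visited[p]) return false;   /* already fully explored, no cycle */
        visited[p] = on_path[p] = true;
        for (int q = 0; q < N; q++)
            if (wait_for[p][q] && dfs(q, on_path, visited))
                return true;
        on_path[p] = false;
        return false;
    }

    static bool deadlocked(void)
    {
        bool visited[N] = {false};
        for (int p = 0; p < N; p++) {
            bool on_path[N] = {false};
            if (!visited[p] && dfs(p, on_path, visited))
                return true;
        }
        return false;
    }

    int main(void)
    {
        /* The closed loop A -> B -> D -> A from the text (0=A, 1=B, 3=D). */
        wait_for[0][1] = wait_for[1][3] = wait_for[3][0] = true;
        printf("deadlock detected: %s\n", deadlocked() ? "yes" : "no");
        return 0;
    }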
The definition of acceptable can be stretched according to circumstances to include
the following techniques, listed in order of sophistication:
1. Abort all deadlocked processes; this is the method adopted in most general-purpose systems.
2. Re-start all deadlocked processes; however, this method may lead straight back to
the original deadlock.
3. Successively (one at a time) abort deadlocked processes until deadlock no longer
exists. The order in which this is done should minimise the loss of resources already used.

Deadlock avoidance
The deadlock avoidance strategy uses an algorithm that anticipates that a deadlock is
likely to occur and therefore denies a resource request which would otherwise be
granted.
Before granting a resource request, the state graph of the system is tentatively changed
to what it would be if the resource were granted, and then deadlock detection is
applied. If the detection algorithm finds no circular wait, the request is granted; otherwise the
request is denied and the state graph is returned to its original form.
It is important to note that detection and recovery of deadlock is sometimes left to the
computer user rather than being performed by the system itself. An observant
computer user will note that certain processes are stuck and realize that deadlock has
occurred. The traditional recovery action is to abort and re-start the deadlocked
processes if possible.
Modern Operating Systems provide users with an option to kill a process or processes
(for example End Task command in Windows Task Manager) without necessarily
shutting down the Operating System.
