When you double-click the Microsoft Word icon, you start a process that runs Word. A thread is a path
of execution within a process, and a process can contain multiple threads. When you start Word, the
operating system creates a process and begins executing the primary thread of that process.
It's important to note that a thread can do anything a process can do. But since a process can consist
of multiple threads, a thread could be considered a lightweight process. Thus, the essential difference
between a thread and a process is the work that each one is used to accomplish. Threads are used for
small tasks, whereas processes are used for more heavyweight tasks, essentially the execution of
applications.
Another difference between a thread and a process is that threads within the same process share the
same address space, whereas different processes do not. This allows threads to read from and write to
the same data structures and variables, and also facilitates communication between threads.
Communication between processes, also known as IPC (inter-process communication), is quite
difficult and resource-intensive.
Types of schedulers
Operating systems may feature up to three distinct types of schedulers: a long-term scheduler (also
known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and
a short-term scheduler. The names suggest the relative frequency with which these functions are
performed.
Long-term scheduler
The long-term, or admission, scheduler decides which jobs or processes are to be admitted to
the ready queue; that is, when an attempt is made to execute a program, its admission to the set of
currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this
scheduler dictates what processes are to run on a system, and the degree of concurrency to be supported
at any one time - i.e., whether many or few processes are to be executed concurrently, and
how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating
systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks. Without
proper real-time scheduling, modern GUI interfaces would seem sluggish.
Dedicated scheduler software is typically used to assist these functions, in addition to any underlying
admission-scheduling support in the operating system.
Mid-term scheduler
The mid-term scheduler temporarily removes processes from main memory and places them
in secondary memory (such as a disk drive), or vice versa. This is commonly referred to as "swapping
out" or "swapping in" (also, incorrectly, as "paging out" or "paging in"). The mid-term scheduler may
decide to swap out a process that has not been active for some time, has a low priority, is page-faulting
frequently, or is taking up a large amount of memory, in order to free up main memory for other
processes. It swaps the process back in later, when more memory is available or when the process has
been unblocked and is no longer waiting for a resource.
In many systems today (those that support mapping virtual address space to secondary
storage other than the swap file), the mid-term scheduler may actually perform the role of the long-term
scheduler, by treating binaries as "swapped out processes" upon their execution. In this way, when a
segment of the binary is required it can be swapped in on demand, or "lazy loaded".
Short-term scheduler
The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory
processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt,
an operating-system call, or another form of signal. Thus the short-term scheduler makes scheduling
decisions much more frequently than the long-term or mid-term schedulers - a scheduling decision will at
a minimum have to be made after every time slice, and these are very short.
What if the agent who handles the distribution of the ingredients follows a FIFO queue (wherein the
desperate smoker is last according to FIFO) and chooses the ingredients apt for another
smoker who would rather wait some more time for the next puff?
Worm - A worm is a process that can choke a system's performance by using system
resources to extreme levels. A worm process generates multiple copies of itself, where each copy
uses system resources and prevents all other processes from getting the resources they require. Worm
processes can even shut down an entire network.
Port Scanning - Port scanning is a mechanism or means by which a hacker can detect
system vulnerabilities in order to make an attack on the system.
Denial of Service - Denial-of-service attacks normally prevent users from making legitimate use
of the system. For example, a user may not be able to use the internet if a denial-of-service attack
targets the browser's content settings.
Section-B
Q.1. Explain the threads in Linux.
Threads are a popular modern programming abstraction. They provide multiple threads of execution within the
same program in a shared memory address space. They can also share open files and other resources.
Threads allow for concurrent programming and, on multiple processor systems, true parallelism.
Linux has a unique implementation of threads. To the Linux kernel, there is no concept of a thread. Linux
implements all threads as standard processes. The Linux kernel does not provide any special scheduling
semantics or data structures to represent threads. Instead, a thread is merely a process that shares certain
resources with other processes. Each thread has a unique task_struct and appears to the kernel as a
normal process (which just happens to share resources, such as an address space, with other processes).
This approach to threads contrasts greatly with operating systems such as Microsoft Windows or Sun Solaris,
which have explicit kernel support for threads (and sometimes call threads lightweight processes). The name
"lightweight process" sums up the difference in philosophies between Linux and other systems. To these other
operating systems, threads are an abstraction to provide a lighter, quicker execution unit than the heavy
process. To Linux, threads are simply a manner of sharing resources between processes (which are already
quite lightweight). For example, assume you have a process that consists of four threads. On systems with
explicit thread support, there might exist one process descriptor that in turn points to the four different threads.
The process descriptor describes the shared resources, such as an address space or open files. The threads
then describe the resources they alone possess. Conversely, in Linux, there are simply four processes and
thus four normal task_struct structures. The four processes are set up to share certain resources.
Threads are created like normal tasks, except that the clone() system call is passed flags
corresponding to the specific resources to be shared.
Deadlock prevention
To prevent Deadlock, at least one of the four conditions mentioned above must be
denied.
Condition 1 is difficult to deny, since some resources, for example printers, by nature
cannot be shared between processes. However, the use of spooling can
remove the Deadlock potential of non-shared peripherals.
Condition 2 can be denied by stipulating that processes request all their resources at
once and cannot proceed until all the requests are granted. The disadvantage
of this approach is that resources which are used only for a short time are allocated for the
whole run and are therefore inaccessible to other processes for a long period.
Condition 3 is easily denied by imposing the rule that if a process is denied a request,
it must release all the resources it currently holds and, if necessary, request
them again later together with the additional resources. This strategy can be inconvenient in
practice, since pre-empting a resource such as a printer can result in the interleaving of
output from several jobs. Further, even if a resource can be conveniently pre-empted, the
overhead of storing and restoring its state at the time of pre-emption can be quite
high.
Condition 4 can be denied by imposing an order on resource types, so that if a process
has been allocated a resource of type A then it may only request resources of types
that come after A in that order.
Deadlock detection algorithms work by detecting the circular wait described in condition 4. The state of the
system at any time can be represented by a state graph.
Deadlock avoidance
The Deadlock avoidance strategy uses an algorithm that anticipates that a Deadlock is
likely to occur and therefore denies a resource request which would otherwise be
granted.
Before granting a resource request, the state graph of the system is tentatively changed
to what it would be if the resource were granted, and then Deadlock detection is
applied. If the detection algorithm finds no Deadlock, the request is granted; otherwise the
request is denied and the state graph is returned to its original form.
It is important to note that detection and recovery of Deadlock is sometimes left to the
computer user rather than being performed by the system itself. An observant
computer user will note that certain processes are stuck and realize that Deadlock has
occurred. The traditional recovery action is to abort and re-start the deadlocked
processes if possible.
Modern Operating Systems provide users with an option to kill a process or processes
(for example, the End Task command in Windows Task Manager) without necessarily
shutting down the Operating System.