
Name: Rennie Ramlochan - ID: #56579

Course Name: Advanced Operating Systems Technology


Assignment 1

What is cache memory?

1. What is the distinction between spatial locality and temporal locality?
Answer:

Temporal locality refers to the tendency of a processor to access memory locations that
have been used recently, while spatial locality refers to the tendency of execution to
involve a number of memory locations that are clustered together.

Spatial locality is generally exploited by using larger cache blocks and by incorporating
prefetching mechanisms (fetching items of anticipated use) into the cache control logic,
while temporal locality is exploited by keeping recently used instruction and data values
in cache memory and by exploiting a cache hierarchy.
For example, consider the following code fragment:

int a[20][10];
for ( int i = 0; i < 20; i++ ) {
    for ( int j = 0; j < 10; j++ ) {
        a[i][j] = 0;
    }
}
Spatial locality occurs when the process uses nearby memory contents. Here the array
entries are accessed more or less sequentially within the inner loop; the sequential
execution of the loop instructions themselves is also a form of spatial locality.

Temporal locality occurs when the same memory location is used repeatedly over some
relatively short time interval. Here the loop index variables i and j are repeatedly read
and incremented, and the accesses to row a[i], for a particular value of i, all occur
within a relatively short time.
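The fragment above can be made self-contained; as a sketch (the function names are mine), contrasting the two traversal orders shows why the inner loop over j is the cache-friendly one in C, where arrays are stored row-major:

```c
#include <assert.h>

#define ROWS 20
#define COLS 10

/* Row-major traversal: the inner loop walks consecutive
 * addresses, so each fetched cache line is fully used
 * (spatial locality); i and j stay in registers or cache
 * across iterations (temporal locality). */
void zero_row_major(int a[ROWS][COLS]) {
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            a[i][j] = 0;
}

/* Column-major traversal: successive accesses are COLS ints
 * apart, so they tend to land in different cache lines and
 * waste most of each line fetched. */
void zero_col_major(int a[ROWS][COLS]) {
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            a[i][j] = 0;
}
```

Both functions produce the same result; on arrays large enough to exceed the cache, the row-major version runs noticeably faster.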














2. In virtually all systems that include DMA modules, DMA access to main memory is
given higher priority than processor access to main memory. Why?
Answer:

DMA has higher priority because the processor accesses memory very frequently; without
priority, a DMA transfer could starve while waiting for the bus to become free, resulting
in the loss of data. For example, a DMA transfer may be from/to a device that is sending
or receiving data in a continuous, time-sensitive stream (a network port or a tape drive)
and cannot be stopped without loss of data.

Processor access to main memory can tolerate the lower priority: if the processor is held
up while attempting to read or write memory, no damage occurs to the data, only a slight
loss of time.

3. A computer has a cache, main memory, and a disk used for virtual memory. If a
referenced word is in the cache, 20 ns are required to access it. If it is in main
memory but not in the cache, 60 ns are needed to load it into the cache (this includes
the time to originally check the cache), and then the reference is started again. If the
word is not in main memory, 12 ms are required to fetch the word from disk,
followed by 60 ns to copy it to the cache, and then the reference is started again. The
cache hit ratio is 0.9 and the main-memory hit ratio is 0.6. What is the average time
in ns required to access a referenced word on this system? Is this design any good?
Answer:

There are three cases to consider:

Location of referenced word      Probability           Total access time (ns)
In cache                         0.90                  20
In main memory, not in cache     (0.10)(0.6) = 0.06    60 + 20 = 80
Not in cache or main memory      (0.10)(0.4) = 0.04    12,000,000 + 60 + 20 = 12,000,080

So the average access time is:
Avg = (0.90)(20) + (0.06)(80) + (0.04)(12,000,080) = 480,026 ns
The design is no good: the average access time is over 24,000 times (480,026 / 20) the
cache access time, because even a 4% miss-to-disk rate is dominated by the 12 ms disk
latency.
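The calculation can be reproduced with a short sketch (the function name and the explicit ns conversion of 12 ms are mine; the probabilities and latencies are from the problem statement):

```c
#include <assert.h>

/* Average access time, in ns, for the three-level hierarchy:
 * cache hit ratio 0.9, main-memory hit ratio 0.6,
 * disk fetch 12 ms = 12,000,000 ns. */
double avg_access_ns(void) {
    double t_cache = 20.0;                 /* cache hit */
    double t_mem   = 60.0 + 20.0;          /* load into cache, then re-reference */
    double t_disk  = 12e6 + 60.0 + 20.0;   /* disk fetch + copy to cache + re-reference */

    double p_cache = 0.90;                 /* hit in cache */
    double p_mem   = 0.10 * 0.6;           /* miss cache, hit memory = 0.06 */
    double p_disk  = 0.10 * 0.4;           /* miss both = 0.04 */

    return p_cache * t_cache + p_mem * t_mem + p_disk * t_disk;
}
```

Evaluating this gives 18 + 4.8 + 480,003.2 = 480,026 ns, matching the hand calculation.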

4. Explain the distinction between a real address and a virtual address.
Answer:

A real address is an address in main memory, while a virtual address refers to a location
in the process's virtual memory. That location may reside on disk and, at times, in main
memory.

For example, a program references a word by means of a virtual address consisting of a
page number and an offset within the page. Each page of a process may be located
anywhere in main memory. The paging system provides for a dynamic mapping between
the virtual address used in the program and a real address, or physical address, in main
memory.
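The split and the mapping can be sketched in a few lines, assuming 4 KiB pages and 32-bit addresses (both illustrative choices; the frame number would come from a page table, which is omitted here):

```c
#include <assert.h>
#include <stdint.h>

/* Assume 4 KiB pages: the low 12 bits of a virtual address
 * are the offset within the page, the rest is the page number. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)

uint32_t page_number(uint32_t vaddr) { return vaddr >> PAGE_SHIFT; }
uint32_t page_offset(uint32_t vaddr) { return vaddr & (PAGE_SIZE - 1); }

/* A real (physical) address is formed by replacing the page
 * number with the frame number the pager assigned to that page;
 * the offset is carried over unchanged. */
uint32_t real_address(uint32_t frame, uint32_t offset) {
    return (frame << PAGE_SHIFT) | offset;
}
```

For example, virtual address 0x3045 is page 3, offset 0x45; if page 3 currently lives in frame 7, the real address is 0x7045.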



5. Explain the difference between a monolithic kernel and a microkernel.
Answer:
In a monolithic kernel architecture, all the basic system services, such as process and
memory management and interrupt handling, are packaged into a single module running in
kernel space. All the functional components of the kernel have access to all of its
internal data structures and routines. This architecture has some serious drawbacks:
1) The size of the kernel, which is huge.
2) Poor maintainability: bug fixing or adding a new feature requires recompilation of
the whole kernel, which can take hours.

A microkernel architecture addresses the ever-growing size of kernel code that could not
be controlled in the monolithic approach. Only the core functions, typically low-level
memory management, inter-process communication, and basic scheduling, remain in the
kernel; less essential services such as device drivers, file systems, and networking run
as separate server processes in user mode and communicate with the kernel via message
passing. This keeps the kernel code size at a bare minimum, makes the system easier to
maintain (a failed or updated service can be restarted without rebuilding or rebooting
the kernel), and increases the security and stability of the OS.

Monolithic Kernel:
- Kernel is a single large block of code.
- Runs as a single process within a single address space.
- Virtually any procedure can call any other procedure.
- Difficult to add new device driver or file system functions.
- If anything is changed, all modules and functions must be recompiled and relinked,
  and the system rebooted.

Microkernel:
- Only core OS functions are in the kernel.
- Less essential services and applications are built on the microkernel and execute in
  user mode.
- Simplifies implementation, provides flexibility, and is better suited to distributed
  environments.





6. An I/O-bound program is one that, if run alone, would spend more time waiting for
I/O than using the processor. A processor-bound program is the opposite. Suppose a
short-term scheduling algorithm favors those programs that have used little
processor time in the recent past. Explain why this algorithm favors I/O-bound
programs and yet does not permanently deny processor time to processor-bound
programs.
Answer:

The algorithm favors I/O-bound processes because, by definition, they use little
processor time in the recent past.

At the same time, the algorithm does not permanently deny processor time to
processor-bound programs: while a CPU-bound process is denied the processor, it
accumulates no recent processor time, so its recent usage falls until the scheduler
eventually favors it again.
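The selection step of such a scheduler can be sketched as follows (the process count, the recent-usage array, and the function name are all illustrative):

```c
#include <assert.h>

#define NPROC 3

/* Pick the process with the least processor time used in the
 * recent past. An I/O-bound process accumulates little such
 * time, so it tends to win; a CPU-bound process that has been
 * waiting accumulates none, so its recent usage drops and it
 * is eventually chosen too. */
int pick_next(const int recent_cpu[NPROC]) {
    int best = 0;
    for (int i = 1; i < NPROC; i++)
        if (recent_cpu[i] < recent_cpu[best])
            best = i;
    return best;
}
```

With recent usage {50, 3, 20}, the scheduler picks process 1, the I/O-bound one; if process 0 then waits while the others run, its recent usage decays and it is selected later.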


7. Read the following description and answer the question below.
In IBM's mainframe operating system, OS/390, one of the major modules in the
kernel is the System Resource Manager (SRM). This module is responsible for the
allocation of resources among address spaces (processes). The SRM gives OS/390 a
degree of sophistication unique among operating systems. No other mainframe OS,
and certainly no other type of OS, can match the functions performed by SRM. The
concept of resource includes processor, real memory, and I/O channels. SRM
accumulates statistics pertaining to utilization of processor, channel, and various
key data structures. Its purpose is to provide optimum performance based on
performance monitoring and analysis. The installation sets forth various
performance objectives, and these serve as guidance to the SRM, which dynamically
modifies installation and job performance characteristics based on system
utilization. In turn, the SRM provides reports that enable the trained operator to
refine the configuration and parameter settings to improve user service. This
problem concerns one example of SRM activity. Real memory is divided into equal-
sized blocks called frames, of which there may be many thousands. Each frame can
hold a block of virtual memory referred to as a page. SRM receives control
approximately 20 times per second and inspects each and every page frame. If the
page has not been referenced or changed, a counter is incremented by 1. Over time,
SRM averages these numbers to determine the average number of seconds that a
page frame in the system goes untouched. What might be the purpose of this and
what action might SRM take?
Answer:

The SRM dynamically modifies installation and job performance characteristics based on
system utilization, and provides reports that enable a trained operator to refine
configuration and parameter settings to improve user service.
The averaged quantity, the number of seconds a page frame goes untouched, indicates the
degree of "stress" on memory: a low average means frames are being reused quickly, i.e.,
memory is heavily contended. To improve user service, the operator can respond by
reducing the number of active jobs allowed on the system, which frees frames and
relieves the memory pressure.
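The scan SRM performs can be sketched as follows (the frame count, names, and data layout are illustrative; a real SRM inspects many thousands of frames):

```c
#include <assert.h>

#define NFRAMES 4
#define PASSES_PER_SEC 20   /* SRM gets control ~20 times per second */

/* One inspection pass: increment a per-frame counter when the
 * frame was neither referenced nor changed since the last pass,
 * and reset it otherwise. */
void scan_pass(const int referenced[NFRAMES], int untouched[NFRAMES]) {
    for (int f = 0; f < NFRAMES; f++) {
        if (!referenced[f])
            untouched[f]++;
        else
            untouched[f] = 0;
    }
}

/* Average untouched time across frames, in seconds: each count
 * of 1 represents roughly 1/20 of a second. */
double avg_untouched_sec(const int untouched[NFRAMES]) {
    int sum = 0;
    for (int f = 0; f < NFRAMES; f++)
        sum += untouched[f];
    return (double)sum / NFRAMES / PASSES_PER_SEC;
}
```

A low average from avg_untouched_sec means pages are being touched almost as fast as they are scanned, which is exactly the memory-stress signal the answer describes.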


8. What does it mean to preempt a process?
Answer:

To preempt a process means to reclaim a resource from the process before it has finished
using it. For example, if a process asks for a semaphore that is not free, the process is
preempted from the processor and waits in the semaphore queue for the semaphore to become
free.



9. What is swapping and what is its purpose?
Answer:

Swapping is an older form of memory management in which the contents of an area of main
storage are interchanged with the contents of an area of secondary memory. Its purpose is
to allow more processes to be active than can fit in main memory at once: a blocked or
low-priority process can be swapped out to disk so its memory can be given to another
process, and swapped back in when it is needed again.


10. Consider a computer with N processors in a multiprocessor configuration.
a. How many processes can be in each of the Ready, Running, and Blocked states
at one time?
Answer:
There can be at most N processes in the Running state, one per processor. The number of
processes in the Ready and Blocked states is limited only by the sizes of the ready list
and the blocked list.

b. What is the minimum number of processes that can be in each of the Ready,
Running, and Blocked states at one time?
Answer:
The minimum number of processes in each state is 0: if the system is idle, there are no
running, ready, or blocked processes.



11. Consider the state transition diagram of Figure 3.9b. Suppose that it is time for the
OS to dispatch a process and that there are processes in both the Ready state and
the Ready/Suspend state, and that at least one process in the Ready/Suspend state
has higher scheduling priority than any of the processes in the Ready state. Two
extreme policies are as follows: (1) Always dispatch from a process in the Ready
state, to minimize swapping, and (2) always give preference to the highest-priority
process, even though that may mean swapping when swapping is not necessary.
Suggest an intermediate policy that tries to balance the concerns of priority and
performance.
Answer:

Dispatching from the Ready state means less swapping and is fairer, since there will be
no starvation. Dispatching from Ready/Suspend means the process must be swapped in
(taking memory away from a process in the ready queue), which is more time consuming and
may lead to starvation, but it does guarantee that the highest-priority processes
execute.
An intermediate policy:
a. Penalize the Ready/Suspend processes by some fixed amount, such as one or two
priority levels.
b. A Ready/Suspend process is then chosen only if its priority exceeds that of the
highest-priority Ready process by several levels.
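The intermediate policy reduces to a simple dispatch test; as a sketch (the penalty of two levels and the convention that larger numbers mean higher priority are illustrative choices):

```c
#include <assert.h>

/* Penalty applied to Ready/Suspend processes, in priority
 * levels; larger numbers mean higher priority. */
#define PENALTY 2

/* Returns 1 if the best Ready/Suspend process should be
 * dispatched (and swapped in), 0 if the best Ready process
 * should be dispatched instead. */
int pick_ready_suspend(int best_ready_prio, int best_suspend_prio) {
    return best_suspend_prio > best_ready_prio + PENALTY;
}
```

So a suspended process at priority 8 beats a ready process at priority 5 (8 > 5 + 2), but a suspended process at priority 7 does not; only a clear priority advantage justifies the cost of swapping.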















References:
http://wiki.answers.com/Q/What_is_swapping_and_what_is_its_purpose
William Stallings, Operating Systems: Internals and Design Principles, 6th edition
(majority of definitions/answers)
Andrew S. Tanenbaum, Distributed Operating Systems
