
PRACTICAL FILE

OF

OPERATING SYSTEM

SUBMITTED TO: SUBMITTED BY:


INDEX

PROGRAM NAME DATE REMARKS


1. Introduction

What is an Operating System?

The 1960s definition of an operating system is "the software that controls the hardware".
Today, however, due to microcode we need a better definition. We see an operating system as the
programs that make the hardware usable. In brief, an operating system is the set of programs
that controls a computer. Some examples of operating systems are UNIX, Mach, MS-DOS, MS-
Windows, Windows NT, Chicago, OS/2, MacOS, VMS, MVS, and VM.

Operating systems are resource managers. The main resource is computer hardware in the form of
processors, storage, input/output devices, communication devices, and data. Some of the operating
system's functions are: implementing the user interface, sharing hardware among users, allowing users to
share data among themselves, preventing users from interfering with one another, scheduling resources
among users, facilitating input/output, recovering from errors, accounting for resource usage,
facilitating parallel operations, organizing data for secure and rapid access, and handling network
communications.

2.System Components

Even though not all systems have the same structure, many modern operating systems share the
same goal of supporting the following types of system components.

Process Management

The operating system manages many kinds of activities, ranging from user programs to system
programs such as printer spoolers, name servers, file servers, etc. Each of these activities is
encapsulated in a process. A process includes the complete execution context (code, data, PC,
registers, OS resources in use, etc.).

It is important to note that a process is not a program. A process is only ONE instance of a
program in execution; many processes can be running the same program. The five
major activities of an operating system in regard to process management are

 Creation and deletion of user and system processes.


 Suspension and resumption of processes.
 A mechanism for process synchronization.
 A mechanism for process communication.
 A mechanism for deadlock handling.
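The execution context described above can be sketched as a small process control block (PCB). This is a minimal illustration in Python; the type and field names are hypothetical, not taken from any real kernel.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy process control block holding the execution context."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    state: str = "new"          # new, ready, running, waiting, terminated
    open_files: list = field(default_factory=list)

def create_process(pid):
    """Creation of a user or system process: build a PCB, mark it ready."""
    pcb = PCB(pid)
    pcb.state = "ready"
    return pcb

def delete_process(pcb):
    """Deletion of a process: release its resources, mark it terminated."""
    pcb.open_files.clear()
    pcb.state = "terminated"

p = create_process(1)
```

Two processes created from the same program would get distinct PCBs, which is exactly the sense in which a process is one instance of a program in execution.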

Main-Memory Management

Primary memory, or main memory, is a large array of words or bytes. Each word or byte has its
own address. Main memory provides storage that can be accessed directly by the CPU. That is to
say, for a program to be executed, it must be in main memory.

The major activities of an operating system in regard to memory management are:

 Keep track of which parts of memory are currently being used and by whom.
 Decide which processes are loaded into memory when memory space becomes available.
 Allocate and deallocate memory space as needed.
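The three activities above can be illustrated with a toy first-fit allocator. Everything here (the memory size, the free-list representation) is a hypothetical sketch, far simpler than a real memory manager.

```python
SIZE = 100                  # total words of (toy) main memory
free_list = [(0, SIZE)]     # holes as (start, length); tracks what is free

def allocate(n):
    """Allocate n words using first fit; return the start address or None."""
    for i, (start, length) in enumerate(free_list):
        if length >= n:
            if length == n:
                free_list.pop(i)            # hole consumed exactly
            else:
                free_list[i] = (start + n, length - n)
            return start
    return None                             # no hole big enough

def deallocate(start, n):
    """Return a block to the free list (no coalescing, for brevity)."""
    free_list.append((start, n))

a = allocate(30)    # first fit places this at address 0
b = allocate(50)    # and this right after, at address 30
```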

File Management

A file is a collection of related information defined by its creator. Computers can store files on
disk (secondary storage), which provides long-term storage. Some examples of storage media are
magnetic tape, magnetic disk, and optical disk. Each of these media has its own properties, such as
speed, capacity, data transfer rate, and access method.

A file system is normally organized into directories to ease its use. These directories may
contain files and other directories.

The five major activities of an operating system in regard to file management are
1. The creation and deletion of files.
2. The creation and deletion of directories.
3. The support of primitives for manipulating files and directories.
4. The mapping of files onto secondary storage.
5. The backup of files on stable storage media.

I/O System Management

The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the device
driver knows the peculiarities of the specific device to which it is assigned.

Secondary-Storage Management

Generally speaking, systems have several levels of storage, including primary storage, secondary
storage, and cache storage. Instructions and data must be placed in primary storage or cache to be
referenced by a running program. Because main memory is too small to accommodate all data
and programs, and because its contents are lost when power is lost, the computer system must provide
secondary storage to back up main memory. Secondary storage consists of tapes, disks, and other
media designed to hold information that will eventually be accessed in primary storage. Storage at
each level (primary, secondary, cache) is ordinarily divided into bytes or words consisting of a fixed
number of bytes. Each location in storage has an address; the set of all addresses available to a
program is called an address space.

The three major activities of an operating system in regard to secondary storage management are:

1. Managing the free space available on the secondary-storage device.


2. Allocation of storage space when new files have to be written.
3. Scheduling requests for access to the storage device.

Networking

A distributed system is a collection of processors that do not share memory, peripheral devices,
or a clock. The processors communicate with one another through communication lines called a
network. The communication-network design must consider routing and connection strategies,
and the problems of contention and security.

Protection System

If a computer system has multiple users and allows the concurrent execution of multiple
processes, then the various processes must be protected from one another's activities. Protection
refers to the mechanisms for controlling the access of programs, processes, or users to the resources
defined by a computer system.

Command Interpreter System

A command interpreter is an interface between the operating system and the user. The user gives
commands which are executed by the operating system (usually by turning them into system calls).
The main function of a command interpreter is to get and execute the next user-specified
command. The command interpreter is usually not part of the kernel, since multiple command
interpreters (shells, in UNIX terminology) may be supported by an operating system, and they do
not really need to run in kernel mode. There are two main advantages to separating the command
interpreter from the kernel.

1. If we want to change the way the command interpreter looks, i.e., change its
interface, we can do so if the command interpreter is separate from the kernel; we
cannot change the code of the kernel, so we could not modify the interface otherwise.
2. If the command interpreter is part of the kernel, it is possible for a malicious process to
gain access to parts of the kernel that it should not. To avoid this ugly scenario,
it is advantageous to keep the command interpreter separate from the kernel.

3.Process State

The process state consists of everything necessary to resume the process's execution if it is
somehow put aside temporarily. The process state consists of at least the following:

 Code for the program.


 Program's static data.
 Program's dynamic data.
 Program's procedure call stack.
 Contents of general purpose registers.
 Contents of program counter (PC)
 Contents of program status word (PSW).
 Operating system resources in use.

A process goes through a series of discrete process states.


 New State: The process is being created.
 Running State: A process is said to be running if it has the CPU, that is, the process is actually
using the CPU at that particular instant.
 Blocked (or waiting) State: A process is said to be blocked if it is waiting for some
event to happen, such as an I/O completion, before it can proceed. Note that a blocked process
is unable to run until some external event happens.
 Ready State: A process is said to be ready if it could use a CPU if one were available. A ready
process is runnable but temporarily stopped from running to let another process run.
 Terminated State: The process has finished execution.
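The discrete states above form a small transition graph. Here is a sketch of the legal transitions (the extra states that real kernels add are omitted):

```python
# Legal transitions of the five-state process model described above.
TRANSITIONS = {
    ("new", "ready"),           # admitted by the scheduler
    ("ready", "running"),       # dispatched onto the CPU
    ("running", "ready"),       # preempted (e.g. time slice expired)
    ("running", "waiting"),     # blocked on I/O or some event
    ("waiting", "ready"),       # awaited event has happened
    ("running", "terminated"),  # finished execution
}

def move(state, target):
    """Return the new state if the transition is legal, else raise."""
    if (state, target) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = move("new", "ready")
s = move(s, "running")
s = move(s, "waiting")      # e.g. the process issued an I/O request
```

Note that a blocked process cannot go straight back to running: it must first become ready and be dispatched again.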


4.Definition of Process

The notion of process is central to the understanding of operating systems. There are quite a few
definitions presented in the literature, but no "perfect" definition has yet appeared.

Definition

The term "process" was first used by the designers of MULTICS in the 1960s. Since then, the
term process has been used somewhat interchangeably with 'task' or 'job'. The process has been given
many definitions, for instance:

 A program in Execution.
 An asynchronous activity.
 The 'animated spirit' of a procedure in execution.
 The entity to which processors are assigned.
 The 'dispatchable' unit.

and many more definitions have been given. As we can see, there is no universally
agreed-upon definition, but the definition "program in execution" seems to be the most frequently
used, and this is the concept we will use in the present study of operating systems.
Now that we have agreed upon a definition of process, the question is: what is the relation between a
process and a program? Is it the same beast with a different name, called a program when it is sleeping (not
executing) and a process when it is executing? To be very precise: as we have mentioned
earlier, a process is not the same as a program. In the following discussion we point out some of the
differences between them.

A process is not the same as a program. A process is more than the program code. A process is an
'active' entity, as opposed to a program, which is considered a 'passive' entity. As we all know, a
program is an algorithm expressed in some suitable notation (e.g., a programming language).
Being passive, a program is only a part of a process. A process, on the other hand, includes:

 Current value of Program Counter (PC)


 Contents of the processors registers
 Value of the variables
 The process stack (SP), which typically contains temporary data such as subroutine
parameters, return addresses, and temporary variables.
 A data section that contains global variables.

A process is the unit of work in a system.

In the process model, all software on the computer is organized into a number of sequential
processes. A process includes the PC, registers, and variables. Conceptually, each process has its
own virtual CPU. In reality, the CPU switches back and forth among processes. (This rapid
switching back and forth is called multiprogramming.)


5.CPU/Process Scheduling

The assignment of physical processors to processes allows processors to accomplish work. The
problem of determining when processors should be assigned, and to which processes, is called
processor scheduling or CPU scheduling.
When more than one process is runnable, the operating system must decide which one to run first. The
part of the operating system concerned with this decision is called the scheduler, and the algorithm it
uses is called the scheduling algorithm.

Goals of Scheduling (objectives)

In this section we try to answer the following question: what does the scheduler try to achieve?

Many objectives must be considered in the design of a scheduling discipline. In particular, a
scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc.
Some of these goals depend on the system one is using, for example a batch system, an interactive
system, or a real-time system, but there are also some goals that are desirable in all systems.

General Goals

Fairness
Fairness is important under all circumstances. A scheduler makes sure that each process
gets its fair share of the CPU and that no process suffers indefinite postponement. Note that giving
equivalent or equal time is not necessarily fair; think of safety control and payroll at a nuclear plant.

Policy Enforcement
The scheduler has to make sure that the system's policy is enforced. For example, if the local
policy is safety, then the safety-control processes must be able to run whenever they want to, even
if it means a delay in payroll processing.

Efficiency
Scheduler should keep the system (or in particular CPU) busy cent percent of the time when
possible. If the CPU and all the Input/Output devices can be kept running all the time, more work
gets done per second than if some components are idle.

Response Time
A scheduler should minimize the response time for interactive users.

Turnaround
A scheduler should minimize the time batch users must wait for an output.

Throughput
A scheduler should maximize the number of jobs processed per unit time.
A little thought will show that some of these goals are contradictory. It can be shown that any
scheduling algorithm that favors some class of jobs hurts another class of jobs. The amount of
CPU time available is finite, after all.

Preemptive Vs Nonpreemptive Scheduling

The Scheduling algorithms can be divided into two categories with respect to how they deal with
clock interrupts.

Nonpreemptive Scheduling

A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the CPU
cannot be taken away from that process.

Following are some characteristics of nonpreemptive scheduling

1. In a nonpreemptive system, short jobs are made to wait by longer jobs, but the overall
treatment of all processes is fair.
2. In a nonpreemptive system, response times are more predictable because incoming high-
priority jobs cannot displace waiting jobs.
3. In nonpreemptive scheduling, a scheduler dispatches jobs in the following two situations:
a. when a process switches from the running state to the waiting state;
b. when a process terminates.

Preemptive Scheduling

A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be
taken away from it.

The strategy of allowing processes that are logically runnable to be temporarily suspended is
called preemptive scheduling, and it is in contrast to the "run to completion" method.


6.Scheduling Algorithms
CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU.

Following are some scheduling algorithms we will study

 FCFS Scheduling.
 Round Robin Scheduling.
 SJF Scheduling.
 SRT Scheduling.
 Priority Scheduling.
 Multilevel Queue Scheduling.
 Multilevel Feedback Queue Scheduling.
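As a first taste of these algorithms, here is a sketch of FCFS, the simplest of them: processes run to completion in arrival order, so each process waits for the total burst time of everything ahead of it. The burst times below are made-up example numbers.

```python
def fcfs(bursts):
    """Given CPU burst times in arrival order, return each process's
    waiting time and the average waiting time under FCFS."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent in the ready queue
        elapsed += burst        # the process runs to completion
    return waits, sum(waits) / len(waits)

# One long job arriving first makes the short jobs wait (the convoy effect).
waits, avg = fcfs([24, 3, 3])   # waiting times 0, 24, 27; average 17.0
```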


10.Deadlock
“Crises and deadlocks when they occur have at least this advantage, that they force us to
think.”- Jawaharlal Nehru (1889 - 1964) Indian political leader

A set of processes is in a deadlock state if each process in the set is waiting for an event that can be
caused only by another process in the set. In other words, each member of the set of deadlocked
processes is waiting for a resource that can be released only by a deadlocked process. None of the
processes can run, none of them can release any resources, and none of them can be awakened. It
is important to note that the number of processes and the number and kind of resources possessed
and requested are unimportant.

The resources may be either physical or logical. Examples of physical resources are printers,
tape drives, memory space, and CPU cycles. Examples of logical resources are files,
semaphores, and monitors.

The simplest example of deadlock is where process 1 has been allocated non-shareable resource
A, say a tape drive, and process 2 has been allocated non-shareable resource B, say a printer. Now, if
it turns out that process 1 needs resource B (the printer) to proceed and process 2 needs resource A
(the tape drive) to proceed, and these are the only two processes in the system, each is blocked by
the other and all useful work in the system stops. This situation is termed deadlock. The system
is in a deadlock state because each process holds a resource being requested by the other process,
and neither process is willing to release the resource it holds.

Preemptable and Nonpreemptable Resources

Resources come in two flavors: preemptable and nonpreemptable. A preemptable resource is one
that can be taken away from a process with no ill effects. Memory is an example of a
preemptable resource. On the other hand, a nonpreemptable resource is one that cannot be taken
away from a process without causing ill effects. For example, a CD recorder is not preemptable
at an arbitrary moment.
Reallocating resources can resolve deadlocks that involve preemptable resources. Deadlocks that
involve nonpreemptable resources are difficult to deal with.


11.Necessary and Sufficient Deadlock Conditions

Coffman (1971) identified four conditions that must hold simultaneously for there to be a
deadlock.

1. Mutual Exclusion Condition

The resources involved are non-shareable.
Explanation: At least one resource must be held in a non-shareable mode; that is, only
one process at a time claims exclusive control of the resource. If another process requests that
resource, the requesting process must be delayed until the resource has been released.

2. Hold and Wait Condition

The requesting process holds resources already allocated to it while waiting for the requested resources.
Explanation: There must exist a process that is holding a resource already allocated to it while
waiting for additional resources that are currently being held by other processes.

3. No-Preemption Condition
Resources already allocated to a process cannot be preempted.
Explanation: Resources cannot be removed from a process; they are used to completion or
released voluntarily by the process holding them.

4. Circular Wait Condition

The processes in the system form a circular list or chain where each process in the list is
waiting for a resource held by the next process in the list.

As an example, consider the traffic deadlock in the following figure

Consider each section of the street as a resource.

1. The mutual exclusion condition applies, since only one vehicle can be on a section of the
street at a time.
2. The hold-and-wait condition applies, since each vehicle is occupying a section of the street
and waiting to move on to the next section.
3. The no-preemption condition applies, since a section of the street that is occupied by a
vehicle cannot be taken away from it.
4. The circular wait condition applies, since each vehicle is waiting for the next vehicle to move.
That is, each vehicle in the traffic is waiting for a section of street held by the next
vehicle.
The simple rule to avoid traffic deadlock is that a vehicle should only enter an intersection if it is
assured that it will not have to stop inside the intersection.

It is not possible to have a deadlock involving only a single process. A deadlock involves a
circular "hold-and-wait" condition between two or more processes, so "one" process cannot hold
a resource yet be waiting for another resource that it itself is holding. In addition, in this model deadlock
is not possible between two threads in a process, because it is the process that holds resources, not the
thread; that is, each thread has access to the resources held by the process.


12.Deadlock Prevention

Havender, in his pioneering work, showed that since all four of the conditions are necessary for
deadlock to occur, it follows that deadlock might be prevented by denying any one of the
conditions.

 Elimination of “Mutual Exclusion” Condition

The mutual exclusion condition must hold for non-shareable resources. That is, several
processes cannot simultaneously share a single resource. This condition is difficult to
eliminate because some resources, such as the tape drive and printer, are inherently non-
shareable. Note that shareable resources, like a read-only file, do not require mutually
exclusive access and thus cannot be involved in a deadlock.

 Elimination of “Hold and Wait” Condition

There are two possibilities for eliminating the second condition. The first alternative is
that a process be granted all of the resources it needs at once, prior to execution.
The second alternative is to disallow a process from requesting resources whenever it
holds previously allocated resources. This strategy requires that all of the resources a process
will need be requested at once. The system must grant resources on an “all or none”
basis. If the complete set of resources needed by a process is not currently available, then
the process must wait until the complete set is available. While the process waits,
however, it may not hold any resources. Thus the “wait for” condition is denied and
deadlocks simply cannot occur. This strategy can lead to serious waste of resources. For
example, a program requiring ten tape drives must request and receive all ten drives
before it begins executing. If the program needs only one tape drive to begin execution and
then does not need the remaining drives for several hours, substantial computer
resources (nine tape drives) will sit idle for several hours. This strategy can also cause indefinite
postponement (starvation), since not all of the required resources may become available at
once.
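The "all or none" grant described above can be sketched as follows; the resource names and unit counts are purely illustrative.

```python
# Free units of each (hypothetical) resource type.
available = {"printer": 1, "tape_drive": 10, "plotter": 1}

def request_all(needs):
    """Grant the complete set of resources atomically, or nothing.

    needs maps resource name -> units. Denying partial grants means a
    waiting process never holds anything, so hold-and-wait cannot arise.
    """
    if all(available.get(r, 0) >= n for r, n in needs.items()):
        for r, n in needs.items():
            available[r] -= n
        return True
    return False    # the caller must wait, holding no resources

ok = request_all({"printer": 1, "tape_drive": 2})   # granted as a whole
```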

 Elimination of “No-preemption” Condition

The no-preemption condition can be alleviated by forcing a process waiting for a
resource that cannot immediately be allocated to relinquish all of its currently held
resources, so that other processes may use them to finish. Suppose a system does allow
processes to hold resources while requesting additional resources. Consider what happens
when a request cannot be satisfied: a process may hold resources a second process needs
in order to proceed, while the second process holds the resources needed by the first
process. This is a deadlock. This strategy requires that when a process holding some
resources is denied a request for additional resources, the process must release its held
resources and, if necessary, request them again together with the additional resources.
Implementation of this strategy effectively denies the “no-preemption” condition.
High cost: when a process releases resources, it may lose all its work to that
point. One serious consequence of this strategy is the possibility of indefinite
postponement (starvation): a process might be held off indefinitely as it repeatedly
requests and releases the same resources.

 Elimination of “Circular Wait” Condition

The last condition, the circular wait, can be denied by imposing a total ordering on all of
the resource types and then forcing all processes to request resources in that order
(increasing or decreasing). This strategy imposes a total ordering of all resource types
and requires that each process request resources in numerical order (increasing or
decreasing) of enumeration. With this rule, the resource-allocation graph can never have a
cycle.
For example, provide a global numbering of all the resources, as shown

1 ≡ Card reader
2 ≡ Printer
3 ≡ Plotter
4 ≡ Tape drive
5 ≡ Card punch

Now the rule is this: processes can request resources whenever they want to, but all
requests must be made in numerical order. A process may request first a printer and then a
tape drive (order: 2, 4), but it may not request first a plotter and then a printer (order: 3,
2). The problem with this strategy is that it may be impossible to find an ordering that
satisfies everyone.
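A sketch of checking that a request sequence obeys the global numbering above:

```python
# Global numbering of resources, following the table above.
ORDER = {"card_reader": 1, "printer": 2, "plotter": 3,
         "tape_drive": 4, "card_punch": 5}

def in_order(requests):
    """True if the resources are requested in strictly increasing order."""
    numbers = [ORDER[r] for r in requests]
    return all(a < b for a, b in zip(numbers, numbers[1:]))

ok = in_order(["printer", "tape_drive"])    # order 2, 4: allowed
bad = in_order(["plotter", "printer"])      # order 3, 2: rejected
```

Since every process acquires resources along the same total order, no cycle of waiting processes can ever form.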

13.Deadlock Avoidance

This approach to the deadlock problem anticipates deadlock before it actually occurs. It
employs an algorithm to assess the possibility that deadlock could occur, and acts
accordingly. This method differs from deadlock prevention, which guarantees that deadlock
cannot occur by denying one of the necessary conditions of deadlock.

If the necessary conditions for a deadlock are in place, it is still possible to avoid deadlock by
being careful when resources are allocated. Perhaps the most famous deadlock avoidance
algorithm, due to Dijkstra [1965], is the Banker’s algorithm, so named because the process is
analogous to that used by a banker in deciding whether a loan can safely be made.

Banker’s Algorithm

In this analogy:

Customers ≡ processes
Units ≡ resources, say, tape drives
Banker ≡ operating system

Customers   Used   Max
    A        0      6
    B        0      5
    C        0      4
    D        0      7
Available Units = 10

Fig. 1

In the above figure, we see four customers, each of whom has been granted a number of credit
units. The banker reserved only 10 units, rather than 22, to service them. At a certain moment,
the situation becomes:

Customers   Used   Max
    A        1      6
    B        1      5
    C        2      4
    D        4      7
Available Units = 2

Fig. 2

Safe State: The key to a state being safe is that there is at least one way for all customers to finish. In
the loan analogy, the state of figure 2 is safe because, with 2 units left, the banker can delay any
request except C's, thus letting C finish and release all four of its resources. With four units in hand,
the banker can let either D or B have the necessary units, and so on.

Unsafe State: Consider what would happen if a request from B for one more unit were granted
in figure 2 above.

We would have following situation

Customers   Used   Max
    A        1      6
    B        2      5
    C        2      4
    D        4      7
Available Units = 1

Fig. 3

This is an unsafe state.

If all the customers namely A, B, C, and D asked for their maximum loans, then banker could not
satisfy any of them and we would have a deadlock.

Important Note: It is important to note that an unsafe state does not imply the existence, or
even the eventual existence, of a deadlock. What an unsafe state does imply is simply that some
unfortunate sequence of events might lead to a deadlock.

The Banker's algorithm is thus to consider each request as it occurs and see whether granting it leads to
a safe state. If it does, the request is granted; otherwise, it is postponed until later. Habermann [1969]
has shown that execution of the algorithm has complexity proportional to N², where N is the
number of processes, and since the algorithm is executed each time a resource request occurs, the
overhead is significant.
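For the single-resource scenario in the figures above, the safety check at the heart of the Banker's algorithm can be sketched as follows (the function and variable names are our own):

```python
def is_safe(free, used, maximum):
    """Return True if every customer can be run to completion in some order.

    free: units the banker still holds; used/maximum: customer -> units.
    """
    remaining = dict(used)
    while remaining:
        for c in sorted(remaining):
            if maximum[c] - remaining[c] <= free:   # c's extra need fits
                free += remaining.pop(c)            # c finishes, returns units
                break
        else:
            return False    # no customer can finish: unsafe
    return True

maximum = {"A": 6, "B": 5, "C": 4, "D": 7}
safe = is_safe(2, {"A": 1, "B": 1, "C": 2, "D": 4}, maximum)    # Fig. 2
unsafe = is_safe(1, {"A": 1, "B": 2, "C": 2, "D": 4}, maximum)  # Fig. 3
```

In Fig. 2 the completion order C, B, A, D lets everyone finish, so the state is safe; in Fig. 3 nobody's remaining need fits in the single free unit, so the state is unsafe.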

14.Deadlock Detection

Deadlock detection is the process of actually determining that a deadlock exists and identifying
the processes and resources involved in it.
The basic idea is to check allocations against resource availability for all possible allocation
sequences to determine whether the system is in a deadlocked state. Of course, the deadlock detection
algorithm is only half of this strategy. Once a deadlock is detected, there needs to be a way to
recover; several alternatives exist:

 Temporarily preempt resources from deadlocked processes.
 Roll a process back to some checkpoint, allowing preemption of a needed resource, and
restart the process from the checkpoint later.
 Successively kill processes until the system is deadlock free.

These methods are expensive in the sense that each iteration invokes the detection algorithm until
the system proves to be deadlock free. The complexity of the algorithm is O(N²), where N is the
number of processes. Another potential problem is starvation: the same process may be killed repeatedly.
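The detection step itself can be sketched as cycle-finding in a wait-for graph; the process names below are illustrative.

```python
def find_deadlock(waits_for):
    """waits_for maps a process to the process it is waiting on.
    Return the set of processes on a wait cycle (empty if none)."""
    deadlocked = set()
    for start in waits_for:
        path, node = [], start
        while node in waits_for and node not in path:
            path.append(node)
            node = waits_for[node]          # follow the wait edge
        if node in path:                    # walked back onto the path
            deadlocked.update(path[path.index(node):])
    return deadlocked

# P1 and P2 wait on each other; P3 waits on P1 but is not on the cycle.
cycle = find_deadlock({"P1": "P2", "P2": "P1", "P3": "P1"})
```

Only P1 and P2 are deadlocked; P3 is merely blocked and could proceed once the cycle is broken, for instance by killing one of its members.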


15.Memory Management
This chapter describes the way that Linux handles the memory in the system. The memory
management subsystem is one of the most important parts of the operating system.


Since the early days of computing, there has been a need for more memory than exists
physically in a system. Strategies have been developed to overcome this limitation, and the most
successful of these is virtual memory. Virtual memory makes the system appear to have more
memory than it actually has by sharing it between competing processes as they need it. This
sleight of hand is invisible to those processes and to the users of the system. Virtual memory
allows:

Large Address Spaces
The operating system makes the system appear as if it has a larger amount of memory
than it actually has. The virtual memory can be many times larger than the physical
memory in the system.
Fair Physical Memory Allocation
The memory management subsystem fairly shares the physical memory of the
system between the running processes.
Protection
Memory management ensures that every process in the system is protected from all other
processes; in this way a crashing application cannot affect other processes or the
operating system itself.
Shared Virtual Memory
Virtual memory allows two processes to share memory between themselves, for example
when using a shared library. Shared libraries mean that library code needs to exist in only one
place, not duplicated in every application.

Before considering the methods that Linux uses to support virtual memory it is useful to consider
an abstract model that is not cluttered by too much detail.


16.An Abstract Model of Virtual Memory


Figure: Abstract model of Virtual to Physical address mapping

In this model, both virtual and physical memory are divided up into handy sized chunks called
pages. These pages are all the same size; they need not be, but if they were not, the system would
be very hard to administer. Linux on Alpha AXP uses 8 Kbyte pages. Each of these pages is
given a unique number: the Page Frame Number (PFN). For every instruction in a program,
for example to load a register with the contents of a location in memory, the CPU performs a
mapping from a virtual address to a physical one. Also, if the instruction itself references
memory, then a translation is performed for that reference.

The address translation between virtual and physical memory is done by the CPU using page
tables which contain all the information that the CPU needs. Typically there is a page table for
every process in the system. Figure shows a simple mapping between virtual addresses and
physical addresses using page tables for Process X and Process Y. This shows that Process X's
virtual PFN 0 is mapped into memory in physical PFN 1 and that Process Y's virtual PFN 1 is
mapped into physical PFN 4. Each entry in the theoretical page table contains the following
information:

 The virtual PFN,


 The physical PFN that it maps to,
 Access control information for that page.

To translate a virtual address into a physical one, the CPU must first work out the address's
virtual PFN and the offset within that virtual page. If the page size is a power of 2, this
can be done easily by masking and shifting. Looking again at the figure, and assuming a
page size of 8192 bytes (hexadecimal 0x2000) and an address of 0x2194 in Process Y's
virtual address space, the CPU would translate that address into offset 0x194 within virtual
PFN 1.
The CPU then searches the process's page tables for an entry which matches the virtual
PFN. This gives us the physical PFN we are looking for. The CPU then takes that physical
PFN and multiplies it by the page size to get the address of the base of that page in physical
memory. Finally, the CPU adds in the offset to the instruction or data that it needs. Using the
above example again, Process Y's virtual PFN 1 is mapped to physical PFN 4, which starts at
0x8000 (4 × 0x2000). Adding in the 0x194 byte offset gives us a final physical address of
0x8194.

By mapping virtual to physical addresses this way, the virtual memory can be mapped into the
system's physical pages in any order. For example, in the figure Process X's virtual PFN 0 is
mapped to physical PFN 1, whereas virtual PFN 7 is mapped to physical PFN 0 even though it is
higher in virtual memory than virtual PFN 0. This demonstrates an interesting byproduct of
virtual memory: the pages of virtual memory do not have to be present in physical memory in
any particular order.
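The translation walked through above is short enough to sketch directly, using the 8 Kbyte (0x2000) pages of the example; the page table contents are hypothetical except for the virtual PFN 1 to physical PFN 4 mapping taken from the text.

```python
PAGE_SIZE = 0x2000          # 8192-byte pages, as in the example

# Process Y's toy page table: virtual PFN -> physical PFN.
page_table_y = {0: 3, 1: 4}

def translate(vaddr, page_table):
    """Map a virtual address to a physical one via the page table."""
    vpfn, offset = divmod(vaddr, PAGE_SIZE)  # the mask-and-shift step
    pfn = page_table[vpfn]                   # page-table lookup
    return pfn * PAGE_SIZE + offset          # physical base + offset

paddr = translate(0x2194, page_table_y)      # offset 0x194 into virtual PFN 1
```

As in the text, 0x2194 resolves to physical PFN 4 with base 0x8000, giving 0x8194.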


17,19,20.Demand paging

In computer operating systems, demand paging (as opposed to anticipatory paging) is an
application of virtual memory. In a system that uses demand paging, the operating system copies
a disk page into physical memory only if an attempt is made to access it (i.e., if a page fault
occurs). It follows that a process begins execution with none of its pages in physical memory,
and many page faults will occur until most of a process's working set of pages is located in
physical memory. This is an example of a lazy loading technique.

Advantages
Demand paging, as opposed to loading all pages immediately:

 Only loads pages that are demanded by the executing process.
 Leaves more space free in main memory, so more processes can be loaded at once, reducing
context switching, which consumes large amounts of resources.
 Incurs less loading latency at program startup, as less information is accessed from
secondary storage and less information is brought into main memory.

Disadvantages
 Individual programs face extra latency when they access a page for the first time. So
demand paging may have lower performance than anticipatory paging algorithms.
 Programs running on low-cost, low-power embedded systems may not have a memory
management unit that supports page replacement.
 Memory management with page replacement algorithms becomes slightly more complex.
 Possible security risks, including vulnerability to timing attacks; see Percival 2005 Cache
Missing for Fun and Profit (specifically the virtual memory attack in section 2).

Basic concept


Demand paging follows the principle that pages should only be brought into memory if the
executing process demands them. This is often referred to as lazy evaluation, as only those
pages demanded by the process are swapped from secondary storage to main memory. Contrast this
with pure swapping, where all memory for a process is swapped from secondary storage to main
memory during process startup.

When a process is to be swapped into main memory for processing, the pager guesses which pages
will be used before the process is swapped out again, and loads only those pages into memory.
This avoids loading pages that are unlikely to be used and focuses on the pages needed during
the current period of execution. Not only is unnecessary page loading during swapping avoided,
but the pager also tries to anticipate which pages will be needed, so that fewer pages have to
be loaded during execution.

Commonly, a page table implementation is used to achieve this. The page table maps logical
memory to physical memory, and a valid-invalid bit marks whether each page is valid or invalid.
A valid page is one that currently resides in main memory; an invalid page is one that
currently resides in secondary memory. When a process tries to access a page, the following
steps are generally followed:

 Attempt to access the page.
 If the page is valid (in memory), continue processing the instruction as normal.
 If the page is invalid, a page-fault trap occurs.
 Check whether the memory reference is a valid reference to a location in secondary memory.
If not, the process is terminated (illegal memory access). Otherwise, the required page must
be paged in.
 Schedule a disk operation to read the desired page into main memory.
 Restart the instruction that was interrupted by the operating system trap.
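The steps above can be sketched as a toy simulation. This is not any real OS's implementation; the page table, backing store, and frame allocator below are invented stand-ins used only to show the valid/invalid check and the page-fault path.

```python
# Pages not yet in memory live in a simulated backing store on "disk".
backing_store = {0: "code page", 1: "data page"}
page_table = {vpfn: {"valid": False, "frame": None} for vpfn in backing_store}
physical_memory = {}       # frame number -> page contents
next_free_frame = 0
page_faults = 0

def access(vpfn):
    """Access a virtual page, paging it in on a fault."""
    global next_free_frame, page_faults
    if vpfn not in page_table:                 # illegal memory access
        raise MemoryError("invalid reference to page %d" % vpfn)
    entry = page_table[vpfn]
    if not entry["valid"]:                     # page-fault trap
        page_faults += 1
        frame = next_free_frame                # schedule the "disk read"
        next_free_frame += 1
        physical_memory[frame] = backing_store[vpfn]
        entry.update(valid=True, frame=frame)  # mark valid, restart access
    return physical_memory[entry["frame"]]

access(0); access(0); access(1)
print(page_faults)   # 2: one fault per page, none on the repeated access
```

Note how the second access to page 0 finds the valid bit set and proceeds without a fault, which is exactly the lazy-loading behaviour described above.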


18. Swapping
When the physical memory in the system runs out and a process needs to bring a page into
memory then the operating system must decide what to do. It must fairly share the physical
pages in the system between the processes running in the system, therefore it may need to
remove one or more pages from the system to make room for the new page to be brought into
memory. How virtual pages are selected for removal from physical memory affects the
efficiency of the system. Linux uses a page aging technique to fairly choose pages which might
be removed from the system. This scheme involves every page in the system having an age
which changes as the page is accessed. The more that a page is accessed, the younger it is; the
less that it is accessed the older it becomes. Old pages are good candidates for swapping.

If the page to be removed came from an image or data file and has not been written to then the
page does not need to be saved. Instead it can be discarded and if the process needs that page
again it can be brought back into memory from the image or data file again. However, if the page
has been written to then the operating system must preserve the contents of that page so that it
can be accessed at a later time.

This type of page is known as a dirty page and it is saved in a special sort of file called the swap
file. These unwanted dirty virtual pages are stored on hard disk in the swap file. Accesses to the
disk are very long relative to the speed of the CPU and the operating system must juggle the need
to write pages to disk with the need to retain them in memory to be used again. The operating
system must use an algorithm which fairly swaps out the less used pages of the processes
competing for resources. If the swapping algorithm is not efficient then a condition known as
thrashing occurs. In this case, pages are constantly being written to disk and then read
back, and the operating system is too busy to allow much real work to be performed. If, for
example, physical PFN 1 in the figure is being regularly accessed, then it is not a good
candidate for swapping to hard disk.
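The page-aging idea can be modelled in a few lines. This is only the principle as described above, not Linux's actual implementation, and the aging increments below are arbitrary choices for the sketch.

```python
# Each resident page (keyed by physical PFN) carries an age: accessed
# pages get younger, idle pages grow older, and the oldest page is the
# candidate for swapping out.
ages = {0: 0, 1: 0, 2: 0}        # physical PFN -> age, all young at start

def tick(referenced):
    """One aging pass: referenced pages get younger, others older."""
    for pfn in ages:
        if pfn in referenced:
            ages[pfn] = max(0, ages[pfn] - 3)   # accessed: younger
        else:
            ages[pfn] += 1                       # idle: older

def eviction_candidate():
    return max(ages, key=lambda pfn: ages[pfn])  # oldest page wins

for _ in range(5):
    tick(referenced={1})          # PFN 1 is accessed on every pass
print(eviction_candidate())       # an idle page, never the busy PFN 1
```

After five passes the regularly accessed PFN 1 has age 0, so, as the text says of a regularly accessed page, it is never the one chosen for swapping.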

21. Memory Fragmentation and Segmentation

Mickey Applebaum

01 Apr 1998

Editor's Note: "Technically Speaking" answers your technical questions, focusing on issues that
affect network administrators. To submit a question for a future column, please send an e-mail
message to nwc-editors@novell.com, or fax the question to 1-801-228-4576.

I was recently asked to solve the following problems:

 My company's server has plenty of memory, but NetWare 4 displays Short Term
Memory Alloc messages. What's wrong?
 My company's server has 128 MB of RAM, but NetWare 4 reports only 64 MB of RAM.
What's wrong?

These errors are caused by two problems: The first error is caused by memory fragmentation,
and the second error is caused by memory segmentation. This article describes how memory
fragmentation and memory segmentation occur and explains how you can resolve both problems
on a NetWare 4 server. (Because the network operating system component of intraNetWare is
NetWare 4.11, this article also applies to intraNetWare.)

MEMORY FRAGMENTATION

Memory fragmentation eventually occurs on all NetWare 4 servers. Depending on the way you
manage a server and the NetWare Loadable Modules (NLMs) you run, memory fragmentation
can occur daily or occasionally, over a period of years.

The most common cause of memory fragmentation is loading and unloading a scheduled NLM,
such as a backup NLM. However, other automated NLMs can also cause memory fragmentation.
For example, Novell's FTP server can cause memory fragmentation because the FTP server
automatically loads when a request is received and then unloads when the request times out.

Memory fragmentation can also be caused by NLMs that are unloaded and then reloaded as part
of another process. For example, a backup NLM could schedule the unloading of a database
NLM during the backup process. The backup NLM would then reload the database NLM when
this process was completed.
Since a database NLM is designed to be loaded and left running, this NLM makes permanent
memory pool allocations, which are not returned to system memory when the NLM is unloaded.
When the database NLM is reloaded, it may not reuse its permanent memory pool allocation and
may, therefore, leave gaps in memory. As a result, memory fragmentation may occur.

Although memory fragmentation can cause several errors, it most often results in Short Term
Memory Alloc messages at the server console. These messages indicate that small memory
resources are not available to the requesting process.

SOLUTIONS FOR MEMORY FRAGMENTATION

To resolve memory fragmentation, you should first ensure that the following command is
included in the STARTUP.NCF file before you load disk drivers or name space drivers:

 SET RESERVED BUFFERS BELOW 16 MEG = 300

By setting this parameter to 300, you allocate the largest memory pool possible in low memory
to be used for short-term memory allocations. As a result, NetWare 4 does not need to allocate
high memory to NLMs that make short-term memory allocations.

If changing this parameter does not resolve memory fragmentation, you must down the server
and restart it. If the server frequently experiences severe memory fragmentation, you should
identify which NLMs are being loaded and unloaded and determine how you can leave these
NLMs loaded all the time.

MEMORY SEGMENTATION

Memory segmentation occurs when system memory is presented to NetWare 4 as two or more
noncontiguous memory blocks. Although several factors can cause this condition, the result is
always the same: The NetWare Cache Memory Allocator cannot use all of the installed RAM.
Depending on the cause, NetWare 4 may or may not see all of the installed RAM.

If the NetWare Cache Memory Allocator cannot use all of the installed RAM, the server may
display error messages. Most frequently, the server reports that the NetWare Cache Memory
Allocator is out of available memory.

SOLUTIONS FOR MEMORY SEGMENTATION

The solutions used to resolve memory segmentation on NetWare 3 servers do not work on
NetWare 4 servers. NetWare 3 is based on a multipool memory model and doesn't allocate
memory for the NetWare Cache Memory Allocator until the first volume is mounted. As a result,
you can prevent disk drivers from loading in the STARTUP.NCF file, and you can use the
REGISTER MEMORY command before loading disk drivers and mounting volumes in the
AUTOEXEC.NCF file. NetWare 3 can then see all of the available memory before allocating
memory for the NetWare Cache Memory Allocator.
Unlike NetWare 3, NetWare 4 is based on a single-pool, flat-memory model. When NetWare 4
is initialized, it immediately allocates memory for the NetWare Cache Memory Allocator. As a
result, NetWare 4 can allocate only the memory that is physically available at the time. Once
NetWare 4 allocates memory for the NetWare Cache Memory Allocator, NetWare 4 cannot
dynamically reallocate this memory.

If you use the REGISTER MEMORY command to resolve memory segmentation, NetWare 4
cannot use the additional memory it sees for internal processes. NetWare 4 can use this
additional memory only for file cache buffers.

To resolve memory segmentation, you should first ensure that you have not imposed false
memory limitations on the server. Loading a DOS memory manager (HIMEM.SYS or
EMM386.EXE, for example) in the CONFIG.SYS file is one of the most common causes of
memory segmentation. You should also ensure that you are not loading a CONFIG.SYS file on
the server's boot disk or boot partition.

Next, you should run the computer's Setup program to ensure that you have not set any special
memory options. If you do not understand the memory options, you can contact the computer
manufacturer to ensure that you are using the proper settings for a NetWare 4 server.

Next, you should install the latest NetWare 4 patches and updates. One of the most important
patches is the Loader patch, which is a static patch that is applied to the SERVER.EXE file. This
patch changes the way NetWare 4 looks at system memory. (You can download intraNetWare
Support Pack 4.0, which contains the latest NetWare 4 patches and updates, from Novell's
Support Connection World-Wide Web site at http://support.novell.com.)

Unfortunately, the Loader patch does not resolve memory segmentation on some computers. And
on some computers, this patch prevents NetWare 4 from running, either causing an ABEND
when the SERVER.EXE file is loaded or locking the computer as soon as this file starts to run.

The Loader patch may also have conflicts with some PCI plug-and-play adapters. These adapters
set their memory address buffers when they power on and cannot reset these buffers when
NetWare 4 loads. As a result, these adapters may fail if you install the Loader patch.

If the Loader patch causes these kinds of problems, you must replace the modified
SERVER.EXE file with the original file (which is renamed SERVER.OLD when you install the
patch).

If you have not imposed false memory limitations, if you have selected the proper memory
options, and if you have installed the latest patches and updates and NetWare 4 still does not see
all of the installed memory, the problem is a physical limitation of the computer. This problem
can occur if NetWare 4 is running on a clone PCI and ISA bus computer.

At this point, you should contact the computer manufacturer to see if this manufacturer has a
system BIOS upgrade or a flash update that allows NetWare 4 to see all of the installed RAM.
Because some computers were not designed specifically to be used as servers, the system BIOS
may implement a RAM addressing limiter to ensure compatibility with Windows 95, Windows
3.x, and DOS. The limiter allows the computer to have more RAM but reports only 16 MB of
RAM to NetWare 4. In some PCI and ISA computers, for example, the limiter reports a
maximum of 64 MB of RAM to NetWare 4.

The computer may also be physically limited because it has a 16-bit bus master device. Such
devices can address only a limited amount of RAM because their embedded processor has a
fixed number of address lines available. Trying to force 16-bit bus master devices to address
RAM beyond their physical limitation eventually causes RAM corruption, which leads to
ABENDs and data corruption.

Although you may not have installed a 16-bit bus master device, the computer may have an
embedded 16-bit bus master device. For example, many lower cost PCI and ISA computers that
were not explicitly designed to be servers use 16-bit PCI-to-ISA bridge devices. These bridges
are inherently bus master devices. And because the computer has an embedded 16-bit bus master
device, the computer cannot directly address memory above 16 MB.

Some device manufacturers are providing drivers for their 16-bit bus master devices that allow
for double buffering. These drivers can reduce memory segmentation because they perform
double memory moves for each I/O operation. Because these drivers do not remove the
computer's physical limitation, however, a highly utilized server may run out of reserved buffers
below 16 MB. Once again, RAM corruption may occur.

To permanently resolve memory segmentation, you must use computers that are specifically
designed to be used as servers. These computers have a true 32-bit bus design, which means that
they use EISA or Micro Channel primary buses and can also have PCI local buses through 32-bit
PCI bridge devices.

Some computer manufacturers have produced lower cost PCI and ISA computers that use a 32-
bit PCI bridge device. Before purchasing such a computer, you should compare it to a true 32-bit
computer. You may find that many lower cost PCI and ISA computers use 16-bit embedded
LAN and disk adapters. EISA and PCI computers, on the other hand, use 32-bit embedded
adapters.

When configuring a computer to be used as a server, you should use 32-bit LAN and disk
adapters. If you do install a 16-bit ISA, Micro Channel, or PCI adapter, you should ensure that it
is not a bus master adapter. (For more information about how memory segmentation affects
NetWare 4, download the Memory Segmentation technical information document from Novell's
Support Connection site at http://support.novell.com. To find this document, search for
document number 2908018.)

CONCLUSION

Memory fragmentation occurs with all operating systems, not just NetWare 4. If memory
fragmentation begins to affect system performance or data integrity, you must reboot the server.
Memory segmentation, on the other hand, is caused by the physical limitations of the computer.
(Such physical limitations also affect all operating systems, not just NetWare 4.) Either the
computer is limited through its inherent design or through the use of devices that prevent
NetWare 4 from directly addressing all of the available memory. The best solution is to use a
computer that does not have these physical limitations.


22. I/O scheduling
From Wikipedia, the free encyclopedia

Input/output (I/O) scheduling is the method by which computer operating systems decide the
order in which block I/O operations will be submitted to storage volumes. I/O scheduling is
sometimes called 'disk scheduling'.


23. Purpose
An I/O scheduler can serve many purposes depending on its goals; some common goals are:

 To minimize time wasted by hard disk seeks.
 To prioritize certain processes' I/O requests.
 To give a share of the disk bandwidth to each running process.
 To guarantee that certain requests will be issued before a particular deadline.

Implementation
I/O scheduling usually has to work with hard disks, which share the property that access time
is long for requests far away from the current position of the disk head (moving the head is
called a seek). To minimize the effect this has on system performance, most I/O schedulers
implement a variant of the elevator algorithm, which re-orders the incoming, randomly ordered
requests into the order in which they will be found on the disk.
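The re-ordering step can be sketched as follows. The track numbers are made up for illustration, and this shows only the basic elevator idea (service everything in the current direction of travel, then reverse), not any particular scheduler's implementation.

```python
def elevator_order(requests, head, direction="up"):
    """Reorder pending track requests elevator (SCAN) style."""
    higher = sorted(r for r in requests if r >= head)
    lower = sorted((r for r in requests if r < head), reverse=True)
    # Sweep toward one end first, then service the rest on the way back.
    return higher + lower if direction == "up" else lower + higher

# Head at track 53, moving upward, with a hypothetical request queue:
print(elevator_order([98, 183, 37, 122, 14, 124, 65, 67], 53))
# [65, 67, 98, 122, 124, 183, 37, 14]
```

The randomly ordered queue comes back sorted along the head's direction of travel, so the head makes one pass up the disk and one pass back instead of seeking back and forth.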

Common disk I/O scheduling disciplines


 Random scheduling (RSS);
 First In, First Out (FIFO), also known as First Come First Served (FCFS);
 Last In, First Out (LIFO);
 Shortest seek first, also known as Shortest Seek / Service Time First (SSTF);
 Elevator algorithm, also known as SCAN (including its variants, C-SCAN, LOOK, and
C-LOOK);
 N-Step-SCAN, a SCAN of N records at a time;
 FSCAN, N-Step-SCAN where N equals queue size at start of the SCAN cycle;
 Completely Fair Queuing (Linux);
 Anticipatory scheduling;
 Noop scheduler;
 Deadline scheduler.
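Two of the disciplines above can be compared directly. This is a sketch on a made-up request queue: FIFO/FCFS services requests in arrival order, while SSTF always picks the pending request nearest the current head position.

```python
def seek_distance(order, head):
    """Total head movement when servicing requests in the given order."""
    total = 0
    for track in order:
        total += abs(track - head)
        head = track
    return total

def sstf_order(requests, head):
    """Shortest Seek Time First: always service the nearest request."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

queue, head = [98, 183, 37, 122, 14, 124, 65, 67], 53
print(seek_distance(queue, head))                     # FIFO order: 640 tracks
print(seek_distance(sstf_order(queue, head), head))   # SSTF order: 236 tracks
```

On this queue SSTF cuts total head movement from 640 to 236 tracks, which illustrates why seek-aware disciplines exist; note, though, that SSTF can starve requests at the far edges of the disk, which the SCAN variants address.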


24. Disk Operating System (DOS)

5.1 INTRODUCTION

When the computer starts, it loads the operating system, which takes control of the machine.
An Operating System is a set of programs that help in controlling and managing the hardware
and the software resources of a computer system. A good operating system should have the
following features:

1. Help in the loading of programs and data from external sources into the internal memory
before they are executed.

2. Help programs to perform input/output operations, such as:

o Print or display the result of a program on the printer or the screen.

o Store the output data or programs written on the computer in storage device.

o Communicate the message from the system to the user through the VDU.

o Accept input from the user through the keyboard or mouse.


5.2 OBJECTIVES

At the end of this lesson, you would be able to:

 explain the concept of an operating system


 discuss the functions of operating system
 understand the procedures of loading operating system into the memory
 use file management features of operating system
 create separate locations for logically related files
 copy files from one computer to another
 use Windows for File Management

5.3 DISK OPERATING SYSTEM

As the name suggests, the Operating System is used for operating the system or the computer.
It is a set of computer programs, also known as DOS (Disk Operating System). The main
functions of DOS are to manage disk files and to allocate system resources according to
requirements. DOS provides features essential to control hardware devices such as keyboards,
screens, disk devices, printers, modems, and programs.

Basically, DOS is the medium through which the user and the external devices attached to the
system communicate with the system. DOS translates the commands issued by the user into a
format that is understandable by the computer, and instructs the computer to work accordingly.
It also translates results and any error messages into a format the user can understand.

(a) Loading of DOS

DOS is loaded into the computer memory by the BOOT record. The BOOT record, in turn, is
triggered by a ROM program already present in the computer.

The system start-up routine of ROM runs a reliability test called the Power On Self Test
(POST), which initializes the chips and the standard equipment attached to the PC, checks
whether the peripherals connected to the computer are working or not, and then tests the RAM
memory. Once this process is over, the ROM bootstrap loader attempts to read the boot record
and, if successful, passes control to it. The instructions/programs in the boot record then
load the rest of the program. After the ROM bootstrap loader turns control over to the boot
record, the boot record tries to load DOS into memory by reading the two hidden files
IBMBIO.COM and IBMDOS.COM. If these two are found, they are loaded along with the DOS command
interpreter COMMAND.COM. COMMAND.COM contains routines that interpret what is typed in through
the keyboard in the DOS command mode. By comparing the input with its list of commands, it
acts by executing the required routines/commands or by searching for the required utility and
loading it into memory.
5.4 COMPUTER FILES IN DOS

A file may contain a program or any other kind of information. Generally, a file must be given
a name that can be used to identify it. DOS permits the user to assign a name consisting of
two parts to a file - a primary and a secondary name. The primary name can be a maximum of
eight characters (consisting of alphabets, numbers, and hyphens), and the secondary name
should consist of three characters, which is optional. The primary name and the secondary (or
extension) name, if any, are to be separated by a dot (.).

The primary name is like a person's proper name, whereas extensions are like surnames. Using
an extension with the file name is preferable, though optional. However, once an extension is
specified, the file can only be referred to by its complete name (primary name and extension,
with the period separating them). Using extensions can be an excellent way of naming a file so
that it can be identified easily.
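The naming rule above can be expressed as a short check. This is a simplified sketch: it enforces only the lengths and the characters the text names (alphabets, numbers, hyphens); real DOS permits a few more symbols.

```python
import re

def is_valid_dos_name(filename):
    """Check the 8.3 rule: primary name of 1-8 characters, optional
    extension of 1-3 characters, separated by a dot."""
    part = r"[A-Za-z0-9-]"
    pattern = part + r"{1,8}(\." + part + r"{1,3})?"
    return re.fullmatch(pattern, filename) is not None

print(is_valid_dos_name("Employee"))        # True: no extension needed
print(is_valid_dos_name("Employee.Exe"))    # True: 8 + 3 characters
print(is_valid_dos_name("Employees1.Exe"))  # False: primary name too long
```

The examples mirror the table below: Employee and Employee.Exe are valid, while a ten-character primary name is rejected.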

Examples:

Filename        Primary Name    Separator    Secondary Name

Employee        Employee
Employee.Exe    Employee        .            Exe
Employee.Dbf    Employee        .            Dbf

DOS has a way of showing which disk drive is currently active. The floppy disk drives are
assigned the letters A and B, whereas the hard disk drive is assigned the letter C. If your PC
has a single floppy drive, the drive would be A, and if it has two, they would be termed A and
B. If your PC includes a hard disk besides an FDD (Floppy Disk Drive), the drive names would
be A and C. If the prompt is A, it implies that the first floppy disk drive is active, whereas
the DOS prompt would be C if the hard disk is active. Data as well as instructions reside in
files stored on a disk.

5.5 DIRECTORY STRUCTURE IN DOS

The files in the computer come from various sources. Some files come with DOS, while others
come with applications such as a word processor. These files contain code and other
information that is necessary to make the computer application operational. Before long, there
will be hundreds or even thousands of files in the computer, which can make it difficult to
locate specific files.

The names of all the files created in a disk are stored in its directory. Directory is just like a file
folder, which contain all the logically related files. DOS files are organized in a hierarchical or
an inverted tree-like structure. The general analogy is with a file cabinet containing a number of
drawers, which in turn may contain folders. The content of these folders is the needed
information.

The file cabinet here is the ROOT DIRECTORY, the drawer is INDIVIDUAL
DIRECTORY, the folders are SUBDIRECTORY and the information in these folders may in
turn be classified as FILES.

Otherwise, the large number of files that get created for various purposes in a disk can make the
directory huge and difficult to view and manage. Therefore, DOS enables the user to organize
the files in a disk into directories and sub-directories in a hierarchical structure. Directories can
contain other directories. A directory within another directory is called a sub-directory.

Of course, there may be sub-directories of sub-directories, but a file name is the furthest you
may descend down the (inverted) tree of directories and files. Thus, a file name corresponds to
a tree leaf, a sub-directory to a branch, the directory to the trunk, and the root directory to the
root of the tree, hence the name ROOT DIRECTORY.

Sample of Directory Structure

The directory/sub-directory is represented in bold letters.

5.6 DIRECTORY COMMAND

The content of each of the sub-directory cannot be viewed unless it is made active, or a sub-
directory is specified as part of the DIR command. Doing either of these requires an
understanding of the concepts of navigating around the disk.

The directory the user is in at any point of time is called the
WORKING/PRESENT/CURRENT directory. DOS indicates which directory you are in by
displaying the directory's name in the command prompt. For example, the following command
prompt indicates that you are in the DOS directory: C:\DOS>. Knowing which directory is
current helps you find files, and to move from one directory to another more easily.
Typically, the ROOT DIRECTORY (\) is the initial working directory. The entire specification
of a directory from the root is called a PATH. By itself, the DIR command applies to the
working/present directory. When specifying the path to be followed while travelling to a
sub-directory, the names of the sub-directories at adjacent levels are separated by a
backslash (\).

5.7 USING PATH TO SPECIFY THE LOCATION OF FILES

A path is the route that leads from the root directory of a drive to the file you want to use.

For example, to access the NOS.LET file in the LETTER subdirectory of NOS directory, DOS
must go from the ROOT (\) directory through the NOS directory to the LETTER directory, as
shown in the following figure:

To specify the same path at the command prompt, you would type it as shown in the following
illustration:

C:\NOS\LETTER\NOS.LET

This is the path or route to the file NOS.LET. The first letter and the colon (C:) represent
the drive the file is on. The first backslash (\) represents the root directory. The second
backslash separates the NOS directory from the LETTER sub-directory. The third backslash
separates the LETTER sub-directory from the file name, NOS.LET.

Note: MS-DOS recognizes path up to 67 characters long (including the Drive letter, colon, and
backslash).

5.8 DIR COMMAND

The DIR command gives the list of files on the disk that is mounted on the active drive.

Syntax: C:\> DIR or A:\> DIR


Example

A:\> DIR

Volume in drive A has no label


Directory of A:\

COMMAND COM 23612 10-20-88 11.30a


DISKCOPY COM 4235 10-20-88 12.00p
FORMAT COM 15780 03-12-89 12.00p
3 files 325012 bytes free

A:\>

As can be seen, on typing DIR followed by the <Enter> key at the DOS prompt, five columns of
data are displayed, followed by the number of files and the bytes that are free on the disk.
The first column contains the primary name of each file resident on the disk. Most files are
named with an extension, which appears in the second column. The third column contains the
size of the file in bytes, and the fourth and fifth columns show the date and time at which
the file was created or last modified. The last line displays the number of file(s) and the
remaining free disk space in bytes. It is important to note that the DIR command only displays
the names of the files and not their contents.

5.9 CHANGING A DIRECTORY

All the names displayed using the DIR command that have <DIR> beside them are directories.
You can see the list of files in another directory by changing to that directory and then using
the DIR command again.

The Change Directory (CHDIR) or CD command enables the user to travel around the
directories in a disk. Type the CD command at the command prompt.

Syntax:

A:\> CHDIR {path} or A:\> CD {path}

Examples : (Refer to the figure)

# 1. A:\>CD \NOS

This command makes the NOS sub-directory under the root directory (\) active.

# 2. A:\>CD \NOS\LETTERS
The backslash indicates the root, and LETTERS, which is a sub-directory under the NOS
directory, becomes the working directory.

# 3. A:\> CD \

The root directory becomes the working directory; i.e. you will change back to the root or
main directory. The slash typed in this command is a backslash (\). No matter which directory
you are in, this command always returns you to the root directory of a drive. The root directory
does not have a name, it is simply referred to by a backslash (\).

5.10 MAKING OR CREATING DIRECTORY

As the number of files increases in a disk, a need is felt to organize them in a meaningful way
by creating sub-directories to store a group of logically related/similar files.

To create a directory, DOS provides the MKDIR (Make Directory) or MD command.

Syntax:

A:\>MKDIR [drive:] {pathname} or A:\>MD [drive:] {pathname}

Square brackets indicate that [drive:] entry is optional.

The MD or MKDIR command creates a new empty directory whose name is the last item
specified in the pathname, in the specified drive. If active, the drive need not be specified. If
the directory is to be created as a sub-directory of the working directory on the active drive,
typing MD {directory name} at the DOS prompt or command prompt is sufficient.

Examples:

# 1. A:\> MD \ACCT\SALARY

makes a SALARY directory on the active (A:) drive, under the ACCT directory.

# 2. A:\> MD C:\SALARY

makes a SALARY directory on the C: drive, under the root directory.

5.11 DELETING A DIRECTORY

You may want to delete or remove a directory to simplify your directory structure. DOS
provides RD (Remove Directory) to delete a directory.
Example:

# 1. A:\> RD \ACCT\SALARY

removes the SALARY sub-directory in ACCT directory.

NOTE: You cannot delete a directory while you are in it. Before you can delete a directory,
you must type CD.. at the command prompt to move out of it. Also, the directory to be deleted
must be empty.

5.12 COPYING FILES

To copy a file, DOS provides the COPY command. When you use the COPY command, you must use the
following two parameters: the location and the name of the file you want to copy (the source),
and the location and file name to which you want to copy the file (the target, or
destination). You separate the source and the destination with a space. The syntax of the
COPY command is

COPY {source} {destination} or,

COPY [drive:] [path] [filename] [drive:] [path] [filename]

i.e. the first set of drive, path and filename refers to the source file, and the second set of
drive, path and filename

refers to the destination file.

(a) Copying Single File

To copy the DEBUG.EXE file from the DOS directory to the NOS directory:

1. Return to the root directory by typing the following at the command prompt: CD\

2. Change to the DOS directory by typing the following at the command prompt: CD DOS

3. To copy the DEBUG.EXE file from the DOS directory to the NOS directory, type the following
at the command prompt:

Copy c:\dos\debug.exe c:\nos

The following message appears: 1 file(s) copied

Examples:
# 1. A:\> copy a:\letter\office.doc \letter\office.bak

makes a copy of the office.doc file in the current or working directory with a new name
office.bak

# 2. A:\> copy office.doc a:\letters\nos.mem

copies the file office.doc from the root directory to the sub-directory LETTERS under the root
directory, with a new name, nos.mem.

If the target drive is not specified, the copied file will reside in the disk mounted on the active
drive.

5.13 USE OF WILDCARD CHARACTERS

If you want to carry out a task for a group of files whose names have something in common,
you can use wildcard characters to specify groups of files. DOS recognize two wildcard
characters: asterisk (*) represents one or more characters that a group of files has in common;
and the question mark (?) represents a single character that a group of files has in common.
You can use wildcards to replace all or part of a file's name or its extension. The following table
shows examples of wildcards:

Wildcard   What it Represents                                       Examples

*.TXT      All files with a .TXT extension                          JULY.TXT, LETTER.TXT,
                                                                    REPORT.TXT

REPORT.*   All files named REPORT with any extension                REPORT.LET, REPORT.WRI

M*.*       All files beginning with the letter M, regardless of     MEMO.TXT, MARCH.XLI
           their extension

???.*      All files having 3-letter names, with any or no          SUN.BMP, WIN.LET
           extension

You can include the wildcard in the command.
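For instance, the patterns from the table above can be tried out safely with the DIR command, which only lists the matching files (the sketch assumes such files exist in the working directory):

```bat
REM List all files with a .TXT extension
DIR *.TXT

REM List all files named REPORT, whatever the extension
DIR REPORT.*

REM List all files beginning with the letter M
DIR M*.*

REM List all files with three-letter primary names
DIR ???.*
```

Because DIR makes no changes to the disk, it is a convenient way to preview exactly which files a wildcard will match before using the same pattern with COPY or DEL.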

Use of wildcard characters in COPY command


# 1. A:\>COPY \letters\*.COB B:

It means: copy all files with the extension .COB from the directory LETTERS under the ROOT
directory to the working or ROOT directory of the `B' drive.

# 2. A:\> COPY B:\COMPANY\OPEL.*

The command is to copy all files with the primary name OPEL (irrespective of extension) from
the directory COMPANY under the ROOT of drive `B' into the current working directory of the
disk mounted in the `A' drive. In case of a single-drive system, the system will ask for the
source and target drives.

The command,

#3.A:\>COPY C:\*.*

copies all files of the ROOT directory of the 'C' drive into the working directory of the 'A' drive.

# 4. A:\> COPY LETTE?.* B:

copies all files whose primary name is LETTE followed by at most one additional character
(irrespective of extension name) into drive `B'.

# 5. A:\> COPY B:\?.DOC

copies all files having a primary name of one character with an extension .DOC from ROOT
directory of 'B' to the ROOT directory of `A' drive.

5.14 RENAMING FILES

To rename a file, DOS provides the REN command, which stands for "Rename". When you use
the REN command, you must include two parameters: the first is the file you want to rename,
and the second is the new name for the file. You separate the two names with a space. The
REN command follows this pattern:

REN oldname newname

Example: REN NOS.DOC NOS.MEM

renames the file NOS.DOC to the new filename NOS.MEM.
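REN also accepts wildcard characters, so a whole group of files can be renamed in one command. A sketch (it assumes some .DOC files exist in the working directory):

```bat
REM Rename every .DOC file to the same primary name with a .BAK extension
REN *.DOC *.BAK
```

After this command, a file such as NOS.DOC becomes NOS.BAK; only the extension changes because the primary-name part of both patterns is *.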


5.15 DELETING FILES

This section explains how to delete or remove a file that is no longer required on the disk. DOS
provides the DEL command, which means delete.

Syntax : DEL [drive:] [path] [filename]

Example:

# 1. DEL \DOS\EDIT.HLP

deletes the file EDIT.HLP from the DOS directory under the ROOT directory.
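Like COPY and REN, DEL accepts wildcard characters, so it should be used with care:

```bat
REM Delete every file with a .BAK extension in the working directory
DEL *.BAK

REM DEL *.* removes every file in the directory;
REM DOS asks for confirmation before proceeding
DEL *.*
```

It is a good habit to run DIR with the same wildcard pattern first, to see exactly which files will be deleted.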

5.16 PRINTING A FILE

The `PRINT' command of DOS works more or less like the `TYPE' command, but in addition it
enables the content of a text file to be printed on paper.

Syntax:

A:\> PRINT [drive:] [path] [filename]

Example:

A:\> PRINT \AIAET\LETTER\AIAET.LET


IN-TEXT QUESTIONS 5.1

1. The startup routine that runs when the machine boots up is known as


a. POST
b. BOOT up
c. Operating Routine
d. I/O operation
2. Operating system is also known as:
a. database
b. system software
c. hardware
d. printer
3. What is the maximum length allowed for primary name of a computer file under DOS?
a. 8
b. 12
c. 3
d. None of the above
4. Which of the following could be a valid DOS file specification?
a. NOSFILE.POST
b. NOSFILE.P.OST
c. NOSFILE.DOC
d. NOST.FILEDOC
5. How many characters form a secondary name for a file?
a. 8
b. 12
c. 3
d. None of the above
6. What is the name given to something that the computer will automatically use unless
you tell it otherwise?
a. a specification
b. a wildcard
c. a default
d. a rule
7. As per symbolic notation of DOS, which of the following indicates the ROOT directory
a. *
b. >
c. /
d. None of the above
8. In wildcard specification `?' is used as a replacement for
a. one character
b. two characters
c. three characters
d. none of the above
9. With DOS, you may use the `*' and `?':
a. when changing the default settings
b. to represent unspecified characters in a filename
c. instead of wildcard characters
d. in the extension but not in the drive name or the file name
10. DOS system file consists of
a. IBMBIO.COM, IBMDOS.COM, COMMAND.COM
b. COMMAND.COM, IBMBIO.COM, FORMAT.COM
c. SYS.COM,IBMBIO.COM,IBMDOS.COM
d. None of the above
11. The batch file uses the extension
a. .BAT
b. .DOC
c. .PRG
d. .DOS
12. To display the list of all the files on the disk you would type
a. DIR
b. COPY
c. DIR FILES
d. DIR AUTOEXEC.BAT
13. State whether the following statements are True (T) or False (F).
a. COMMAND.COM is a hidden file.
b. Primary name of a file can be of 10 characters.
c. The commands MKDIR and MD perform the same task.
d. Under DOS .EXE is not an executable file.
e. DIR command is used to see the content of a specific file.


5.17 WHAT YOU HAVE LEARNT

In this lesson you were introduced to one of the most popular desktop operating systems and its
working environment. It explained the directory structure and file naming conventions. It also
talked at length about file management in terms of COPY, DEL, and MOVE. Here you learned
the steps involved in loading an operating system into the computer.

5.18 TERMINAL QUESTIONS

1. Explain in brief what you understand by an operating system.


2. Explain the process involved in loading an operating system.

5.19 FEEDBACK TO IN-TEXT QUESTIONS

1. b.
2. b.
3. a.
4. b.
5. b
6. c.
7. d.
8. a.
9. b.
10. a.
11. a
12. a.
13. True OR False
a. False
b. False
c. True
d. False
e. False


DOS Lesson 3: Internal Commands

Commands

Once your computer has completed the boot process (if you haven’t
done so yet, please review Lesson 2 for a discussion of the boot process), the
computer is sitting there waiting for you to do something. The way you do anything
in DOS is through commands
that the computer understands. A command may cause the computer to take some
action, or to execute some file. We’ll leave most of the file execution discussion
for another lesson, and for now focus on the topic of internal commands.

Internal vs. External Commands

As you will recall from Lesson 2, DOS comes with a built-in command
interpreter called COMMAND.COM. This file is loaded during the boot process,
which means that COMMAND.COM is resident in memory at all times, and the commands
that it understands are available to the user at all times. Not all DOS commands
are understood by COMMAND.COM. There are commands called external commands
that reside in separate files on your hard drive, and must be called specifically
for you to use them. Why is this?

One of the biggest reasons has to do with the limitations of how DOS handles memory. A full
discussion of this topic will come in a later lesson, but for now it is enough to know that DOS
could only address a very limited amount of memory (1 MB total), and that programs were very
quickly bumping up against the constraint of available memory. Since COMMAND.COM is loaded into
memory at the beginning of the boot process, and stays resident in memory at
all times, it would not make sense to load commands that you would only use
infrequently, or to load commands that only certain users would ever need. So
these commands were placed in external files where they could be accessed if
needed. If you look in your DOS directory on your hard drive (usually C:\DOS),
you will see these external commands represented by files that are either *.EXE
or *.COM files. You won’t see the internal commands here, though, because these
commands are all contained within COMMAND.COM.

If you have created a DOS boot disk, which I recommend highly to anyone who will be working
with computers, it will contain three files, as we discussed in Lesson 2: IO.SYS, MSDOS.SYS,
and COMMAND.COM. The first two files are hidden, so you won't see them in a DOS dir command
normally. But if you examine a boot disk in Windows 95's Windows Explorer, and set it
to display all files (in Windows Explorer, select View, Options, and select
Show All Files), you will see them there. These files are located in specific
places on the disk. The third file, COMMAND.COM, must be in the root directory.
Since it is on the boot disk (you cannot boot without it), that means that the
commands it contains are available to you when you boot from this disk. The
reason any well-prepared computer person has a DOS boot disk handy at all times
is that a problem on the hard drive may render the computer unbootable. Booting
from a DOS boot disk and using the commands available to you in COMMAND.COM
may enable you to diagnose and fix the problem.

The other major reason why someone might want to create a DOS
boot disk these days is to run legacy DOS software that has problems running
with more current operating systems like Windows 95, 98, and NT.

One last note regarding internal commands. The internal commands contained within
COMMAND.COM are the commands that are used in writing batch
files. We will discuss batch files more in a future installment, but one
consequence is worth noting here: the batch file will not run properly if it
cannot find COMMAND.COM. Normally this ought to be handled by the path
command, but if you ever have problems getting a batch file to run, try putting
a copy of COMMAND.COM in the same directory as the batch file. This often gets
the batch file to run perfectly.
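A batch file is simply a text file of commands that COMMAND.COM executes one line at a time. A minimal sketch using only internal commands from the list below (the behaviour assumes MS-DOS-style COMMAND.COM; ECHO, CLS, VER, DIR, and PAUSE are all internal):

```bat
@ECHO OFF
REM Every command in this file is internal to COMMAND.COM,
REM so no external *.EXE or *.COM files are needed to run it.
CLS
ECHO This batch file uses only internal commands.
VER
DIR *.BAT
PAUSE
```

The leading @ECHO OFF stops DOS from echoing each command to the screen before running it, which is why most batch files begin with that line.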

List of Internal Commands

Here are all of the 62 Internal Commands contained within the COMMAND.COM command
interpreter:

break        buffers      call         cd
chcp         chdir        choice       cls
copy         country      ctty         date
del          device       devicehigh   dir
dos          drivparm     echo         erase
errorlevel   exist        exit         fcbs
files        for          goto         if
include      install      lastdrive    lh
loadfix      loadhigh     md           menucolor
menudefault  menuitem     mkdir        move
not          numlock      path         pause
prompt       rd           rem          ren
rename       rmdir        set          shell
shift        stacks       submenu      switches
time         truename     type         ver
verify       vol

Some of these internal commands (e.g. dir, cd) are meant to be executed from
the command line, or within a batch file, which is what you usually think of
as a command. Others (e.g. files, switches) are generally used within a configuration
file like CONFIG.SYS to help configure your system. Because both CONFIG.SYS
and AUTOEXEC.BAT use commands that are found in COMMAND.COM, they must load
later in the boot process. So if you were wondering why things happen in that
specific order in the boot process (discussed last week), now you know.
